This presentation explores the features of the next generation of the Oracle Database and its impact on DBAs. Topics include multi-tenancy, SQL plan management, data optimization, data movement, and policy-based redaction.
An Exploration of the Next Generation of the Oracle Database • Session #134 • Jim Czuprynski • Zero Defect Computing, Inc. • April 11, 2013
My Credentials • 30+ years of database-centric IT experience • Oracle DBA since 2001 • Oracle 9i, 10g, 11g OCP • ~ 100 articles on databasejournal.com and ioug.org • Teach core Oracle DBA courses (G/I+RAC, Exadata, Performance Tuning, Data Guard) • 2009: Oracle Education Partner Trainer of the Year • Speaker at Oracle OpenWorld, IOUG COLLABORATE11, and OUG Norway 2013 • Oracle-centric blog (Generally, It Depends)
Disclaimer ALL FEATURES DESCRIBED IN THIS PRESENTATION – EVEN THOSE THAT MAY EVENTUALLY COME TO FRUITION – ARE ENTIRELY THE OPINIONS OF THE PRESENTER. THEY MAY OR MAY NOT REPRESENT ANY ACTUAL FEATURES OF THE NEXT RELEASE OF THE ORACLE DATABASE. ANY ACCIDENTAL ASSOCIATION OF THE TWELFTH CARDINAL NUMBER AND THE THIRD LETTER OF THE LATIN ALPHABET IS PURELY COINCIDENTAL. DUE TO ITS CONTENT THIS PRESENTATION SHOULD NOT BE VIEWED BY ANYONE
Our Agenda • DBA 3.0: Consolidate or Perish! • Multi-Tenancy Databases: CDBs and PDBs • SQL Plan Management Upgrades • Information Lifecycle Management • Automatic Data Optimization • Online Datafile and Partition Movement • Policy-Based Automatic Redaction • Q+A
DBA 3.0: Consolidate or Perish! "You'll just have to do more with less." – Our CIOs, 2013 "Less is more." – Ludwig Mies van der Rohe, c. 1950 • Engineered systems are the new 800-pound gorillas • "Testing in production" is considered blasphemy… • …but DBAs must be able to guarantee excellent application performance before rollout of new releases • Resource consolidation is the new reality • Earlier releases used instance caging, DBRM I/O management, and Exadata IORM to enforce it • The new database release offers several excellent, intrinsic consolidation features
Multi-Tenancy: CDBs and PDBs The next database release offers a completely new multi-tenancy architecture for databases and instances: • A Container Database (CDB) comprises one or more Pluggable Databases (PDBs) • CDBs are databases that contain common elements shared with PDBs • PDBs are comparable to traditional databases in prior releases… • …but PDBs offer extreme flexibility for cloning, upgrading, and application workload localization
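As a rough illustration of the new container model, the sketch below shows how a DBA might confirm the current container and list the PDBs plugged into a CDB. The V$PDBS view, the SQL*Plus SHOW CON_NAME command, and the CDB1 service name are assumptions drawn from the presenter's description of the feature, not confirmed syntax for the new release.

    -- Connect to the root of the container database (service name CDB1 is illustrative)
    -- and confirm which container the session is currently attached to.
    CONNECT sys@CDB1 AS SYSDBA
    SHOW CON_NAME

    -- List the pluggable databases known to this CDB along with their open mode.
    SELECT con_id, name, open_mode
      FROM v$pdbs
     ORDER BY con_id;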
CDBs and Common Objects CDBs and PDBs share common objects • A CDB owns in common: control files and SPFILE; online and archived redo logs; backup sets and image copies • Each CDB has one SYSTEM, SYSAUX, UNDO, and TEMP tablespace • Oracle-supplied data dictionary objects, users, and roles are shared globally between the CDB and all PDBs [Diagram: CDB1 shares its SPFILE, control files, SYSTEM, SYSAUX, UNDOTBS1, and TEMP tablespaces, online and archived redo logs, backups, image copies, data dictionary, users, and roles with PDB1, PDB2, and PDB3]
PDBs and Local Objects PDBs also own local objects • PDBs have a local SYSTEM and SYSAUX tablespace • PDBs may have their own local TEMP tablespace • PDBs can own one or more application schemas: local tablespaces, local users and roles • PDBs own all application objects within their schemas • By default, PDBs can only see their own objects [Diagram: within CDB1, PDB1, PDB2, and PDB3 each hold their own SYSTEM, SYSAUX, and TEMP tablespaces plus local AP, MFG, and HR schemas, roles (AP_ROLE, MFG_ROLE, HR_ROLE), and data tablespaces (AP_DATA, MFG_DATA, HR_DATA)]
Shared Memory and Processes CDBs and PDBs also share common memory and background processes • All PDBs share the same SGA and PGA • All PDBs share the same background processes • OLTP: intense random reads and writes (DBWn and LGWR) • DW/DSS: intense sequential reads and/or logical I/O • Batch and data loading: intense sequential physical reads and physical writes [Diagram: OLTP, DW/DSS, and batch/data-loading workloads in PDB1, PDB2, and PDB3 all draw on CDB1's shared SGA, PGA, and background processes (DBWn, LGWR, and others) against system storage]
Sharing: It’s a Good Thing! Sharing common resources - when it makes sense - tends to reduce contention as well as needless resource over-allocation: • Not all PDBs demand high CPU cycles • Not all PDBs have same memory demands • Not all PDBs have same I/O bandwidth needs • DSS/DW: MBPS • OLTP: IOPS and Latency Result: More instances with less hardware
PDBs: Ultra-Fast Provisioning Four ways to provision PDBs: • Clone from PDB$SEED • Clone from an existing PDB • "Replug" a previously "unplugged" PDB • Plug in a non-CDB (e.g. an 11gR2 database) as a new PDB • The CDB and its other PDBs stay alive during any of these operations!
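A minimal sketch of the provisioning paths listed above, assuming CREATE PLUGGABLE DATABASE syntax along these lines; all database names, passwords, and file paths are illustrative rather than documented values.

    -- 1) Clone a brand-new PDB from the seed.
    CREATE PLUGGABLE DATABASE hr_pdb
      ADMIN USER hr_adm IDENTIFIED BY oracle_4U
      FILE_NAME_CONVERT = ('/u01/oradata/pdbseed/', '/u01/oradata/hr_pdb/');

    -- 2) Clone an existing PDB.
    CREATE PLUGGABLE DATABASE hr_pdb_test FROM hr_pdb
      FILE_NAME_CONVERT = ('/u01/oradata/hr_pdb/', '/u01/oradata/hr_pdb_test/');

    -- 3) Unplug a PDB into an XML manifest, then replug it later.
    ALTER PLUGGABLE DATABASE hr_pdb CLOSE IMMEDIATE;
    ALTER PLUGGABLE DATABASE hr_pdb UNPLUG INTO '/u01/stage/hr_pdb.xml';
    CREATE PLUGGABLE DATABASE hr_pdb USING '/u01/stage/hr_pdb.xml' NOCOPY;

    -- 4) A non-CDB (e.g. an 11gR2 database) could be plugged in the same way
    --    once a manifest describing it has been generated.

    -- A newly provisioned PDB must be opened before use.
    ALTER PLUGGABLE DATABASE hr_pdb OPEN;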
XStream: Extreme Replication As of this new release, Oracle Streams is deprecated in favor of XStream • The XStream Outbound Server handles change data capture (CDC) sourced directly from PDBs' committed transactions • XStream Inbound Server(s) replicate CDC to any PDB(s) (but never to a CDB) • Because it's based on Oracle GoldenGate, XStream may require licensing of additional OGG features
SQL on Autopilot: Adaptive SQL Efficiency, Performance, and Tuning
Adaptive SQL Plan Management Automatic Plan Evolution (APE) is now available via the DBMS_SPM package • By default, a new automatic task runs during the regular maintenance window • Auto-evolution of all non-accepted plans (NAPs): • Most recently added plans get precedence • NAPs that still perform poorly: wait 30 days • Any NAPs that perform better are automatically enabled • A new SPM report procedure shows the results of Automatic Plan Evolution
SPM Evolve Advisor In prior releases: • All SQL plan evolution had to be performed manually • Gathering SPM advice on whether evolution was even possible was a manual process In this new release: • Automatic SQL Plan Evolution tasks are included • Manual advice and implementation are also supported via new DBMS_SPM procedures (see the sketch below) • Warning! Tuning Pack licensing may be required
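For the manual path, here is a sketch of how an evolve task might be created, executed, and reported through DBMS_SPM. The task-based procedure names (CREATE_EVOLVE_TASK, EXECUTE_EVOLVE_TASK, REPORT_EVOLVE_TASK) and the SQL handle value are assumptions about the new release's API.

    -- Evolve any non-accepted plans for one SQL handle and print the report.
    -- (Run with SET SERVEROUTPUT ON; the handle value shown is illustrative.)
    DECLARE
      l_task VARCHAR2(128);
      l_exec VARCHAR2(128);
      l_rpt  CLOB;
    BEGIN
      l_task := DBMS_SPM.CREATE_EVOLVE_TASK(sql_handle => 'SQL_abc1234567890def');
      l_exec := DBMS_SPM.EXECUTE_EVOLVE_TASK(task_name => l_task);
      l_rpt  := DBMS_SPM.REPORT_EVOLVE_TASK(task_name      => l_task,
                                            execution_name => l_exec);
      DBMS_OUTPUT.PUT_LINE(SUBSTR(l_rpt, 1, 4000));
    END;
    /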
Cardinality Feedback Cardinality Feedback was introduced in Oracle 11gR2 as part of Adaptive Cursor Sharing: • Captures actual execution statistics during query execution • Compares expected vs. actual cardinality during the first execution of a query • During the second execution, the optimizer uses actual execution statistics to reparse the statement's plan • Works best for non-skewed row sources with limited volatility
Adaptive Execution Plans (AEP) The optimizer can now adaptively recognize and capture multiple potential execution sub-plans within an existing execution plan: • AEP constructs dynamic plans automatically • The AEP dynamic statistics collector buffers each row set • If a new row count exceeds prior counts during statement execution, the optimizer will choose an alternative favored sub-plan (e.g. HASH JOIN instead of NESTED LOOP) • Otherwise, AEP will utilize the original sub-plan • Largest AEP benefit: sub-plans whose row sets contain dramatically skewed data
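A sketch of how the chosen and discarded sub-plans might be inspected after execution; the 'ADAPTIVE' format keyword for DBMS_XPLAN.DISPLAY_CURSOR is an assumption about the new release, and the EMP/DEPT table names are illustrative.

    -- Execute a join whose actual row counts may differ sharply from the estimates...
    SELECT /*+ GATHER_PLAN_STATISTICS */ e.ename, d.dname
      FROM emp e JOIN dept d ON d.deptno = e.deptno;

    -- ...then display the plan for the last cursor, including the sub-plan
    -- operations the optimizer considered but did not use.
    SELECT *
      FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ADAPTIVE'));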
Automatic Re-Optimization (ARO) For some statements, ARO features may help to overcome intrinsic limitations of AEP dynamic plans: • The optimizer discovers an inefficiency during a statement’s first execution that AEP cannot resolve (e.g. order in which row sets are joined) • During the next execution, the optimizer gathers additional statistics to improve the join order • All subsequent executions of the same statement improve as more execution statistics and optimizer statistics are gathered
SQL Plan Directives The latest release offers the capability to capture and retain compilation and execution statistics within the data dictionary: • Before, a statement's compilation and execution statistics were retained only within the Shared Pool • Now these statistics will be retained within the data dictionary instead, as SQL Plan Directives (SPDs) • SPDs are not SQL statement specific! • They pertain to the best methods to process row sets • Therefore, multiple future queries may benefit • DBMS_XPLAN.DISPLAY … with the +NOTE format option tells if an SPD has been used against an existing SQL statement • New data dictionary views capture SPD metadata
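A sketch of how that SPD metadata might be queried once it lands in the data dictionary; the DBA_SQL_PLAN_DIRECTIVES and DBA_SQL_PLAN_DIR_OBJECTS view names, their columns, and the HR schema filter are assumptions.

    -- List captured directives and the schema objects they relate to.
    SELECT d.directive_id, o.owner, o.object_name, o.object_type,
           d.type, d.state, d.reason
      FROM dba_sql_plan_directives  d
      JOIN dba_sql_plan_dir_objects o
        ON o.directive_id = d.directive_id
     WHERE o.owner = 'HR'
     ORDER BY d.directive_id;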
Information Lifecycle Management (ILM) and Automatic Data Optimization (ADO)
Automatic Data Optimization (ADO) ADO offers capability to move and/or compress data based on observed usage patterns • Uses heat maps to determine how often data has been accessed • Tracks exactly how data has been utilized (DML vs. query, single-block vs. table scan) • Data usage patterns can be tracked at tablespace, segment, and row level
Heat Maps: "How Hot Is It?" ADO leverages heat maps to: • Capture data usage frequencies • Determine which compression level is most appropriate for how data is being used • Determine which data could be moved from a high-performance storage tier to a lower-performance tier • Decide when data should be moved between different tablespaces or partitions to limit possible out-of-space conditions
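A sketch of how heat map tracking might be switched on and reviewed; the HEAT_MAP initialization parameter and the DBA_HEAT_MAP_SEGMENT view (and its columns) are assumptions about the new release, and the HR schema is illustrative.

    -- Enable heat map tracking instance-wide.
    ALTER SYSTEM SET HEAT_MAP = ON;

    -- Review the most recent read, write, and full-scan times per segment.
    SELECT object_name, subobject_name,
           segment_write_time, segment_read_time, full_scan
      FROM dba_heat_map_segment
     WHERE owner = 'HR'
     ORDER BY object_name;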
Heat Maps: An Example • Heat map shows heavy DML and queries: leave data uncompressed • After 3 days of more limited access: enable ADVANCED compression • After 30 days of only limited access: enable HCC QUERY LOW* compression • After 90 days of no access: enable HCC ARCHIVE HIGH* compression * Will probably require Exadata, ZFS Appliance, or Pillar Axiom storage
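The timeline above might translate into ADO policies along the following lines; the ILM ADD POLICY syntax and the compression clause names are assumptions about the new release, and the HR.ORDERS table is illustrative.

    -- After 3 days without modification: move to ADVANCED row compression.
    ALTER TABLE hr.orders ILM ADD POLICY
      ROW STORE COMPRESS ADVANCED SEGMENT
      AFTER 3 DAYS OF NO MODIFICATION;

    -- After 30 days without modification: move to HCC QUERY LOW.
    ALTER TABLE hr.orders ILM ADD POLICY
      COLUMN STORE COMPRESS FOR QUERY LOW SEGMENT
      AFTER 30 DAYS OF NO MODIFICATION;

    -- After 90 days without any access: move to HCC ARCHIVE HIGH.
    ALTER TABLE hr.orders ILM ADD POLICY
      COLUMN STORE COMPRESS FOR ARCHIVE HIGH SEGMENT
      AFTER 90 DAYS OF NO ACCESS;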
Information Lifecycle Management Information Lifecycle Management (ILM): • Offers the ability to track the effectiveness of ADO policies • Measures how much data has effectively been migrated to different storage levels based on the ADO policies currently in place • Avoids counting normal maintenance tasks (e.g. gathering optimizer statistics) as ILM activity, since those accesses can safely be ignored
In-Database Archiving (IDA) Avoids unnecessary deletion of rows when they no longer contain valid data • Activated via the new ROW ARCHIVAL attribute of a data segment • During the initial INSERT, each row's state is set to the default value of zero (0) in the hidden ORA_ARCHIVE_STATE column • Rows can be marked as inactive by setting ORA_ARCHIVE_STATE to one (1) • Unless the ORA_ARCHIVE_STATE column is mentioned in a query, a row's IDA status is invisible and only active rows will be returned to the query • Inactive rows can be compressed!
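A sketch of In-Database Archiving in action, assuming the ROW ARCHIVAL clause, the hidden ORA_ARCHIVE_STATE column, and the ROW ARCHIVAL VISIBILITY session setting behave roughly as described above; the table and column names are illustrative.

    -- Create a table with row-level archiving enabled.
    CREATE TABLE hr.invoices (
      invoice_id NUMBER PRIMARY KEY,
      status     VARCHAR2(10)
    ) ROW ARCHIVAL;

    -- Mark closed invoices as inactive instead of deleting them.
    UPDATE hr.invoices
       SET ora_archive_state = '1'
     WHERE status = 'CLOSED';

    -- Queries now return only active rows; flip session visibility to see all rows.
    ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;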
Temporal Validity (TV) TV allows specification of time periods when data stored within a table will actually be considered "valid" • Implemented through the new PERIOD FOR table attribute • The specified TV dimension can then be populated as desired with both time-valid and non-time-valid data • Non-time-valid data can be compressed until it's no longer needed (or until it becomes needed again!)
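A sketch of Temporal Validity, assuming a PERIOD FOR clause and an AS OF PERIOD FOR query syntax along these lines; the table and column names are illustrative.

    -- Declare a validity period over two timestamp columns.
    CREATE TABLE hr.contracts (
      contract_id NUMBER,
      valid_from  TIMESTAMP,
      valid_to    TIMESTAMP,
      PERIOD FOR valid_time (valid_from, valid_to)
    );

    -- Return only the rows that are valid right now.
    SELECT contract_id
      FROM hr.contracts AS OF PERIOD FOR valid_time SYSTIMESTAMP;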
Online Move Datafile (OMD) Online Move Datafile (OMD) offers the ability to: • Move any datafile to another storage system without first offlining it • Move any datafile from non-ASM to ASM storage • Move any datafile to a different compression level (e.g. uncompressed to OLTP or HCC) Implications: • Data objects within the datafile's tablespace are always accessible • DML and DDL against those data objects are never interrupted [Diagram: datafiles moving between traditional storage (SAN, EXT3, GFS, NTFS) and engineered-system ASM storage, and from uncompressed or Advanced Compression to Advanced or HCC compression]
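A sketch of Online Move Datafile, assuming an ALTER DATABASE MOVE DATAFILE command along these lines; the file paths and the +DATA ASM disk group name are illustrative.

    -- Relocate a datafile from file-system storage into ASM while it stays online.
    ALTER DATABASE MOVE DATAFILE '/u01/oradata/cdb1/hr_data01.dbf'
      TO '+DATA';

    -- Optionally retain the original copy after the move completes.
    ALTER DATABASE MOVE DATAFILE '/u01/oradata/cdb1/ap_data01.dbf'
      TO '+DATA' KEEP;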
Online Move Partition (OMP) Online Move Partition (OMP) offers the ability to: • Move, split, or merge partitioned objects without interrupting DML against data objects within those partitions • Migrate partitions from one compression level to another • Global and local indexes are still maintained … all without interrupting DML against most data objects within those partitions
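A sketch of an online partition move that also changes tablespace and compression level while maintaining indexes; the UPDATE INDEXES ONLINE clauses are assumptions about the new release, and the object names are illustrative.

    -- Move one partition to a lower-cost tablespace with ADVANCED compression,
    -- keeping global and local indexes usable and DML uninterrupted.
    ALTER TABLE hr.sales MOVE PARTITION sales_2012_q1
      TABLESPACE archive_ts
      ROW STORE COMPRESS ADVANCED
      UPDATE INDEXES ONLINE;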
Policy-Based Data Redaction Oracle Data Redaction (ODR) offers ability to: • Partially obscure or completely hide sensitive data • Implement recognized standards (PII, PHI, PCI) for confidential data • “Anonymize” data for development or QA purposes • Allow selected users to see all or just some of data values based on custom viewing policies • No changes to existing application code are required
ODR: Redaction Policies Redaction policies: • Are implemented via a new package (think: DBMS_FGA, DBMS_RLS) • Are only applied to outbound queried data • Are applied to one or more columns in a table or view • Can be joined together logically to provide a comprehensive redaction strategy
ODR: Redaction Types Redaction types enable ODR to: • Completely hide data (e.g. returned as empty string) • Partially obscure data (e.g. show only last 4 digits of SSN) • Completely obscure data by returning “anonymized” data values of same size / length • Provide custom obfuscation for specific data using regular expressions that vary conditionally
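A sketch of a partial-redaction policy that leaves only the last four digits of a Social Security Number visible to non-privileged users; the DBMS_REDACT package name, its parameters, and the HR.EMPLOYEES objects are assumptions about the new release.

    BEGIN
      DBMS_REDACT.ADD_POLICY(
        object_schema       => 'HR',
        object_name         => 'EMPLOYEES',
        policy_name         => 'redact_ssn',
        column_name         => 'SSN',
        function_type       => DBMS_REDACT.PARTIAL,
        -- Mask positions 1-5 of an NNN-NN-NNNN value with 'X'.
        function_parameters => 'VVVFVVFVVVV,VVV-VV-VVVV,X,1,5',
        -- Redact for everyone except the HR_ADMIN account.
        expression          => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') <> ''HR_ADMIN''');
    END;
    /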
ODR: Best Practices For optimal effectiveness: • Grant access to ODR procedures sparingly • Minimize the number of columns redacted • Consider whitelisting instead of blacklisting • Regular expressions take more resources to implement redaction than any other method, so use them sparingly
Thank You For Your Kind Attention Please feel free to evaluate this session: http://www.ioug.org/eval • Session #134 • An Exploration of the Next Generation of the Oracle Database • If you have any questions or comments, feel free to: • E-mail me at jczuprynski@zerodefectcomputing.com • Follow my blog (Generally, It Depends): http://jimczuprynski.wordpress.com • Follow me on Twitter (@jczuprynski) • Connect with me on LinkedIn (Jim Czuprynski)