Consolidating Databases on Oracle Exadata: Key Learnings at Intel
Gagan Singh • Intel Corporation
Subhadra Sampathkumaran • Intel Corporation
James Harding • Oracle America
Agenda
• Intel – Database Environment Overview
• Legacy Environment Overview & Configuration
• Limitations of Legacy Architecture
• Goals of Migration to Exadata
• Proof of Concept
• Exadata Solution Architecture
• Value
• Key Learnings & Challenges
• Summary
Intel – Database Environment Overview
• Highly automated factories with hundreds of complex integrated systems
• Goals include yield analysis, process improvement, failure-mode analysis, and test-time reduction
• Geographically distributed, independent systems
• Monitoring and availability are key
• 24x7 uptime
• Strict reporting SLAs
• Decision Support (DSS) and OLTP setup
Legacy Environment Overview
• DB size ranges from a few GB to ~80 TB per site (6-month retention)
• Large data growth projected
• Reliability, availability, and performance are high priorities
• Application tier includes 3rd-party products and in-house apps
• Robust backup and recovery, though lacking in performance
• Long MTTR for disaster recovery
• Monitoring – Oracle Grid Control 10.2.x
Legacy Configuration
• Each site hosts an independent RAC with SAN storage
• Database: Oracle 10.2.x on Windows 2003 x64
• GigE with jumbo frames for the interconnect
• >80 TB on ASM external redundancy
• RMAN incrementally updated backups to the FRA, then to tape
• Application tier – OCI, ODP.Net, OLE DB, ODBC
• Storage: EMC DMX-4, 14 racks
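The legacy backup scheme above (RMAN incremental backups merged into an image copy in the FRA) is what Oracle calls incrementally updated backups; a minimal daily script might look like the following sketch, where the tag name is illustrative:

```sql
-- Daily RMAN script for incrementally updated backups (tag is illustrative).
-- The level-1 incremental is merged into the level-0 image copy in the FRA,
-- so recovery can SWITCH to the copy instead of restoring from tape.
RUN {
  RECOVER COPY OF DATABASE WITH TAG 'fra_merge';
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'fra_merge'
    DATABASE;
}
```

On the first run there is no copy yet, so the BACKUP command creates the initial level-0 image copy; subsequent runs roll the copy forward by one day.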
Exadata Proof of Concept Requirements
• Run data warehouse queries with at least a 2x improvement
• Run OLTP read (small queries) and read-write (loader) workloads with a 40% performance improvement
• Achieve a 5x reduction in data size using Advanced and Hybrid Columnar Compression
• Demonstrate a 2x backup and restore improvement
• Meet or beat current RAP (reliability, availability, performance) targets as defined by Intel
Executing the Proof of Concept – Less Risk, Better Results
• Validate the success criteria:
• Performance: data warehouse queries, data loaders
• Data compression
• Backup/recovery with ZFS Storage Appliance
• Reliability, availability, and performance
• Exadata/ZFS pre-delivery process
• Exadata/ZFS delivery ("ready-to-run")
• Data migration
• Execute the test plan and capture data
Exadata – Performance
• RMAN backup: disk backup rate increased to 8 TB/hour; restore at 7.2 TB/hour; write-back flash enabled
• Availability: application not impacted under various failure scenarios
• Storage reduction: achieved by index reduction and data compression
• Query performance: 2x improvement; application tuned to leverage 11.2 DB features
• Compression: up to 10x with HCC, 5x typical; OLTP compression for current data, HCC (Query High) for archival data
• Data loading times: faster by 40%
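The two-tier compression scheme above (OLTP compression for current data, HCC Query High for archival data) could be expressed as DDL along these lines; the table, column, and partition names are hypothetical:

```sql
-- Hypothetical range-partitioned fact table: the current partition uses OLTP
-- (advanced row) compression, older partitions use Hybrid Columnar Compression.
CREATE TABLE test_results (
  lot_id    NUMBER,
  test_date DATE,
  reading   NUMBER
)
PARTITION BY RANGE (test_date) (
  PARTITION p_archive VALUES LESS THAN (DATE '2013-01-01')
    COMPRESS FOR QUERY HIGH,   -- archival tier: HCC
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
    COMPRESS FOR OLTP          -- current tier: advanced row compression
);

-- As a partition ages out of the active window, rebuild it with HCC:
ALTER TABLE test_results MOVE PARTITION p_current COMPRESS FOR QUERY HIGH;
```

Note that HCC compresses data loaded via direct-path operations, which fits the archival tier; rows arriving through conventional OLTP inserts stay in the row-compressed current partition.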
Exadata – Performance (contd.)
• Resource management: DBRM & IORM configured; high, medium, and low consumer groups defined; resource limits for CPU, I/O, and parallelism configured for each group; users categorized into consumer groups based on the services they connect through
• Monitoring & tuning: Oracle 12c Grid Control; SQL Monitoring used extensively for tuning queries
• Efficient setup and startup time: HW/OS/storage/DB set up to best practices; ~70% less time than conventional infrastructure startup
• Support: leverages Platinum support; ~50% reduction in infrastructure + DBA operational calls
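A minimal DBRM setup along the lines described above (high/medium/low consumer groups, CPU and parallelism limits, mapping by connecting service) might be sketched with the DBMS_RESOURCE_MANAGER package; all plan, group, and service names here are illustrative, and grants are omitted:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan => 'SITE_PLAN', comment => 'Consolidated workload plan');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('HIGH_GRP',   'Critical reporting');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('MEDIUM_GRP', 'Standard users');

  -- CPU shares and parallel-degree caps per group; OTHER_GROUPS is mandatory
  -- and catches everything not explicitly mapped (the "low" tier here).
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'SITE_PLAN', group_or_subplan => 'HIGH_GRP',
    comment => 'high', mgmt_p1 => 70, parallel_degree_limit_p1 => 16);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'SITE_PLAN', group_or_subplan => 'MEDIUM_GRP',
    comment => 'medium', mgmt_p1 => 20, parallel_degree_limit_p1 => 8);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'SITE_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'low / ad hoc', mgmt_p1 => 10, parallel_degree_limit_p1 => 2);

  -- Categorize users by the service they connect through:
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    DBMS_RESOURCE_MANAGER.SERVICE_NAME, 'RPT_SVC', 'HIGH_GRP');

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/
```

IORM itself is configured separately on the Exadata storage cells (via CellCLI); the DBRM consumer groups above are what the cells use to prioritize I/O.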
Application Changes – Key Learnings
• Leverage 11g R2 features
• Tuning queries from 10g to 11g – used 12c SQL Monitoring
• Changes for Exadata: Smart Scan; compression (OLTP for most recent data, HCC for older data); minimized indexes to enable offload to storage
• Optimizer: parallelization & partitioning
• Globalization with GMT: date/time values stored in TIMESTAMP WITH LOCAL TIME ZONE columns; retrieved based on client time zone settings – managed using services and a logon trigger
• Centralized DB: site-specific schemas; one global core schema for common functionality
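The globalization approach above (store in TIMESTAMP WITH LOCAL TIME ZONE, render in each client's zone, keyed to the connecting service via a logon trigger) might be sketched as follows; the table, trigger, and service names are hypothetical:

```sql
-- Values are normalized to the database time zone on insert and converted
-- to the session time zone on retrieval.
CREATE TABLE lot_events (
  lot_id     NUMBER,
  event_time TIMESTAMP WITH LOCAL TIME ZONE
);

-- Hypothetical logon trigger: derive the session time zone from the
-- service the client connected through.
CREATE OR REPLACE TRIGGER set_site_timezone
AFTER LOGON ON DATABASE
BEGIN
  IF SYS_CONTEXT('USERENV', 'SERVICE_NAME') LIKE 'FAB_IE%' THEN
    EXECUTE IMMEDIATE q'[ALTER SESSION SET TIME_ZONE = 'Europe/Dublin']';
  ELSIF SYS_CONTEXT('USERENV', 'SERVICE_NAME') LIKE 'FAB_OR%' THEN
    EXECUTE IMMEDIATE q'[ALTER SESSION SET TIME_ZONE = 'America/Los_Angeles']';
  END IF;
END;
/
```

With this in place the application never converts time zones itself: two users at different sites selecting the same `event_time` row see it rendered in their own local zone.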
Key Challenges
• Query performance: adapting from the 10g to the 11g optimizer plus Exadata-specific features; significant effort to tune queries, as the legacy system had embedded SQL hints
• Storage reduction: identifying indexes to remove – several still needed to support OLTP queries; testing with varying compression models
• Globalization using GMT: date/time from geographically distributed sources stored in the DB; geographically distributed users require data retrieved in their local time zone
• Centralization: large DB size – manageability; resource management for thousands of ad hoc users
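One low-risk way to run the index-reduction exercise described above is Oracle 11g's invisible indexes: hide a candidate from the optimizer, observe the workload, and only then drop it. The index name here is illustrative:

```sql
-- Hide the index from the optimizer without dropping it; queries that
-- previously used it become candidates for Smart Scan offload instead.
ALTER INDEX ix_results_lot_id INVISIBLE;

-- If an OLTP query regresses, re-enable it instantly:
ALTER INDEX ix_results_lot_id VISIBLE;

-- Once the workload is verified without it, reclaim the space for good:
DROP INDEX ix_results_lot_id;
```

This matters because dropping an index outright is expensive to undo (a full rebuild), whereas toggling visibility is a dictionary-only change.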
Key Challenges (contd.)
• Data migration – two approaches: as-is for certain existing data domains (load from raw source files into an empty schema); complete re-architecture (incrementally load historical data from the existing legacy systems)
• Data loading runs 24x7 – does not follow conventional DW batch loading
• Application cut-over: phased approach
Support
• Single-vendor support: one number to call for all components; no triage time – a single vendor for servers, storage, networking, OS, and DB
• Platinum Services: Platinum Gateway; quarterly rolling patching; automated service requests; Grid Control monitoring
• Single patch set: 40% savings in patching time
• Single application validation – reduced time for application validation