Optimizing Sun ZFS Storage Appliances for Oracle DB 江岱祥, Senior Sales Consultant, Systems Business Unit
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
Program Agenda
• Introduction to NFS
• Performance Measurement Best Practices
• Oracle Direct NFS with Oracle Storage
• Oracle Direct NFS and HCC with Oracle Storage
• Oracle Remote Direct Memory Access (RDMA)
Oracle Storage Strategy
• Best standalone products
• Better when integrated
Introduction to NFS
• NFS stands for Network File System
• A distributed file system protocol, originally developed by Sun Microsystems in 1984 and built on RPC
• An open standard: anyone can implement it
• Secure data access anywhere in the enterprise
• Commonly implemented with TCP over IP; can also use RPC over RDMA
• Thanks to Moore's Law, the choice between block and file protocols is now a matter of convenience, not performance
NFS – Good at Everything
• Supports data access for any workload by any application
• Runs on nearly any hardware built since 1984
• Provides data services for simple applications: caching, read-ahead, and write-behind
NFS for the Oracle Database
• Database files: control files, data files, online redo logs
• Recovery files: archived redo logs, backup sets, image copies
• All hosted on NFS on the Sun ZFS Storage Appliance
Implementing Oracle with NFS
• Get the mount options right
• Get the instance tuning right
• Max out the number of concurrent RPC requests
• Open up the TCP buffer sizes to at least 4 MB
• Details on MOS: Document ID 1354980.1 (Sun Storage 7000 Unified Storage System FAQ) and Document ID 359515.1 (Mount Options for Oracle files with NFS); a client-side sketch follows below
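As a minimal sketch of these settings on a Linux client (the appliance hostname and paths here are made up; take the authoritative mount options for your platform and file type from MOS Document ID 359515.1):

    # Typical NFSv3 mount options for Oracle data files on Linux
    mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 \
        zfssa:/export/oradata /u02/oradata

    # Raise the concurrent RPC request limit and open TCP buffers to at least 4 MB
    sysctl -w sunrpc.tcp_slot_table_entries=128
    sysctl -w net.core.rmem_max=4194304
    sysctl -w net.core.wmem_max=4194304
    sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"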
Oracle NFS Block Diagram: I/O flow from the Oracle server to Sun ZFS Storage. The Oracle instance issues an I/O request to the kernel NFS client; RPC establishes the connection, TCP/IP ensures delivery across the network link (NIC to NIC), and the NFS server executes the I/O against the ZFS file system on the appliance.
Oracle NFS Instrumentation: each layer can be observed on both ends of the wire. At the Oracle instance layer, AWR on the server and Analytics on the appliance; at the NFS layer, nfsiostat and Analytics; at the RPC layer, mountstats on the client and mdb on the appliance; at the TCP/IP and NIC layers, netstat and sar on the client and Analytics on the appliance.
Performance Measurement Best Practices: Compare the Results and Find the Limit
• Measure the same workload at every layer (Oracle instance, operating system, storage system), compare the results, and ask which layer imposes the limit; a command sketch follows below
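A minimal sketch of the layer-by-layer comparison on a Linux client (the mount point is hypothetical; the commands are the standard tools named on the instrumentation slide above):

    # NFS layer: per-mount IOPS, throughput, and round-trip time every 5 seconds
    nfsiostat 5

    # RPC layer: per-mount RPC counts and transport statistics
    mountstats /u02/oradata

    # Network layer: per-interface throughput, errors, and drops
    sar -n DEV 5
    netstat -i

    # Compare these against the I/O waits in an AWR report (e.g. 'db file
    # sequential read') and against Analytics on the appliance side.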
Common NFS Design Limitations
• NFS and TCP may need tuning for the specific hardware stack: inefficient data transport and implementation trouble with multi-link systems
• The operating system RPC stack was not designed to queue many I/O requests: the connection bottleneck limits the data sent to the TCP layer
• Inefficient transport of I/O from the Oracle instance to the NFS client: the kernel buffer copy burns CPU and CPU-interconnect bandwidth
• Careful configuration is required for optimal performance: harder to deploy, expensive to maintain, error-prone
Oracle Direct NFS – Integrated NFS Client
• Tuned NFS and TCP
• Scalable RPC
• Optimized I/O transfer
• Horizontal network scaling
• Optimized for Oracle
Oracle Direct NFS Block Diagram: the same path as kernel NFS, except that I/O requests flow through the Direct NFS client built into the Oracle instance; RPC establishes the connection, TCP/IP ensures delivery across the network link (NIC to NIC), and the NFS server executes the I/O against the ZFS file system on the Sun ZFS Storage appliance.
Implementation Details with Oracle Direct NFS
• Link the Oracle Direct NFS library in the database home: make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on
• Configure the oranfstab file with the client and NFS server mount points, IP address paths, and routing; complete examples are in MOS Document ID 1354980.1, and a sketch follows below
• Tune Direct NFS I/O queuing with the dnfs_batch_size initialization parameter (download the patch for bug 13647945 for your database software)
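An illustrative oranfstab sketch (server name, IP addresses, export, and mount point are all hypothetical; the file lives in $ORACLE_HOME/dbs or /etc). This entry gives Direct NFS two network paths to the appliance:

    server: zfssa1
    local: 192.168.1.1
    path: 192.168.1.10
    local: 192.168.2.1
    path: 192.168.2.10
    export: /export/oradata mount: /u02/oradata

After the instance restarts, the v$dnfs_servers and v$dnfs_channels views show which paths Direct NFS is actually using.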
Sun ZFS Storage Appliance Network Configuration with Oracle Direct NFS
• Increase NFS server threads from 500 to 1000
• Use link aggregation (LACP) if your switch supports it; use IPMP if you don't use LACP
• Configure 1 IP address for every physical link in the IPMP group
• Configure adaptive routing
• Use the largest datagram you can support: jumbo frames with 10GbE, connected mode with InfiniBand (client-side sketch below)
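The jumbo-frame setting only helps if it is enabled end to end (appliance datalink, switch ports, and host). A hedged Linux client-side sketch, with the interface name and appliance IP made up:

    # Set a 9000-byte MTU on the 10GbE interface facing the appliance
    ip link set dev eth2 mtu 9000

    # Verify the path: 8972 bytes of ICMP payload + 28 bytes of headers = 9000,
    # sent with the don't-fragment bit so any undersized hop fails loudly
    ping -M do -s 8972 192.168.1.10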
Gerhard Kuppler Senior Director Corporate SAP Account, Oracle Corporation “SAP customers can benefit from [Oracle] Direct NFS in the following way…Superior to any bonding solution…Better throughput than most SAN solutions.”
Oracle Direct NFS Proof Point: 8 kB Reads (chart: throughput in IO/s)
• Linux kernel NFS: software bottlenecks (RPC/CPU), 25k IOPS limit
• Oracle Direct NFS: hardware bottlenecks; 145k IOPS saturates one 10GbE link, 225k IOPS is nearly 2x 10GbE
Oracle Direct NFS Proof Point: 1 MB Reads (chart: throughput in MB/s)
• Linux kernel NFS: software bottlenecks, 680 MB/s limit
• Direct NFS: hardware bottlenecks; 1100 MB/s saturates one 10GbE link, 2000 MB/s is roughly 2x 10GbE
Oracle Direct NFS Proof Point: 1 MB Writes (chart: throughput in MB/s)
• Linux kernel NFS: software bottlenecks, 710 MB/s limit
• Direct NFS: hardware bottlenecks; 1100 MB/s saturates one 10GbE link, 1800 MB/s is roughly 2x 10GbE
Oracle Direct NFS Proof Point: OLTP Processing
• 3.2x more throughput at the same response time
• 4.3x faster response time at the same throughput
• 2x better CPU efficiency
• Hit an application-level bottleneck
Improving Again – Data Efficiency
• Moore's Law makes CPU relatively cheap
• The two more expensive data center consumables are network bandwidth and storage space
• Take advantage of cheap CPU resources to save bandwidth and space
Oracle Direct NFS with Hybrid Columnar Compression (HCC)
• Understand the data, compress the columns
• Save network bandwidth
• Increase storage efficiency
• Optimized for Oracle
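For reference, a sketch of the standard HCC DDL (table name, columns, and tablespace are hypothetical; the compression levels are the ones measured on the throughput slide below):

    sqlplus / as sysdba <<'EOF'
    -- QUERY LOW favors load and scan speed; ARCHIVE HIGH favors space savings
    CREATE TABLE sales_history (
      sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER(10,2)
    )
    TABLESPACE users
    COMPRESS FOR QUERY LOW;

    -- Recompress cold data at the highest ratio
    ALTER TABLE sales_history MOVE COMPRESS FOR ARCHIVE HIGH;
    EOF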
Throughput with Oracle Direct NFS and HCC (chart: throughput in millions of rows/min)
• No compression: 1GbE bottleneck, 280 million rows/min baseline
• HCC Query Low: still a 1GbE bottleneck, 12x more throughput
• HCC Archive High: CPU bottleneck, 7.5x more throughput
Remote Direct Memory Access (RDMA)
• Zero-copy networking
• Low-latency transfer
• High-bandwidth network
• Increased CPU efficiency
• Optimized for Oracle
NFS with RDMA Block Diagram: the Oracle instance issues an I/O request to the NFS client and RPC establishes the connection, but data moves directly between NICs over the network link via RDMA, bypassing the TCP/IP layers; the NFS server executes the I/O against the ZFS file system on the Sun ZFS Storage appliance.
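A hedged sketch of the client side on Linux (server name and paths are made up); with the kernel NFS client the RDMA transport is selected at mount time:

    # Load the NFS RDMA transport and mount over it
    # (20049 is the IANA-registered NFS/RDMA port)
    modprobe xprtrdma
    mount -t nfs -o rdma,port=20049 zfssa:/export/oradata /u02/oradata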
RDMA for RMAN Backup (chart: throughput in MB/s)
• CPU-constrained system
• 2.3x increase in backup throughput: from 670 MB/s to 1520 MB/s
• The gain comes from CPU efficiency in processing the I/O requests
RDMA for OLTP (charts: db file sequential read in ms, database transactions/sec)
• Read-response-time-constrained system
• 20% improvement in I/O response time
• 38% increase in transaction throughput
Summary: Oracle Database plus Oracle Storage delivers maximum application performance.
Learn More at Oracle Open World
• Automating Storage Management for Oracle Database
• Consolidating SAN Storage using Pillar Axiom
• Tiered Storage for Archiving with Oracle Storage and Storage Archive Manager Software
• Oracle Engineered Systems Backup with Sun ZFS Backup Appliance
• Oracle NAS & SAN Storage for Oracle Virtual Desktop Environments