Oracle on Clustered Data ONTAP Enterprise Ecosystem
Agenda
• Leverage benefits of Clustered Data ONTAP
• Architecture / Tiered Storage / Data Motion for volumes
• Oracle changes from Data ONTAP operating in 7-Mode to Clustered Data ONTAP
• Benefits with Clustered Data ONTAP
• General guidelines for Oracle on Clustered Data ONTAP
• Use cases for clustered Data ONTAP
• What we have for 7-Mode customers to take advantage of clustered Data ONTAP
• Solutions update
• Integrations
Oracle on Data ONTAP Operating in 7-Mode to Clustered Data ONTAP
• The changes are minimal.
• The storage and database layout do not need to be changed, but you should review them to take advantage of clustered Data ONTAP functionality.
• Use the same sizing paradigm; however, follow ONTAP guidelines for mixed environments.
• Take advantage of clustered Data ONTAP technology so that your customer can use its features, especially for performing tasks nondisruptively.
The Obvious Benefits
• Nondisruptive upgrades of hardware
• Nondisruptive upgrades of software
• …and so on
Oracle on Clustered Data ONTAP
Two principles of storage design in Clustered Data ONTAP:
• Compartmentalization of storage
• Compartmentalization of networking
One question to answer: what is the maximum number of nodes that might be needed for a scaled-out workload?
What About Virtual Storage Servers?
• A Virtual Storage Server is not owned by a particular node or nodes.
• It is not a process or application that is started and stopped.
• It is simply a logical representation of how the volumes and the network are plumbed together and managed.
• We still need to learn how customers will decide to use them; there is no single best practice yet.
Virtual Storage Server – A Few Examples
• If a customer has 4 or 5 databases used for their own ERP solution, a single Virtual Storage Server is probably okay.
• If a customer has 4 or 5 databases managed by different teams with different security restrictions, one Virtual Storage Server per database may be warranted.
• If a customer is managing a cloud environment for multiple tenants and isolated environments, one Virtual Storage Server per tenant would likely be best.
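As a sketch of the per-tenant pattern above, a dedicated Virtual Storage Server (Vserver) can be created per tenant from the clustered Data ONTAP CLI. This is illustrative only; the Vserver, root volume, and aggregate names are hypothetical placeholders, and exact required parameters vary by ONTAP release.

```shell
# Illustrative only: create a dedicated Vserver for one tenant.
# All names below are hypothetical placeholders.
vserver create -vserver tenant1_svm -rootvolume tenant1_root \
    -aggregate aggr1 -rootvolume-security-style unix
```

Each tenant's volumes and LIFs are then created inside that Vserver, giving the isolation boundary described above without tying the tenant to any particular node.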
4-Node Example: Database on NFS
Configuration:
• Distribute datafiles across 8 FlexVol volumes.
• Assign LIFs by client, according to the client's needs or requirements.
• When you need to scale out, migrate the FlexVol volume, as well as the LIF if needed.
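A hedged CLI sketch of that layout: spread the datafile volumes across the cluster's aggregates and give each node a data LIF. Volume, aggregate, LIF, and address values here are assumptions for illustration, not a prescribed configuration.

```shell
# Illustrative only: 8 datafile volumes spread across the cluster
# (volume/aggregate/LIF names and addresses are hypothetical).
volume create -vserver ora_svm -volume oradata1 -aggregate aggr_node1 -size 500g
volume create -vserver ora_svm -volume oradata2 -aggregate aggr_node2 -size 500g
# ... oradata3 through oradata8 on the remaining aggregates ...
network interface create -vserver ora_svm -lif ora_lif1 -role data \
    -data-protocol nfs -home-node node1 -home-port e0c \
    -address 10.0.0.11 -netmask 255.255.255.0
```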
Uninterrupted Access to Oracle Database Users
Clustered Data ONTAP Vol Move (NFS, iSCSI, FC, FCoE)
• Continuous data access by clients and hosts.
• Nondisruptively move volumes between ANY aggregates anywhere in the cluster.
• Storage space savings, mirror relationships, and Snapshot copies are unchanged.
[Diagram: volumes and LUNs moving between HA pairs; themes: on-demand flexibility, operational efficiency, operational lifecycle]
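The nondisruptive move itself is a single CLI operation. A minimal sketch, assuming hypothetical Vserver, volume, and aggregate names:

```shell
# Illustrative only: move a datafile volume to another aggregate while
# the database stays online (names are hypothetical placeholders).
volume move start -vserver ora_svm -volume oradata1 \
    -destination-aggregate aggr_node3
volume move show -vserver ora_svm -volume oradata1
```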
Load-Balance Client Network Access
• An IP address is not permanently tied to a network port.
• Keep network load balanced across the cluster.
• Oracle dNFS does the load balancing all by itself.
• Assign new clients to the least-loaded IP addresses.
• Rebalance addresses across nodes as load changes.
[Charts: average network load on the node; network demand on an IP address]
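The "assign new clients to the least-loaded IP addresses" step can be sketched as a small selection routine. This is an illustration of the logic only, not NetApp tooling; the IP addresses and load counts are made up.

```python
def assign_client(lif_load):
    """Pick the data LIF IP currently serving the least load,
    then account for the new client on that LIF."""
    ip = min(lif_load, key=lif_load.get)  # least-loaded IP wins
    lif_load[ip] += 1
    return ip

# Hypothetical per-LIF client counts across three nodes:
lif_load = {"10.0.0.11": 3, "10.0.0.12": 1, "10.0.0.13": 2}
first = assign_client(lif_load)  # → "10.0.0.12"
```

The same idea applies in reverse for rebalancing: when a node's average load drifts above the cluster mean, its busiest IP address can be migrated to a less-loaded node's port.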
Cluster Mode – Local Access
Each node runs an N-blade (NFS, networking, M-host, conversion to SpinNP, VLDB/name lookup) and a D-blade (network stack, WAFL, caching, mirroring, locking), connected by the memory bus and the Gigabit cluster interconnect.
• Data is accessed from the disks local to the node.
• Minimal cluster traffic between nodes.
• Faster response time.
Cluster Mode – Remote Access
Here the N-blade that receives the request is on a different node from the D-blade that owns the data, so the request traverses the cluster interconnect.
• Data is accessed on the remote node over the cluster interconnect.
• Higher cluster traffic between nodes.
• Slower response time.
General Guidelines
• Be aware of the effect of accessing a FlexVol volume on one node using a LIF located on a different node.
• There is nothing wrong with doing this, and most customers will not observe any problem.
• If your customer has a latency-sensitive database, however, the additional response time (on the order of a millisecond) may be noticeable.
NFS Considerations for Oracle Using Clustered Data ONTAP
• Spread Oracle volumes across storage nodes.
• Create interface groups as needed.
• Isolate database traffic on separate virtual LANs or subnets.
• Use 10-GbE infrastructure; enable jumbo frames for kernel NFS and Oracle Direct NFS (dNFS).
• Consider the number of LIFs based on the Oracle environment.
NOTE: Future capabilities include support for parallel NFS (pNFS) for Oracle databases in Oracle 12cR2.
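On the Linux host side, the jumbo-frame and mount steps might look like the sketch below. The interface name, export path, mount point, and mount options are assumptions for illustration; verify the recommended options against current NetApp and Oracle best-practice documentation for your versions.

```shell
# Illustrative Linux host-side settings (interface, export, and mount
# point are hypothetical; confirm options against vendor guidance).
ip link set dev eth2 mtu 9000    # jumbo frames on the 10-GbE data port
mount -t nfs -o rw,hard,tcp,vers=3,rsize=65536,wsize=65536 \
    ora_svm-lif1:/oradata1 /u02/oradata1
```

For dNFS, the same LIF addresses are also listed in `oranfstab` so the database can open its own parallel connections.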
LUNs
• LIF migration is unnecessary.
• ALUA will update the connected hosts with new path data.
Multipath SAN Access with Vol Move (MPIO/ALUA)
• ALUA path states change for LUNs after a move.
• Paths to the volume's new physical location become active/optimized.
• ALUA sends a path-update SCSI command down the moved LUN's paths.
[Diagram: per-path states flipping between active/optimized and active/unoptimized as the volume moves]
Optimized (Direct) Path to Each LUN (MPIO/ALUA)
[Diagram: each LUN has one active/direct path through the node owning its aggregate and active/indirect paths through the other nodes]
SAN Considerations for Oracle Using Clustered Data ONTAP
• Hosts are required to use multipath I/O to access LUNs.
• ALUA is required in order to determine the state of a path:
• Active/optimized: the LIF is configured on a port on a controller that also "owns" the aggregate on which the LUN is provisioned.
• Active/unoptimized: the LIF is configured on a port on a controller that does not "own" the aggregate on which the LUN is provisioned.
• Any SCSI LIF can accept SCSI commands for a LUN on its Virtual Storage Server, regardless of which node's aggregate in the cluster the LUN is provisioned upon.
• Consider the number of LIFs based on the Oracle environment.
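The ALUA selection rule above can be sketched as a small routine: prefer every active/optimized path, and fall back to active/unoptimized paths only when no optimized path exists. This is an illustration of the logic, not a real MPIO stack; the LIF names are hypothetical.

```python
def preferred_paths(paths):
    """Given (lif_name, alua_state) pairs, return the paths an MPIO
    stack would favor: all active/optimized paths if any exist,
    otherwise the active/unoptimized paths as a fallback."""
    optimized = [lif for lif, state in paths if state == "active/optimized"]
    if optimized:
        return optimized
    return [lif for lif, state in paths if state == "active/unoptimized"]

# Hypothetical 4-node cluster; the LUN's aggregate is owned by node1:
paths = [
    ("node1_lif", "active/optimized"),
    ("node2_lif", "active/unoptimized"),
    ("node3_lif", "active/unoptimized"),
    ("node4_lif", "active/unoptimized"),
]
print(preferred_paths(paths))  # → ['node1_lif']
```

After a vol move, the state entries change and the same rule automatically steers I/O to the new owning node's LIF.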
Clustered Data ONTAP 8.2 Updates
• Clustered Data ONTAP 8.2 provides SnapVault; previous clustered releases did not.
• Consistency group (CG) Snapshot copies are available in clustered Data ONTAP 8.2.
• These are not required for databases using the NFS protocol.
• They allow support for block-based protocols with SnapManager for Oracle and SnapDrive for UNIX.
Export Policies
• Although qtrees exist in Clustered Data ONTAP, NFS export permissions are set at the volume level.
• This means all qtrees in a volume must be exported with the same permissions.
• With 7-Mode, many customers would use qtrees with different permissions; for example, a single FlexVol volume could host the ORACLE_HOME directories for 10 different servers.
NOTE: This approach cannot be used with clustered Data ONTAP unless all 10 qtrees are exported to all 10 servers with identical permissions.
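A hedged CLI sketch of the volume-level model: one export policy with a single rule grants identical permissions to every database host, and the policy is attached to the shared volume. Policy, subnet, Vserver, and volume names are hypothetical placeholders.

```shell
# Illustrative only: one export policy applied at the volume level, so
# every qtree inside the volume is exported with the same permissions
# (all names and the client subnet are hypothetical).
vserver export-policy create -vserver ora_svm -policyname ora_home
vserver export-policy rule create -vserver ora_svm -policyname ora_home \
    -clientmatch 192.168.10.0/24 -rorule sys -rwrule sys -superuser sys
volume modify -vserver ora_svm -volume orahome -policy ora_home
```

If per-server isolation is required, the 7-Mode qtree pattern instead maps to one FlexVol volume (with its own policy) per server.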
Key Takeaways
• A Virtual Storage Server is just a set of rules about volume and network plumbing.
• When designing for Clustered Data ONTAP, ask yourself: "How many nodes will I want someday?"
• Recognize that we do not yet fully understand how customers will choose to leverage Clustered Data ONTAP.
Oracle on NetApp Solution v2: Storage Migration and Database Consolidation (UC01)
Oracle on NetApp Solution v2: Dynamic Scalability with Online Scale-Up and Scale-Out (UC02)
Oracle on NetApp Solution v2: Seamless Failover Protection (UC03)
Oracle on Clustered Data ONTAP Solution Collateral
• Solution Guide: http://media.netapp.com/documents/tr-3979.PDF
• Oracle NVA (updated to 8.1.1): http://www.netapp.com/us/media/nva-0002.pdf
• Oracle 11gR2 Performance Using Clustered Data ONTAP 8.1: http://media.netapp.com/documents/tr-3961.pdf