NEW: Oracle Real Application Clusters (RAC) and Oracle Clusterware 11g Release 2. Markus Michalewicz Product Manager Oracle Clusterware.
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.
Agenda
• Overview
• Easier Installation
• SSH setup, prerequisite checks, and FixUp scripts
• Automatic cluster time synchronization configuration
• OCR & Voting Files can be stored in Oracle ASM
• Easier Management
• Policy-based and Role-separated Cluster Management
• Oracle EM-based Resource and Cluster Management
• Grid Plug and Play (GPnP) and Grid Naming Service
• Single Client Access Name (SCAN)
• Summary
The Traditional Data Center: Expensive and Inefficient
• Dedicated silos (stacks) are inefficient
• Sized for peak load
• Constrained performance
• Difficult to scale
• Expensive to manage
Oracle RAC One Node: Better Virtualization for Databases
• A virtualized single-instance database
• Delivers the value of server virtualization to databases on physical servers
• Server consolidation
• Online upgrade to RAC
• Standardized deployment across all Oracle databases
• Built-in cluster failover for high availability
• Live migration of instances across servers
• Rolling patches for single-instance databases
Oracle Grid Infrastructure: The Universal Grid Foundation
• Standardizes infrastructure software
• Eliminates the need for 3rd-party solutions
• Combines Oracle Automatic Storage Management (ASM) & Oracle Clusterware
• Typically used by System Administrators
• Includes:
• Oracle ASM
• ASM Cluster File System (ACFS)
• ACFS Snapshots
• Oracle Clusterware
• Cluster Health Manager
Oracle Database 11g Release 2: Lowering CapEx and OpEx Using Oracle RAC
CAPEX:
• Servers: consolidation onto low-cost servers
• Storage: fewer disks/LUNs
• Software: full Oracle stack (Oracle RAC, Oracle ASM, Oracle Grid Infrastructure)
OPEX:
• Data center consolidation
• Easy provisioning
• Easy management
Easier Grid Installation and Provisioning
• New intelligent installer
• 40% fewer steps to install Oracle Real Application Clusters and Oracle Grid Infrastructure
• Integrated validation and automation
• Nodes can be easily repurposed
• Nodes can be dynamically added to or removed from the cluster
• Network and storage information is read from a profile and configured automatically
• No need to manually prepare a node
Easier Grid Installation
• Typical and Advanced Installation
• Software Only Installation for Oracle Grid Infrastructure
• Grid Naming Service (GNS) and auto-assignment of VIPs
• SSH setup, prerequisite checks, and FixUp scripts
• Automatic cluster time synchronization configuration
• OCR & Voting Files can be stored in Oracle ASM
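The prerequisite checks and FixUp scripts listed above are exposed through the Cluster Verification Utility (cluvfy); a typical pre-installation run (node names are placeholders) might look like:

```shell
# Verify the Grid Infrastructure installation prerequisites on both nodes
# and generate a fixup script for anything that can be corrected as root.
cluvfy stage -pre crsinst -n node1,node2 -fixup -verbose
```

The installer runs the same checks itself; running cluvfy beforehand simply lets issues be fixed before the OUI session starts.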
Automatic Cluster Time Synchronization: Oracle Cluster Time Synchronization Service (CTSS)
• Time synchronization between cluster nodes is crucial
• Typically, a central time server, accessed via NTP, is used to synchronize time in the data center
• Oracle provides CTSS as an alternative for cluster time synchronization
• CTSS runs in 2 modes:
• Observer mode: whenever NTP is installed on the system, CTSS only observes
• Active mode: time in the cluster is synchronized against the CTSS master (node)
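Which of the two modes CTSS is currently running in can be checked from any node; a quick sketch (the exact message text may vary by version):

```shell
# Check the mode of the Cluster Time Synchronization Service.
# With NTP configured on the nodes, CTSS reports Observer mode;
# without NTP, it reports Active mode.
crsctl check ctss
```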
The OCR Managed in Oracle ASM
• The OCR is managed like a datafile in ASM (a new ASM file type)
• It adheres completely to the redundancy settings of the disk group (DG)
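Assuming a disk group +DATA already exists, moving the OCR into ASM amounts to adding the new location and dropping the old one (the raw-device path below is illustrative):

```shell
# Add an OCR location inside the ASM disk group +DATA.
ocrconfig -add +DATA

# Remove the old, non-ASM OCR location (example path).
ocrconfig -delete /dev/raw/raw1

# Verify the OCR locations and integrity.
ocrcheck
```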
Voting Files Managed in Oracle ASM
• Unlike the OCR, Voting Files are stored on distinguished ASM disks
• ASM automatically creates 1/3/5 Voting Files
• Based on External/Normal/High redundancy and on Failure Groups in the Disk Group
• By default there is one failure group per disk
• ASM will enforce the required number of disks
• New failure group type: Quorum Failgroup

[GRID]> crsctl query css votedisk
1. 2 1212f9d6e85c4ff7bf80cc9e3f533cc1 (/dev/sdd5) [DATA]
2. 2 aafab95f9ef84f03bf6e26adc2a3b0e8 (/dev/sde5) [DATA]
3. 2 28dd4128f4a74f73bf8653dabd88c737 (/dev/sdd6) [DATA]
Located 3 voting disk(s).
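Voting files can be moved into (or between) ASM disk groups in a single step; crsctl relocates all of them at once:

```shell
# Replace all current voting files with files in the +DATA disk group.
# ASM creates 1, 3, or 5 voting files depending on the disk group
# redundancy (External/Normal/High).
crsctl replace votedisk +DATA

# Confirm the new locations, as in the query shown above.
crsctl query css votedisk
```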
Easier Grid Management
• Oracle Enterprise Manager (EM) is able to manage the full stack, including Oracle Clusterware
• Manage and monitor Clusterware components
• Manage and monitor application resources
• New grid concepts:
• Server Pools
• Grid Plug and Play (GPnP)
• Grid Naming Service (GNS)
• Auto-Virtual IP assignment
• Single Client Access Name (SCAN)
Easier Grid Management
• OCR & Voting Files can be stored in Oracle ASM
• Clusterized commands
• Policy-based and Role-separated Cluster Management
• Oracle EM-based Resource and Cluster Management
• Grid Plug and Play (GPnP) and Grid Naming Service
• Single Client Access Name (SCAN)
New Grid Concept: Server Pools, the Foundation for Dynamic Cluster Partitioning
• Logical division of a cluster into pools of servers
• Hosts applications (which could be databases or applications)
Why use Server Pools?
• Easy allocation of resources to workloads
• Easy management of Oracle RAC: just define instance requirements (# of nodes, no fixed assignment)
• Facilitates consolidation of applications and databases on clusters
Policy-based Cluster Management: Ensure Isolation Based on Server Pools
• Policy-based management uses server pools to:
• Enable dynamic capacity assignment when needed
• Ensure isolation where necessary ("dedicated servers in a cluster")
• In order to guarantee that:
• Applications get the required minimum resources (whenever possible)
• Applications do not "take" resources from more important applications
(Diagrams: resource management without policies vs. resource management with policies)
Enable Policy-based Cluster Management: Define Server Pools Using the Appropriate Definition
• A Server Pool is defined by 4 attributes:
• Server Pool Name
• Min: the minimum number of servers that should run in the server pool
• Max: the maximum number of servers that can run in the server pool
• Imp: the "importance" specifies the relative importance between server pools; it matters at the time servers are assigned to server pools, or when servers need to be re-shuffled in the cluster due to failures
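The Min/Max/Imp attributes map directly onto srvctl options in 11g Release 2; a sketch with a hypothetical pool name, database, and values:

```shell
# Create a server pool for the Siebel database: at least 2 servers,
# at most 4, with importance 10 relative to other pools.
srvctl add srvpool -g siebelpool -l 2 -u 4 -i 10

# Create a policy-managed database that runs in this pool
# (database name and Oracle home are illustrative).
srvctl add database -d siebel \
  -o /u01/app/oracle/product/11.2.0/dbhome_1 -g siebelpool
```

Because the database is tied to the pool rather than to named nodes, adding capacity is a matter of growing the pool, not reconfiguring the database.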
Role-separated Cluster Management
• Addresses organizations with a strict separation of duties
• Role-separated management can be implemented in 2 ways:
• Vertically: use different users (groups) for each layer in the stack
• Horizontally: ACLs on server pools for policy-managed databases / applications
• The default installation assumes no separation of duties
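Horizontal separation via server pool ACLs can be sketched with crsctl; the pool and user names below are hypothetical:

```shell
# Grant the Siebel DBA account read/execute access to its server pool,
# without granting it any rights on the other pools in the cluster.
crsctl setperm serverpool ora.siebelpool -u user:siebeldba:r-x
```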
Grid Plug and Play (GPnP): Foundation for Dynamic Cluster Management
• GPnP eliminates the need for per-node configuration
• It is an underlying grid concept that enables the automation of operations in the cluster
• Allows nodes to be dynamically added to or removed from the cluster
• Simplifies the management of large clusters
• It is the basis for the Grid Naming Service (GNS)
• Technically, GPnP is based on an XML profile:
• Defines the node personality (e.g. cluster name, network classification)
• Created during installation
• Updated with every relevant change (using oifcfg, crsctl)
• Stored in local files per home and in the OCR
• Wallet-protected
• GPnP is apparent in things that you do not see and are no longer asked for: for example, OUI no longer asks for a private node name
Grid Naming Service (GNS): Dynamic Virtual IP and Naming
• The Grid Naming Service (GNS) allows dynamic name resolution in the cluster
• The cluster manages its own virtual IPs
• Removes hard-coded node information
• No VIPs need to be requested if the cluster changes
• Enables nodes to be dynamically added to or removed from the cluster
• Defined in the DNS as a "delegated domain", e.g. mycluster.myco.com
• DHCP provides IPs inside the delegated domain
Grid Naming Service (GNS): Steps to Set Up GNS
• Benefit: reduced configuration for VIPs in the cluster
• Defined in the DNS as a "delegated domain":
• DNS delegates requests for mycluster.myco.com to GNS
• GNS needs its own IP address (the GNS VIP)
• This is the only name-to-IP assignment required in DNS
• All other VIPs and SCAN VIPs are defined in the GNS for a cluster
• DHCP is used for dynamic IP assignment
• GNS is an optional way of resolving addresses
• Requires a one-time configuration change by the DNS administrator
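On the DNS side, the one-time delegation described above might look like the following BIND zone fragment (domain names and the address are illustrative):

```
; In the zone file for myco.com: delegate the cluster sub-domain
; to the GNS VIP, which is the only name-to-IP mapping the
; corporate DNS has to know about the cluster.
mycluster.myco.com.      IN NS  gns.mycluster.myco.com.
gns.mycluster.myco.com.  IN A   192.0.2.155
```

All other names below mycluster.myco.com (node VIPs, SCAN VIPs) are then resolved by GNS itself, using addresses handed out by DHCP.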
Grid Naming Service: Client Connect
(Animation slides: a client name lookup travels from the DNS client through the corporate domain to the delegated cluster domain, where GNS resolves the name; DHCP assigns the VIPs dynamically, and the connection is handed from the SCAN listeners to the local listeners of the Oracle RAC cluster GRIDA.)
Single Client Access Name (SCAN): The New Database Cluster Alias
• Used by clients to connect to any database in the cluster
• Removes the requirement to change the client connection if the cluster changes
• Load-balances across the instances providing a service
• Provides failover between "moved instances"
Single Client Access Name: Network Configuration for SCAN
• Requires a DNS entry, or GNS to be used
• In DNS, SCAN is a single name defined to resolve to 3 IP addresses:

clusterSCANname.example.com IN A 133.22.67.194
                            IN A 133.22.67.193
                            IN A 133.22.67.192

• Each cluster will have 3 SCAN listeners, each combined with a SCAN VIP, defined as cluster resources
• A SCAN VIP/listener combination will fail over to another node in the cluster if the current node fails

Cluster Resources
--------------------------------------------
ora.LISTENER_SCAN1.lsnr  1  ONLINE  ONLINE  node1
ora.LISTENER_SCAN2.lsnr  1  ONLINE  ONLINE  node2
ora.LISTENER_SCAN3.lsnr  1  ONLINE  ONLINE  node3
Single Client Access Name: Easier Client Configuration
• Without SCAN (pre-11g Release 2), TNSNAMES has 1 entry per node, and with every cluster change all client TNSNAMES need to be changed:

PMRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    …
    (ADDRESS = (PROTOCOL = TCP)(HOST = nodeN)(PORT = 1521))
    (CONNECT_DATA = … ))

• With SCAN, only 1 entry per cluster is used, regardless of the number of nodes:

PMRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clusterSCANname)(PORT = 1521))
    (CONNECT_DATA = … ))
Connection Load Balancing Using SCAN
(Diagram: clients and application servers connect through the SCAN listeners, which distribute the connections to the local listeners of the Oracle RAC database cluster.)
The Evolution of the Grid: Lowering the Cost of Database Deployments
• Oracle RAC for HA (dedicated infrastructure): lower the cost of high availability
• Oracle RAC for scale-out (shared storage, standardized infrastructure): lower the cost of scalability
• GRID for DB (shared infrastructure, shared cluster, shared database): lower infrastructure costs through improved utilization, storage consolidation, and management efficiency
• Datacenter Grid: lower the cost of deployments, with lower CAPEX and lower OPEX
Questions and Answers