Learn how PureApp Systems ensure fault tolerance and rapid disaster recovery through redundancy at software and hardware levels. Discover the setup for multi-chassis, multi-site DR and the elimination of single points of failure.
High Availability and Disaster Recovery in PureApp Systems
Sherwood Yao (syao@us.ibm.com), July 3, 2012
#1 question from PureApp System customers and prospects: • “How do you set up a PureApp system for high availability?” • “What do you do for disaster recovery?”
HA and DR: different grades of fault tolerance • High Availability: applications continue to run while tolerating any software- or hardware-level component failure. HA is most often implemented through redundancy, at two levels: • Software HA: redundancy at the software tier: multiple copies of the same application and its middleware (including WAS and DB2) running in parallel. • Hardware HA: redundancy at the hardware tier, including redundant networking, redundant compute, and redundant storage. • Disaster Recovery: get business-critical applications up and running in a matter of minutes (or at most hours) from a "last good state" of an existing system when an entire data center is lost. (Loss of some data is expected.) • Always set up as multi-chassis, multi-site DR, usually through an Active/Passive configuration. • The key objective is to remove every single point of failure within the whole system.
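The Active/Passive DR idea above can be sketched in a few lines of Python. This is a minimal illustration, not PureApp code: the heartbeat threshold and names are assumptions chosen for the example.

```python
# Sketch (illustrative, not PureApp code): an active/passive failover decision
# driven by missed heartbeats from the primary site.

MISSED_HEARTBEAT_LIMIT = 3  # consecutive misses before declaring the primary down


class FailoverController:
    def __init__(self):
        self.missed = 0
        self.active_site = "primary"

    def record_heartbeat(self, received: bool) -> str:
        """Feed one heartbeat observation; return the site that should be active."""
        if received:
            self.missed = 0
        else:
            self.missed += 1
            if self.missed >= MISSED_HEARTBEAT_LIMIT and self.active_site == "primary":
                # Promote the passive site. Some in-flight data may be lost,
                # matching the DR expectation of recovering a "last good state".
                self.active_site = "secondary"
        return self.active_site


ctl = FailoverController()
assert ctl.record_heartbeat(True) == "primary"
assert ctl.record_heartbeat(False) == "primary"    # one miss is tolerated
assert ctl.record_heartbeat(False) == "primary"
assert ctl.record_heartbeat(False) == "secondary"  # third consecutive miss triggers failover
```

The threshold exists to avoid flapping: a single dropped heartbeat should not trigger a site-wide failover.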
PureApp system HA/DR scorecard: pick the level that fits the goal [Chart: marginal ROI plotted against cost/complexity for each HA/DR level.]
PureAS HA/DR explained: built-in single-rack HA [Rack elevation diagram: 42U rack with cable ingress/egress at top and bottom, two BNT 64-port Ethernet switches, two V7000 controllers with V7000 expansions, three Flex chassis of compute nodes, redundant management nodes, and redundant PDUs.] • What is built-in HA/DR in V1: • Hardware: • All hardware components have built-in redundancy, including power supplies. • 2 storage controllers (V7000 disk system). • 3 fully isolated, standalone ITE chassis that can operate independently of each other, each with a 10Gb Ethernet switch and a 16Gb FC switch. • 2 IWD management nodes. • Software: • Virtual System Patterns provide a DB2 HADR pattern and a WAS HA pattern (including WAS Network Deployment and an HA web server). • An intelligent workload placement algorithm automatically distributes workloads (e.g., WAS and DB2 instances) onto isolated compute nodes. • There is NO single point of failure within the whole PureApp system.
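The anti-affinity idea behind the placement bullet above can be sketched as follows. This is a simplified illustration, not the actual PureApp placement algorithm; node and workload names are made up for the example.

```python
# Sketch (not the real PureApp algorithm): spread the VMs of one workload
# across distinct compute nodes so no two copies share a single point of failure.

from collections import defaultdict


def place(workload_vms, nodes):
    """Assign each VM to the least-loaded node not already hosting the same workload."""
    load = defaultdict(int)    # node -> number of VMs placed on it
    hosted = defaultdict(set)  # node -> set of workloads already on it
    placement = {}
    for vm, workload in workload_vms:
        candidates = [n for n in nodes if workload not in hosted[n]]
        if not candidates:
            candidates = nodes  # more copies than nodes: fall back to least-loaded
        node = min(candidates, key=lambda n: load[n])
        placement[vm] = node
        load[node] += 1
        hosted[node].add(workload)
    return placement


nodes = ["chassis1-node1", "chassis2-node1", "chassis3-node1"]
vms = [("was-1", "app"), ("was-2", "app"), ("db2-primary", "db"), ("db2-standby", "db")]
p = place(vms, nodes)
assert p["was-1"] != p["was-2"]              # WAS cluster members on different nodes
assert p["db2-primary"] != p["db2-standby"]  # HADR pair isolated from each other
```

Spreading both the WAS cluster members and the DB2 HADR pair across chassis is what lets a single chassis fail without taking down the application.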
PureAS HA/DR explained: you can build multi-rack HA manually [Figure 1: HA configuration inside a data center with a single shared cell: an external load balancer in front of two IHS instances, WAS cluster members spread across IPAS A and IPAS B under one DMgr, and a DB2 HADR primary/secondary pair.] • If a customer must have multi-rack HA, the answer is still YES, but it requires additional setup and maintenance work. • If the two racks are collocated within the same data center, the customer can set up a single-cell configuration, which incurs the least additional operational overhead. • Otherwise, the customer can set up two active-active WAS cells on the IPAS systems using Virtual System Patterns: export from IPAS A and import to IPAS B. This setup requires more maintenance work to keep A and B in sync.
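The "keep A and B in sync" maintenance burden can be made concrete with a small sketch. This is a hypothetical helper, not the IWD CLI: it assumes each rack's pattern catalog can be represented as a name-to-version mapping and flags any drift after an export/import cycle.

```python
# Sketch (hypothetical helper, not an IWD tool): detect drift between the
# pattern catalogs of two racks kept in sync by manual export/import.

def patterns_out_of_sync(cell_a, cell_b):
    """Return pattern names whose definitions differ or exist on only one cell."""
    names = set(cell_a) | set(cell_b)
    return sorted(n for n in names if cell_a.get(n) != cell_b.get(n))


ipas_a = {"WAS-HA": "v3", "DB2-HADR": "v2"}
ipas_b = {"WAS-HA": "v3", "DB2-HADR": "v1"}  # B missed the last import
assert patterns_out_of_sync(ipas_a, ipas_b) == ["DB2-HADR"]
```

In the single-cell configuration this check is unnecessary, which is exactly why that setup carries the least operational overhead.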
Summary • When your ISV asks you about PureAS HA/DR, you should: • Ask what their business SLA is and decide whether a single-rack SLA is adequate for their use case. • Tell them PureAS supports HA out of the box on a single-rack basis. Multi-rack HA is supported via manual configuration (less integrated, but it works). • Think of Virtual System Patterns as your friend when customers demand HA/DR.