Section 3: Business Continuity
Lecture 29
Replication Technologies: Local Replicas, Technologies, Restore and Restart
Chapter Objective
After completing this chapter, you will be able to:
• Discuss local replication and the possible uses of local replicas
• Explain consistency considerations when replicating file systems and databases
• Discuss host-based and array-based replication technologies
  • Functionality
  • Differences
  • Considerations
  • Selecting the appropriate technology
Lesson: Local Replica and Data Consistency
Upon completion of this lesson, you will be able to:
• Define local replication
• Discuss the possible uses of local replicas
• Explain replica considerations such as recoverability and consistency
• Describe how consistency is ensured in file system and database replication
• Explain the dependent write I/O principle
What Is Replication?
• Replica: an exact copy
• Replication: the process of reproducing data
• Local replication: replicating data within the same array or the same data center
[Diagram: data replicated from a source volume to a replica (target)]
Possible Uses of Local Replicas
• Alternate source for backup
  • Avoids running backups against production volumes
• Fast recovery
  • Provides a minimal RTO (recovery time objective)
• Decision support
  • Run decision support operations, such as report generation, against replicas
  • Reduces the burden on production volumes
• Testing platform
  • Test critical business data or applications
• Data migration
  • Migrate data from replicas instead of production volumes
Replication Considerations
• Type of replica: the choice of replica ties back to RPO (recovery point objective)
  • Point-in-Time (PIT): non-zero RPO
  • Continuous: near-zero RPO
• What makes a replica good
  • Recoverability/restartability
    • The replica should be able to restore data to the source device
    • Business operations should be restartable from the replica
  • Consistency
    • Ensuring consistency is a primary requirement for all replication technologies
Understanding Consistency
• Data is buffered in the host before being written to disk
• Consistency requires that data buffered in the host is properly captured on disk when the replica is created
• Consistency is required to ensure the usability of the replica
• Consistency can be achieved in various ways:
  • For file systems
    • Offline: unmount the file system
    • Online: flush host buffers
  • For databases
    • Offline: shut down the database
    • Online: put the database in hot backup mode
  • The dependent write I/O principle
  • Holding I/Os
File System Consistency: Flushing Host Buffers
[Diagram: the write path runs from the application through the file system and memory buffers (flushed by the sync daemon) to the logical volume manager and physical disk driver, and on to the source and replica; flushing the buffers before the PIT ensures the replica captures buffered data]
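For the online approach, the flush-then-snapshot ordering can be sketched as below. This is a minimal illustration in Python; `trigger_pit_copy` is a hypothetical stand-in for whatever vendor mechanism actually creates the replica, and `os.sync()` (Unix, Python 3.3+) asks the OS to flush dirty buffers.

```python
import os

def create_consistent_fs_replica(trigger_pit_copy):
    """Flush host buffers, then trigger the point-in-time copy.

    trigger_pit_copy is a hypothetical placeholder for the vendor
    call that actually creates the replica.
    """
    # Flush dirty file system buffers from host memory to disk, so the
    # replica captures data still sitting in the buffer cache.
    os.sync()
    # With the buffers flushed, the PIT copy taken now is usable.
    trigger_pit_copy()
```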
Database Consistency: Dependent Write I/O Principle
• Dependent write: a write I/O that will not be issued by an application until a prior, related write I/O has completed
  • A logical dependency, not a time dependency
• Inherent in all database management systems (DBMS)
  • e.g., a page (data) write is a dependent write I/O, issued only after a successful log write
• Necessary for protection against local outages
  • A power failure creates a dependent write consistent image
  • A restart transforms a dependent write consistent image into a transactionally consistent one
    • i.e., committed transactions are recovered; in-flight transactions are discarded
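As a concrete illustration of the principle, here is a minimal write-ahead-logging sketch in Python; the log and data file layout is invented for the example and does not correspond to any particular DBMS.

```python
import os

def commit(log_path, data_path, log_record: bytes, page: bytes, offset: int):
    # The log write must complete (reach stable storage) first...
    with open(log_path, "ab") as log:
        log.write(log_record)
        log.flush()
        os.fsync(log.fileno())   # wait for the prior write I/O to finish

    # ...only then is the dependent data page write issued.
    with open(data_path, "r+b") as data:
        data.seek(offset)
        data.write(page)
        data.flush()
        os.fsync(data.fileno())
```

Because the page write is never issued until the log write completes, any replica (or crash image) either contains the log record for a page it holds or lacks both, which is exactly what makes restart-based recovery possible.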
Database Consistency: Dependent Write I/O
[Diagram: two source/replica pairs receiving dependent writes 1 through 4; a replica that captures the writes in dependency order is consistent, while one that captures later writes (3 and 4) without the earlier ones is inconsistent]
Database Consistency: Holding I/O
[Diagram: I/O to the source is held (write 5 is deferred) while the replica is created, so the replica captures writes 1 through 4 and is consistent]
Lesson Summary
Key points covered in this lesson:
• Possible uses of local replicas
  • Alternate source for backup
  • Fast recovery
  • Decision support
  • Testing platform
  • Data migration
• Recoverability and consistency
• File system and database replication consistency
• The dependent write I/O principle
Lesson: Local Replication Technologies
Upon completion of this lesson, you will be able to:
• Discuss host-based and array-based local replication technologies
  • Options
  • Operation
  • Comparison
Local Replication Technologies
• Host based
  • Logical Volume Manager (LVM) based replication (LVM mirroring)
  • File system snapshot
• Storage array based
  • Full volume mirroring
  • Pointer-based full volume replication
  • Pointer-based virtual replication
Host Based Replication: LVM-based Replication
[Diagram: a host logical volume mirrored across Physical Volume 1 and Physical Volume 2; the LVM writes every logical volume block to both physical volumes]
LVM-based Replication: Limitations
• LVM-based replicas add overhead on the host CPU
  • Each logical write is translated into two writes on disk (sketched below)
  • Can degrade application performance
• If host volumes are already storage array LUNs, the added redundancy of LVM mirroring is unnecessary
  • The devices will already have some RAID protection
• Both replica and source are stored within the same volume group
  • The replica cannot be accessed by another host
  • If the server fails, both source and replica become unavailable
• Keeping track of changes on the mirrors is a challenge
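The write-doubling overhead is easy to see in a sketch; the two file-like objects below stand in for the two physical volumes.

```python
def mirrored_write(pv1, pv2, offset: int, data: bytes):
    # LVM mirroring: one logical-volume write becomes two physical
    # writes, one per physical volume -- the host-side overhead
    # noted above.
    for pv in (pv1, pv2):
        pv.seek(offset)
        pv.write(data)
        pv.flush()
```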
File System Snapshot
• Pointer-based replica
• Uses the Copy on First Write (CoFW) principle
• Uses a bitmap and a block map
  • Bitmap: tracks blocks that have changed on the production/source FS after creation of the snap; initially all zeros
  • Block map: indicates the block address from which data is to be read when a block is accessed through the snap FS; initially points to the production/source FS
• Requires only a fraction of the space used by the original FS
• Implemented either by the FS itself or by the LVM
File System Snapshots – How It Works
• Writes to the production FS after the snap is created:
  • If the target block is unchanged since the snap (bitmap = 0), the original data is first copied to the snap FS, the block map entry is updated to the saved copy's address, and the bitmap entry is set to 1
  • The new write then proceeds to the production FS
[Diagram: production FS blocks 1-4 holding data a-d; new writes to blocks 3 and 4 first copy the old data (c, d) to the snap FS save area, flipping bitmap entries 3 and 4 to 1 and updating the block map]
File System Snapshots – How It Works (cont.)
• Reads from the snap FS:
  • Consult the bitmap for the requested block
  • If 0, direct the read to the production FS
  • If 1, look up the block address in the block map and read the data from the snap FS save location
[Diagram: reads of unchanged blocks 1 and 2 are served from the production FS; reads of changed blocks 3 and 4 are served from the saved copies in the snap FS]
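A minimal sketch of the bitmap/block-map bookkeeping from the last two slides, with the production FS modeled as a list of blocks and the snap save area as a growing list; the class and names are illustrative only.

```python
class FSSnapshot:
    """Pointer-based file system snapshot using Copy on First Write."""

    def __init__(self, prod_blocks):
        self.prod = prod_blocks                      # production FS blocks
        self.bitmap = [0] * len(prod_blocks)         # 1 = changed since snap
        self.block_map = [None] * len(prod_blocks)   # save-area address
        self.save_area = []                          # snap FS save location

    def write_prod(self, blk, new_data):
        # Copy on First Write: before the first overwrite of a block,
        # save the original data and update the bitmap and block map.
        if self.bitmap[blk] == 0:
            self.save_area.append(self.prod[blk])
            self.block_map[blk] = len(self.save_area) - 1
            self.bitmap[blk] = 1
        self.prod[blk] = new_data                    # write then proceeds

    def read_snap(self, blk):
        # Reads from the snap FS consult the bitmap first.
        if self.bitmap[blk] == 0:
            return self.prod[blk]                    # unchanged: production FS
        return self.save_area[self.block_map[blk]]   # changed: saved copy

# The slide's example: blocks 3 and 4 are overwritten after the snap;
# the snap still returns the point-in-time data.
snap = FSSnapshot(["a", "b", "c", "d"])
snap.write_prod(2, "C")          # block 3 (0-based index 2)
snap.write_prod(3, "D")          # block 4
assert snap.read_snap(2) == "c" and snap.read_snap(3) == "d"
```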
Storage Array Based Local Replication
• Replication is performed by the array operating environment
• Replicas reside on the same array
• Types of array based replication:
  • Full-volume mirroring
  • Pointer-based full-volume replication
  • Pointer-based virtual replication
[Diagram: a production server and a BC server attached to the same array, which holds both the source and the replica]
Full Volume Mirroring: Attached
• The target is a full physical copy of the source device
• The target is attached to the source, and data from the source is copied to the target
• The target is unavailable to hosts while it is attached
• The target device is at least as large as the source device
• Good for full backup, decision support, development, testing, and restore to the last PIT
[Diagram: source (read/write) attached to target (not ready) within the array]
Full Volume Mirroring: Detached
• After synchronization, the target can be detached from the source and made available for BC (business continuity) operations
• The PIT is determined by the time of detachment
• After detachment, re-synchronization can be incremental
  • Only updated blocks are re-synchronized
  • Modified blocks are tracked using bitmaps
[Diagram: after detachment at the PIT, both source and target are read/write within the array]
Full Volume Mirroring: Source and Target Relationship
• Attached/synchronization: Source ≠ Target until copying completes
• Detached: Source = Target as of the PIT of detachment
• Resynchronization: Source = Target again after the incremental update
Pointer-based Full Volume Replication
• Provides a full copy of the source data on the target
• The target device is made accessible for business operations as soon as the replication session is started
• The Point-in-Time is determined by the time of session activation
• Two modes:
  • Copy on First Access (CoFA, deferred mode)
  • Full Copy mode
• The target device is at least as large as the source device
Copy on First Access (CoFA) Mode: Deferred Mode
• Write to source: the original block is first copied from the source to the target, then the write to the source proceeds
• Write to target: the original block is first copied from the source to the target, then the new write is applied to the target
• Read from target: the block is copied from the source to the target on first access, then the read is served from the target
[Diagram: in each case both source and target remain read/write; data moves from source to target only on the first access to a block]
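A minimal sketch of the deferred-mode bookkeeping shown above; a per-block `copied` flag stands in for the array's internal protection bitmap.

```python
class CoFAReplica:
    """Pointer-based full-volume replica, Copy on First Access mode."""

    def __init__(self, source_blocks):
        self.source = source_blocks
        self.target = [None] * len(source_blocks)
        self.copied = [False] * len(source_blocks)   # first-access flags

    def _copy_on_first_access(self, blk):
        # The first touch of a block, from either side, moves the PIT
        # data from source to target before the operation proceeds.
        if not self.copied[blk]:
            self.target[blk] = self.source[blk]
            self.copied[blk] = True

    def write_source(self, blk, data):
        self._copy_on_first_access(blk)   # preserve PIT image on target
        self.source[blk] = data

    def write_target(self, blk, data):
        self._copy_on_first_access(blk)
        self.target[blk] = data

    def read_target(self, blk):
        self._copy_on_first_access(blk)
        return self.target[blk]
```

Blocks that are never accessed are never copied, which is why a terminated CoFA session leaves the target incomplete; full copy mode, described next, removes this limitation by copying everything in the background.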
Full Copy Mode
• On session start, the entire contents of the source device are copied to the target device in the background
• If the replication session is terminated, the target contains all the original data from the source at the PIT of activation
  • The target can be used for restore and recovery
• In CoFA mode, the target holds only the data that was accessed before termination, so it cannot be used for restore and recovery
• Most vendor implementations provide the ability to track changes:
  • Made to the source or target
  • Enables incremental re-synchronization
Pointer Based Virtual Replication
• Targets do not hold actual data; they hold pointers to where the data is located
• The target requires only a small fraction of the size of the source volume
• A replication session is set up between the source and target devices
• Target devices are accessible immediately when the session is started
• At the start of the session, the target device holds pointers to data on the source device
• Typically recommended when changes to the source are less than 30%
Virtual Replication: Copy on First Write Example
[Diagram: on the first write to a source block after session start, the original data is copied to the save location and the virtual target device's pointer for that block is redirected from the source to the save location; the mechanics parallel the file system snapshot example above]
Tracking Changes to Source and Target
• Changes can occur to the source and target devices after the PIT has been created
• How, and at what granularity, should this be tracked?
  • Tracking changes bit by bit is too expensive
    • It would require an equivalent amount of storage to keep track
  • Instead, a vendor-chosen granularity is used, and a bitmap is created (one for the source and one for the target)
  • For example, with a granularity of 32 KB, a change to any bit in a 32 KB chunk flags the whole chunk as changed in the bitmap
  • For a 1 GB device, the bitmap then needs only 1 GB / 32 KB = 32768 bits = 4 KB of space
Tracking Changes to Source and Target: Bitmap
Source at PIT:      0 0 0 0 0 0 0 0
Target at PIT:      0 0 0 0 0 0 0 0
Source after PIT:   1 0 0 1 0 1 0 0
Target after PIT:   0 0 1 1 0 0 0 1
Logical OR (for resynchronization/restore): 1 0 1 1 0 1 0 1
(0 = unchanged, 1 = changed)
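The OR step can be sketched in a couple of lines; each list element stands for one chunk at the chosen granularity (32 KB in the example above).

```python
def chunks_to_copy(source_bitmap, target_bitmap):
    # A chunk must be copied during resynchronization/restore if it
    # changed on either side after the PIT: bitwise OR of the bitmaps.
    return [s | t for s, t in zip(source_bitmap, target_bitmap)]

source = [1, 0, 0, 1, 0, 1, 0, 0]        # changed on source after PIT
target = [0, 0, 1, 1, 0, 0, 0, 1]        # changed on target after PIT
print(chunks_to_copy(source, target))    # [1, 0, 1, 1, 0, 1, 0, 1]
```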
Restore/Restart Operation
• The source has a failure:
  • Logical corruption
  • Physical failure of the source devices
  • Failure of the production server
• Solution 1: restore data from the target to the source
  • The restore is typically done incrementally
  • Applications can be restarted even before synchronization is complete
• Solution 2: start production on the target
  • Resolve issues with the source while continuing operations on the target
  • After issue resolution, restore the latest data on the target back to the source
Restore/Restart Considerations
• Before a restore:
  • Stop all access to the source and target devices
  • Identify the target to be used for the restore
    • Based on RPO and data consistency
  • Perform the restore
• Before starting production on the target:
  • Stop all access to the source and target devices
  • Identify the target to be used for the restart
    • Based on RPO and data consistency
  • Create a "gold" copy of the target
    • As a precaution against further failures
  • Start production on the target
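The ordering of those steps can be summarized in a sketch; `array` and all of its methods are hypothetical placeholders, not a real vendor API.

```python
def restore_from_replica(array, source, target):
    # 1. Quiesce: stop all access to the source and target devices.
    array.stop_host_access(source)
    array.stop_host_access(target)

    # 2. Protect: create a "gold" copy of the target as a precaution
    #    against further failures during the restore.
    array.create_replica(target)

    # 3. Restore, typically incremental, from target back to source.
    array.restore(from_device=target, to_device=source)

    # 4. Applications can restart even before synchronization completes.
    array.start_host_access(source)
```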
Restore/Restart Considerations (cont.)
• Pointer-based full volume replicas
  • Restores can be performed to either the original source device or to any other device of like size
    • Restores to the original source can be incremental
    • A restore to a new device involves a full synchronization
• Pointer-based virtual replicas
  • Restores can be performed to the original source, or to any other device of like size, as long as the original source device is healthy
    • The target holds only pointers
      • Pointers to the source for data that has not been written to after the PIT
      • Pointers to the save location for data that was written after the PIT
    • Thus, to perform a restore to an alternate volume, the source must be healthy, so that data not yet copied over to the target can still be read
Creating Multiple Replicas
[Diagram: PIT replicas of the source created on four target devices at 06:00 A.M., 12:00 P.M., 06:00 P.M., and 12:00 A.M., giving multiple point-in-time copies across the day]
Local Replication Management: Array Based
• Replication management software resides on the storage array
• Provides an interface for easy and reliable replication management
• Two types of interface:
  • CLI
  • GUI
Lesson Summary
Key points covered in this lesson:
• Replication technologies
  • Host based
    • LVM-based mirroring
    • File system snapshot
  • Array based
    • Full volume mirroring
    • Pointer-based full volume replication
    • Pointer-based virtual replication
Chapter Summary
Key points covered in this chapter:
• Definition and possible uses of local replicas
• Consistency considerations when replicating file systems and databases
• Host-based replication
  • LVM-based mirroring, file system snapshot
• Storage array based replication
  • Full volume mirroring, pointer-based full volume and virtual replication
• Choice of technology
Additional task: research EMC replication products
Check Your Knowledge
• Describe the uses of a local replica in various business operations.
• How can consistency be ensured when replicating a database?
• What are the differences between full volume mirroring and pointer-based replicas?
• What is the key difference between full copy mode and deferred (CoFA) mode?
• What are the considerations when performing restore operations for each array replication technology?