Transactional Recovery and Checkpoints Chap. 10.6-10.8
Recovery is needed • Reading - involves buffers, paging and LRU replacement (handled by the operating system) • Writing - when are updates written to disk? Only when LRU replaces the page (using the dirty bit)? • no immediate durability • no atomicity • What if the system crashes while updates are still only in memory? • Solution: keep a log buffer with log entries, and a log file
Log Buffer and Files • Log entries (the recorded changes) are written from the log buffer to disk (the log file) at intervals • the log file is used to perform recovery • together, the log buffer and log file guarantee atomicity and durability
How? • update log entries: (W, T#, data_item, before_value, after_value), or (W, T#, RID, list of columns with before and after values) • there are also insert-tuple and delete-tuple log entries • why before values? • in case we must UNDO • why after values? • in case we must REDO
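As an aside, here is a minimal sketch of what such update, start and commit records might look like, assuming simple Python named tuples; the field names are illustrative, not the textbook's exact format.

    # Illustrative log record kinds: (S, T#), (W, T#, item, before, after), (C, T#)
    from collections import namedtuple

    Start  = namedtuple("Start",  ["tid"])
    Write  = namedtuple("Write",  ["tid", "item", "before", "after"])
    Commit = namedtuple("Commit", ["tid"])

    # Example: T1 changes A from 50 to 20
    entry = Write(tid=1, item="A", before=50, after=20)
    print(entry.before)   # before value, used if we must UNDO
    print(entry.after)    # after value, used if we must REDO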
When is the log written? • The log buffer is written to the log file when a T commits or when the log buffer is full • keep 2 concepts separate: • writing (update) entries to the log file vs. • writing the changed data to disk
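A minimal sketch of this flush policy, assuming a simple in-memory stand-in for the log file; the class name, capacity and methods are hypothetical.

    # Sketch: a log buffer forced to the log file on commit or when the buffer fills.
    class LogBuffer:
        def __init__(self, capacity=100):
            self.capacity = capacity
            self.entries = []      # log entries not yet on disk
            self.log_file = []     # stands in for the log file on disk

        def append(self, entry):
            self.entries.append(entry)
            if len(self.entries) >= self.capacity:   # buffer full
                self.flush()

        def commit(self, tid):
            self.append(("C", tid))
            self.flush()           # a commit forces the buffer to the log file

        def flush(self):
            self.log_file.extend(self.entries)       # write entries to the log file
            self.entries.clear()

Note that appending a log entry and changing the data page on disk remain two separate events; the sketch only models the first.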
Actions • For atomicity: 1) Rollback - make a list of committed T's and UNDO the actions of uncommitted T's • For durability: 2) Rollforward - REDO the actions of committed T's
Log Entries • The log keeps track of: 1) committed T's: (C, T#) 2) uncommitted T's: their writes are entered, but not their reads 3) the start of each transaction: (S, T#)
Example • Schedule: 1. R1(A,50) 2. W1(A,20) 3. R2(C,100) 4. W2(C,50) 5. C2 6. R1(B,50) 7. W1(B,80) 8. C1 • Log entries: 1. (S,1) 2. (W,1,A,50,20) 3. (S,2) 4. (W,2,C,100,50) 5. (C,2) 6. no entry (reads are not logged) 7. (W,1,B,50,80) 8. (C,1) • What if we crash on operation 7?
How to recover • Rollback: scan the log file backwards until its beginning is reached (only entries 1-5 reached disk, flushed when T2 committed) 1. (C,2) - put T2 on the committed list 2. (W,2,C,100,50) - T2 is on the committed list, do nothing 3. (S,2) - T2 no longer active 4. (W,1,A,50,20) - UNDO this update (restore A to 50): T1 is not on the committed list 5. (S,1) - T1 no longer active
How to recover cont'd • Rollforward: scan the log forwards from the beginning 6. (S,1) - no action 7. (W,1,A,50,20) - T1 uncommitted, no action 8. (S,2) - no action 9. (W,2,C,100,50) - REDO the update (set C to 50) 10. (C,2) - no action 11. done
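The two passes can be summarized in a short sketch (my own illustration, not the textbook's algorithm). It uses the five log entries that reached the log file before the crash; the initial disk state shown is an assumption about which pages happened to reach disk before the failure.

    # Recovery over the log that reached disk before the crash.
    # Tuples: ("S", tid), ("W", tid, item, before, after), ("C", tid).
    log = [("S", 1), ("W", 1, "A", 50, 20), ("S", 2),
           ("W", 2, "C", 100, 50), ("C", 2)]

    db = {"A": 20, "C": 100}       # assumed disk state at the crash
    committed = {e[1] for e in log if e[0] == "C"}

    # Rollback: scan backwards, UNDO writes of uncommitted transactions.
    for e in reversed(log):
        if e[0] == "W" and e[1] not in committed:
            db[e[2]] = e[3]        # restore the before value

    # Rollforward: scan forwards, REDO writes of committed transactions.
    for e in log:
        if e[0] == "W" and e[1] in committed:
            db[e[2]] = e[4]        # reapply the after value

    print(db)                      # {'A': 50, 'C': 50}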
Durability • When the log buffer is written to the log file, a read-after-write protocol on the disk writes guarantees that the commit fails if there is an error in the disk write • Once the write succeeds, we have durability (write the log to 2 different disks for an extra durability guarantee)
Problems • Potential problems with log files: 1) durability - a commit is successful only once the log file has been written to disk 2) atomicity - be sure dirty data pages are not written to disk before their log entries (what if there is a failure and the log entry was never written?), else we can never UNDO or REDO them if we have to
Solutions • Modify LRU replacement to ensure data is not written to disk before the corresponding log entries: • Option 1: an updated data page is not written to disk until commit - no UNDO processing is ever needed and no before images are needed in the log; but what if there are a lot of updates? we can't keep them all in memory • Option 2: write-ahead log guarantee (WAL) - every log entry gets a log sequence number (LSN); keep track of the smallest LSN added since the last write to the log file (i.e. the smallest LSN not yet on disk) and, for each data page, the largest LSN of an update to it (pgmax); a page cannot be written to disk unless pgmax < that LSN (as sketched below)
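A sketch of the WAL condition under the reading above: each data page remembers the LSN of its latest update (pgmax), the buffer manager remembers which LSNs have reached the log file, and a dirty page may only go to disk once its log entries are there. The names are illustrative.

    # Sketch of the write-ahead-log check before writing out a dirty page.
    class WalBufferManager:
        def __init__(self):
            self.next_lsn = 0
            self.flushed_lsn = -1      # largest LSN already in the log file
            self.log_buffer = []
            self.log_file = []
            self.pgmax = {}            # page -> largest LSN of an update to it

        def log_update(self, page, entry):
            lsn = self.next_lsn
            self.next_lsn += 1
            self.log_buffer.append((lsn, entry))
            self.pgmax[page] = lsn
            return lsn

        def flush_log(self):
            self.log_file.extend(self.log_buffer)
            self.log_buffer.clear()
            self.flushed_lsn = self.next_lsn - 1

        def write_page(self, page):
            # WAL rule: this page's log entries must be on disk before the page is.
            if self.pgmax.get(page, -1) > self.flushed_lsn:
                self.flush_log()
            # ... now it is safe to write the data page to disk ...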
Recovery • Checkpoints are used so we only have to roll back to a certain point • T's are short-lived (take a few seconds), therefore rollback is quick - only a few T's are active and need to be UNDONE • rollforward takes longer - many T's to REDO, must keep track of transaction starts, etc. • 3 approaches to checkpoints
Commit consistent 1. Commit-consistent checkpoint - taken when the count of log events exceeds some limit. Enter the checkpoint state: a) no new T's start until the checkpoint is complete b) DB processing continues until all existing T's have committed and their log entries are on disk c) the current log buffer is written to the log file, and all dirty pages are written to disk d) when a)-c) are complete, a special log entry CKPT is written (these steps are the same as an orderly shutdown of the system)
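A sketch of steps a)-d), assuming a hypothetical state object that tracks active transactions, the log buffer and the dirty pages; it is not the textbook's code.

    # Sketch of a commit-consistent checkpoint (steps a-d above).
    import time

    def commit_consistent_checkpoint(state):
        state.accepting_new_transactions = False     # a) no new T's may start
        while state.active_transactions:             # b) wait for existing T's to commit
            time.sleep(0.01)
        state.flush_log_buffer()                     # c) log buffer to the log file ...
        state.flush_dirty_pages()                    #    ... and all dirty pages to disk
        state.append_log(("CKPT",))                  # d) special CKPT log entry
        state.flush_log_buffer()
        state.accepting_new_transactions = True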
Commit consistent cont'd • To recover: roll back until the CKPT entry, then REDO the T's committed since the CKPT • Problems: • what if some transactions are long-lived? • we must wait a long time for them to finish, with no new T's allowed to start
Cache-consistent 2. Cache-consistent checkpoint - aims to reduce the wait when there are long transactions: a) no new T's are permitted to start b) existing T's cannot start any new ops c) the current log buffer is written to disk, and all dirty data pages are written to disk d) a log entry (CKPT, list of active T's) is written to the log file on disk
Cache-consistent cont'd • To recover (a sketch follows): • must roll back past the checkpoint • rollback is the same as for commit-consistent, then keep rolling back until every T on the CKPT list has been undone (if it had not yet committed) • Rollback - given the list of active T's in the CKPT entry, remove those that committed; what remains is the list of T's to UNDO • Rollforward - REDO all updates by committed T's, starting at the first op after the last CKPT
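A sketch of this recovery, reusing the illustrative tuple format from the earlier recovery example and assuming the checkpoint entry has the form ("CKPT", [list of active T#'s]).

    # Sketch: recovery with a cache-consistent checkpoint entry.
    def recover(log, db):
        committed = {e[1] for e in log if e[0] == "C"}
        ckpt_pos = max(i for i, e in enumerate(log) if e[0] == "CKPT")

        # T's to UNDO: uncommitted T's on the CKPT active list ...
        to_undo = set(log[ckpt_pos][1]) - committed
        # ... plus uncommitted T's that started after the CKPT.
        to_undo |= {e[1] for e in log[ckpt_pos + 1:]
                    if e[0] == "S" and e[1] not in committed}

        # Rollback: scan backwards until every T in to_undo is fully undone.
        for e in reversed(log):
            if not to_undo:
                break
            if e[0] == "W" and e[1] in to_undo:
                db[e[2]] = e[3]          # restore the before value
            elif e[0] == "S" and e[1] in to_undo:
                to_undo.discard(e[1])    # reached its start record

        # Rollforward: REDO committed updates, starting just after the last CKPT
        # (everything earlier was already on disk when the checkpoint was taken).
        for e in log[ckpt_pos + 1:]:
            if e[0] == "W" and e[1] in committed:
                db[e[2]] = e[4]          # reapply the after value
        return db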
Cache-consistent cont'd • Problems: the time to flush dirty pages to disk • Example: R1(A,10) W1(A) C1 R2(A,1) R3(B,2) W2(A,3) R4(C,5) CKPT W3(B,4) C3 R4(B,4) W4(C,6) C4 crash
Fuzzy 3. Fuzzy checkpoint - aims to reduce the time to perform a checkpoint by deferring page flushing to the next checkpoint (CKPTn+1); recovery makes use of the last 2 checkpoint events, CKPTn-1 and CKPTn: a) prior to taking checkpoint CKPTn, the pages that were still dirty at CKPTn-1 must already be on disk b) no new T's start - existing T's start no new ops c) the current log buffer is written to disk with a (CKPTn, list of active T's) entry d) the set of buffer pages that are dirty at CKPTn is noted; a background process makes sure their changes are out on disk by the next checkpoint
Fuzzy cont'd • Rollforward starts with the first log entry following the 2nd-to-last checkpoint: log ... CKPTn-1 ...... CKPTn (start at CKPTn-1) • the set of pages dirty since CKPTn-1 will be written to disk by CKPTn+1 (the background process sees to this) • this avoids buffer flushing at the time of CKPTn
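A sketch of the fuzzy-checkpoint bookkeeping under the description above: each checkpoint notes the currently dirty pages, a background process writes them out before the next checkpoint, and recovery rolls forward from the second-to-last checkpoint. The buffer-manager interface here is assumed, not taken from the textbook.

    # Sketch of fuzzy checkpoint bookkeeping (illustrative only).
    class FuzzyCheckpointer:
        def __init__(self, buffer_manager, log):
            self.bm = buffer_manager    # assumed to expose dirty_pages() and flush(pages)
            self.log = log
            self.previous_dirty = set() # pages noted at the previous checkpoint

        def take_checkpoint(self, active_tids):
            # Pages noted at the previous checkpoint must be on disk by now;
            # normally the background flusher has already written them.
            self.bm.flush(self.previous_dirty)
            self.log.append(("CKPT", list(active_tids)))   # no full page flush here
            # Note the pages dirty now; the background process writes them
            # out some time before the next checkpoint.
            self.previous_dirty = set(self.bm.dirty_pages())

    def redo_start_position(log):
        # Rollforward starts at the first entry after the 2nd-to-last CKPT.
        ckpts = [i for i, e in enumerate(log) if e[0] == "CKPT"]
        return ckpts[-2] + 1 if len(ckpts) >= 2 else 0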