Availability in CMPs
By Eric Hill and Pranay Koka
Motivation
• RAS is an important feature for commercial servers
  • Server downtime is equivalent to lost money
• Investigate the feasibility of previously proposed SMP availability schemes in the context of CMPs
Availability
• Focused on Backward Error Recovery (BER) schemes
  • The system periodically checkpoints its state
  • Upon fault detection, it rolls back to a previously validated checkpoint (sketched below)
• Design choices
  • When to checkpoint
  • How to checkpoint
  • Where to store checkpointed state
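The slides give no code, but a minimal sketch of the BER control loop they describe may help. Everything here (the `BERSystem` class, `execute_interval`, `fault_detected`) is a hypothetical placeholder for hardware mechanisms, not part of ReVive or SafetyNet.

```python
# Minimal sketch of a Backward Error Recovery (BER) loop; illustrative only.
# take_checkpoint/rollback stand in for hardware mechanisms, not for any
# specific scheme discussed in these slides.

class BERSystem:
    def __init__(self):
        self.checkpoint = None              # most recent validated checkpoint

    def take_checkpoint(self, state):
        self.checkpoint = dict(state)       # snapshot processor + memory state

    def rollback(self):
        return dict(self.checkpoint)        # restore the last validated checkpoint


def run(system, state, execute_interval, fault_detected, intervals):
    system.take_checkpoint(state)
    for _ in range(intervals):
        state = execute_interval(state)     # run one checkpoint interval
        if fault_detected(state):
            state = system.rollback()       # discard work since the last checkpoint
        else:
            system.take_checkpoint(state)   # validate and commit the new checkpoint
    return state
```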
BER Taxonomy (When)
• Checkpoint consistency (see the sketch after this list)
  • Global
    • Processors synchronize in physical time to create checkpoints
    • Synchronization latency could be expensive
  • Coordinated Local
    • Processors synchronize in logical time to create checkpoints
  • Uncoordinated Local
    • Processors create checkpoints without synchronization
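To make the physical-time versus logical-time distinction concrete, the following sketch (my own illustration, not any scheme's actual protocol; the 100K-cycle interval is a hypothetical value echoing the experiments later) shows how processors can agree on checkpoint boundaries purely in logical time, with no barrier:

```python
# Illustrative sketch of the "when" choice for coordinated local checkpointing.
# Processors agree on checkpoint boundaries in logical time, so no
# physical-time barrier is required.

CHECKPOINT_INTERVAL = 100_000   # logical cycles per interval (hypothetical value)

def checkpoint_number(logical_time):
    # All processors map a logical timestamp to the same checkpoint number.
    return logical_time // CHECKPOINT_INTERVAL

def crossed_boundary(prev_logical_time, logical_time):
    # A processor takes a local checkpoint whenever it crosses an interval boundary.
    return checkpoint_number(logical_time) != checkpoint_number(prev_logical_time)
```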
BER Taxonomy (How)
• Flushing
  • At checkpoint creation, processor and cache state is dumped to memory
• Incremental Logging (sketched below)
  • Within a checkpoint interval, changes to cache state are recorded; processor state is logged at checkpoint interval boundaries
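A hedged sketch of incremental logging at cache-line granularity, assuming a simple undo-log design (illustrative only, not the actual SafetyNet hardware): the old value of a line is recorded on its first write within an interval, and rollback replays the log in reverse.

```python
# Illustrative undo-log sketch for incremental logging of cache state.

class IncrementalLog:
    def __init__(self):
        self.entries = []        # (line_address, old_value) pairs
        self.logged = set()      # lines already logged in this interval

    def on_write(self, cache, addr, new_value):
        if addr not in self.logged:                  # first write to this line this interval
            self.entries.append((addr, cache.get(addr)))
            self.logged.add(addr)
        cache[addr] = new_value

    def commit_checkpoint(self):
        self.entries.clear()                         # checkpoint validated: drop the log
        self.logged.clear()

    def rollback(self, cache):
        for addr, old_value in reversed(self.entries):
            cache[addr] = old_value                  # undo changes since the checkpoint
        self.commit_checkpoint()
```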
BER Taxonomy (Where)
• On-Chip
  • Consumes transistors which could be used for caching or computation
  • Logging/restoring of checkpoint state is easier
• Off-Chip
  • No storage overhead on the CMP itself
  • Logging/restoring of state consumes bandwidth
ReVive
• Global checkpoint consistency
  • Synchronization time should be reduced for a single CMP
• Flushing of data
  • Consumes bandwidth
• Off-chip checkpoint storage
SafetyNet
• Coordinated local checkpointing
  • A global logical clock is distributed across the system
• Incremental logging of data
  • Incrementally records changes in cache state
• On-chip storage in Checkpoint Log Buffers (CLBs)
  • Consumes transistors which could be used for processors/cache
Experiments
• Estimation of CLB size for SafetyNet in a CMP
  • Placed counters in Ruby to record update-actions
  • CLB overhead = (CLB entries created per interval) × (5 outstanding checkpoints) × (72 bytes per entry); a worked example follows below
• Estimation of ReVive flushing overhead
  • Counted how many cache lines need to be flushed per interval
• Estimation of I/O buffering overhead
  • Measured disk and network I/O for two workloads by modifying the Linux kernel
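The CLB formula above is easy to sanity-check in code. The helper below simply evaluates it; the 10,000-entries-per-interval input is hypothetical, chosen only to show how quickly per-interval logging adds up to megabytes of on-chip storage.

```python
# CLB storage estimate from the slide's formula:
#   overhead = (CLB entries created per interval) * (outstanding checkpoints) * (bytes per entry)

def clb_overhead_bytes(entries_per_interval,
                       outstanding_checkpoints=5,
                       bytes_per_entry=72):
    return entries_per_interval * outstanding_checkpoints * bytes_per_entry

# Hypothetical example: 10,000 CLB entries created in one 100K-cycle interval.
example = clb_overhead_bytes(10_000)
print(f"{example} bytes = {example / 2**20:.2f} MB")   # 3600000 bytes = 3.43 MB
```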
Design Assumptions
• Assumed an inclusive two-level memory hierarchy with write-back L1 and L2 caches
  • Logging must be done at both levels of the memory hierarchy
• Write-through L1 caches would only require logging at one level (see the sketch below)
  • The L2 has knowledge of every store performed in the private L1s
• Exclusion also requires logging at both levels
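The write-policy assumption can be illustrated with a toy model (mine, not the paper's implementation): with write-through L1s every store also reaches the L2, so one log at the L2 suffices, while with write-back L1s a store may live only in the L1 until eviction and must be logged there as well.

```python
# Illustrative-only sketch of why the L1 write policy determines where logging
# is needed. Caches are modeled as plain dicts; logs hold (addr, old_value).

def store_write_through(l1, l2, l2_log, addr, value):
    l2_log.append((addr, l2.get(addr)))   # the L2 sees (and can log) every store
    l1[addr] = value
    l2[addr] = value                      # write propagated to the L2 immediately

def store_write_back(l1, l1_log, addr, value):
    l1_log.append((addr, l1.get(addr)))   # must log at the L1: the L2 won't see this store yet
    l1[addr] = value                      # stays dirty in the L1 until eviction
```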
SafetyNet Results
(Figure: 16 MB L2 cache, 100K-cycle checkpoint interval, 25 transactions)
SafetyNet Results
• L2 CLB overhead is normalized to transactions
  • The CLB size calculation is related to the CLB overhead per checkpoint interval
• As more processors are added, throughput (transactions per interval) increases
• Although the overhead cost per transaction decreases as more processors are added to a CMP, the absolute cost still increases
Performance
• CLBs must be sized to handle the bursty case, not the average case
  • The worst case was 4.5 MB of overhead, for the 16-processor zeus workload
• Resized the L2 cache to 12 MB by removing one way of associativity
  • Resulted in an 11.75% reduction in IPC
• A large number of cache lines need to be flushed by ReVive
  • It should suffer more performance degradation than SafetyNet
What About I/O?
• Conventional wisdom: buffer I/O until the checkpoint that produced it is validated
• Buffering requirements are a function of validation latency (estimated below)
• (Figure: measured I/O data rates on a 1-processor system)
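The buffering requirement implied by these bullets is roughly the I/O data rate multiplied by the validation latency. The sketch below uses made-up numbers purely to show the scale; they are not measured values from the study.

```python
# Hypothetical illustration of I/O buffering: output must be held until the
# checkpoint that produced it is validated, so the buffer must absorb roughly
#   (I/O data rate) * (validation latency).

def io_buffer_bytes(io_rate_bytes_per_sec, validation_latency_sec):
    return io_rate_bytes_per_sec * validation_latency_sec

# Made-up example: 10 MB/s of combined disk + network I/O and a 1 ms
# validation latency need only about 10 KB of buffering.
rate = 10 * 2**20          # 10 MB/s (hypothetical)
latency = 1e-3             # 1 ms validation latency (hypothetical)
print(f"{io_buffer_bytes(rate, latency) / 2**10:.1f} KB")   # ~10.2 KB
```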
Conclusions
• SafetyNet should perform better than ReVive
• Storage overhead is still a problem
• For small CMPs, I/O buffering should not be a problem
  • This may not be true for CMPs with large numbers of processors
• More workloads still need to be studied