
SWAT: Designing Resilient Hardware by Treating Software Anomalies


Presentation Transcript


  1. SWAT: Designing Resilient Hardware by Treating Software Anomalies Lei Chen, Byn Choi, Xin Fu, Siva Hari, Man-lap (Alex) Li, Pradeep Ramachandran, Swarup Sahoo, Rob Smolinski, Sarita Adve, Vikram Adve, Shobha Vasudevan, Yuanyuan Zhou Department of Computer Science University of Illinois at Urbana-Champaign swat@cs.illinois.edu

  2. Motivation • Hardware will fail in the field for several reasons: transient errors (high-energy particles), wear-out (devices grow weaker), design bugs, and so on • Need in-field detection, diagnosis, recovery, and repair • Reliability problem is pervasive across many markets • Traditional redundancy solutions (e.g., nMR) are too expensive ⇒ Need low-cost solutions for multiple failure sources • Must incur low area, performance, and power overhead

  3. Observations • Need to handle only hardware faults that propagate to software • Fault-free case remains common, must be optimized ⇒ Watch for software anomalies (symptoms) • Zero to low overhead “always-on” monitors ⇒ Diagnose cause after symptom detected • May incur high overhead, but rarely invoked ⇒ SWAT: SoftWare Anomaly Treatment

  4. SWAT Framework Components • [Timeline figure: Fault → Error → Symptom detected between two checkpoints, followed by Recovery, Diagnosis, Repair] • Detection: symptoms of software misbehavior • Recovery: checkpoint and rollback • Diagnosis: rollback/replay on multicore • Repair/reconfiguration: redundant, reconfigurable hardware • Flexible control through firmware

  5. Advantages of SWAT • Handles all faults that matter • Oblivious to low-level failure modes and masked faults • Low, amortized overheads • Optimize for common case, exploit SW reliability solutions • Customizable and flexible • Firmware control adapts to specific reliability needs • Holistic systems view enables novel solutions • Synergistic detection, diagnosis, recovery solutions • Beyond hardware reliability • Long term goal: unified system (HW+SW) reliability • Potential application to post-silicon test and debug

  6. SWAT Contributions • Very low-cost detectors [ASPLOS’08, DSN’08] with low SDC rate and latency • Accurate fault modeling [HPCA’09] • In-situ diagnosis [DSN’08] • Multithreaded workloads [MICRO’09] • Application-aware SWAT: even lower SDC rate and latency • [Timeline figure: Fault → Error → Symptom detected between checkpoints; Recovery, Diagnosis, Repair]

  7. Outline • Motivation • Detection • Recovery analysis • Diagnosis • Conclusions and future work

  8. Simple Fault Detectors [ASPLOS ’08] • Fatal traps: division by zero, RED state, etc. • App abort: application aborts due to fault • Kernel panic: OS enters panic state due to fault • Hangs: simple HW hang detector • High OS: abnormally high contiguous OS activity, monitored by SWAT firmware • Simple detectors that observe anomalous SW behavior • Very low hardware area and performance overhead
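
A minimal sketch of how two of these "always-on" detectors could be expressed, assuming a hang heuristic based on a small set of repeating branch PCs and an OS-activity counter. The thresholds, window sizes, and exact heuristics here are illustrative assumptions, not the published SWAT hardware design.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>

class SymptomDetectors {
public:
    // Hang heuristic: if every branch in a long window touches only a tiny
    // set of distinct branch PCs, execution is likely stuck in a faulty loop.
    bool observe_branch(uint64_t branch_pc) {
        distinct_pcs_.insert(branch_pc);
        if (++window_branches_ == kWindow) {
            bool hang = distinct_pcs_.size() <= kMaxDistinct;
            window_branches_ = 0;
            distinct_pcs_.clear();
            return hang;  // symptom: likely hang
        }
        return false;
    }

    // High-OS heuristic: an abnormally long run of contiguous privileged
    // instructions suggests a corrupted kernel path.
    bool observe_instr(bool privileged) {
        contiguous_os_ = privileged ? contiguous_os_ + 1 : 0;
        return contiguous_os_ > kOsThreshold;  // symptom: high OS activity
    }

private:
    static constexpr uint64_t kWindow = 100000;      // assumed window length
    static constexpr size_t kMaxDistinct = 16;       // assumed loop-size bound
    static constexpr uint64_t kOsThreshold = 30000;  // assumed OS-run bound
    uint64_t window_branches_ = 0;
    uint64_t contiguous_os_ = 0;
    std::unordered_set<uint64_t> distinct_pcs_;
};
```

In hardware both checks reduce to a counter and a small table, which is why the slide can claim very low area and performance overhead.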

  9. Evaluating Fault Detectors • [Methodology figure: inject fault, run timing simulation for 10M instructions; if no symptom within 10M instructions, switch to functional simulation and run to completion → masked, or potential Silent Data Corruption (SDC)] • Simulate OpenSolaris on an out-of-order processor • GEMS timing models + Simics full-system simulator • I/O- and compute-intensive applications • Client-server: apache, mysql, squid, sshd • All SPEC 2K C/C++: 12 integer, 4 FP • µarchitecture-level fault injections (single-fault model) • Stuck-at and transient faults in 8 µarch units • ~18,000 total faults ⇒ statistically significant

  10. Metrics for Fault Detection • Potential SDC rate • Undetected fault that changes app output • Output change may or may not be important • Detection latency • Latency from architecture-state corruption to detection • Architecture state = registers + memory • Definition refined later (slide 22) • High detection latency impedes recovery

  11. SDC Rate of Simple Detectors: SPEC, permanents • 0.6% potential SDC rate for permanent faults in SPEC, excluding the FPU • Faults in the FPU need different detectors • FPU faults mostly corrupt only data

  12. Potential SDC Rate • SWAT detectors highly effective for hardware faults • Low potential SDC rates across workloads

  13. Detection Latency • 90% of the faults detected in under 10M instructions • Existing work claims these are recoverable w/ HW chkpting • More recovery analysis follows later

  14. Exploiting Application Support for Detection • Techniques inspired by software bug detection • Likely program invariants: iSWAT • Instrumented binary, no hardware changes • <5% performance overhead on x86 processors • Detecting out-of-bounds addresses • Low hardware overhead, near-zero performance impact • Exploiting application-level resiliency
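
For the iSWAT bullet, a hedged sketch of a likely-range invariant: training runs record the observed [min, max] of selected program values, and the instrumented binary checks them at runtime, raising a symptom on violation. The monitored variable, bounds, and reporting path are purely illustrative.

```cpp
#include <cstdint>
#include <cstdio>

struct RangeInvariant {
    int64_t lo, hi;  // bounds learned from fault-free training runs
    bool check(int64_t v) const { return v >= lo && v <= hi; }
};

// Compiler-inserted check at a monitored program point (illustrative names):
static const RangeInvariant len_inv = {0, 4096};  // learned: 0 <= len <= 4096

void process_packet(int64_t len) {
    if (!len_inv.check(len)) {
        // Likely-invariant violated: report a symptom for diagnosis rather
        // than aborting outright (it may be a benign, unseen input).
        std::printf("iSWAT symptom: len=%lld outside learned range\n",
                    static_cast<long long>(len));
    }
    // ... normal processing ...
}
```

Because the invariants are only *likely*, violations feed the diagnosis stage, which filters false positives; this keeps the runtime check cheap (a compare and branch) and consistent with the <5% overhead claim.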

  16. Low-Cost Out-of-Bounds Detector • [Figure: app address space layout, from reserved and stack at the top through libraries, heap, globals, and app code, with empty regions between] • Sophisticated detectors for security and software bugs track each object accessed and validate pointer accesses, but require full-program analysis and changes to the binary • Bad addresses from HW faults are more obvious: invalid pages, unallocated memory, etc. • Low-cost out-of-bounds detector: monitor boundaries of heap, stack, globals • Address beyond these bounds ⇒ HW fault • SW communicates boundaries to HW • HW enforces checks on ld/st addresses
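
A minimal sketch of the check itself, assuming a three-region model (globals, heap, stack) whose bounds software publishes to hardware; the struct layout and region set are illustrative, not the actual HW/SW interface.

```cpp
#include <cstdint>

struct MemBounds {
    uint64_t globals_lo, globals_hi;  // set by SW at load time
    uint64_t heap_lo,    heap_hi;     // updated by the allocator (e.g., on sbrk/mmap)
    uint64_t stack_lo,   stack_hi;    // updated on stack growth
};

// Conceptually evaluated by hardware on every load/store address: an address
// outside all legal app regions is flagged as a likely HW fault.
inline bool out_of_bounds(const MemBounds& b, uint64_t addr) {
    bool legal = (addr >= b.globals_lo && addr < b.globals_hi) ||
                 (addr >= b.heap_lo    && addr < b.heap_hi)    ||
                 (addr >= b.stack_lo   && addr < b.stack_hi);
    return !legal;  // symptom: raise to SWAT firmware for diagnosis
}
```

In hardware this is a handful of range comparators on the load/store path, which is why the slide can claim near-zero performance impact.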

  17. Impact of Out-of-Bounds Detector • [Charts: potential SDC rates for server and SPEC workloads, with and without the detector] • Lower potential SDC rate in server workloads: 39% lower for permanents, 52% lower for transients • For SPEC workloads, the impact is on detection latency

  18. Application-Aware SDC Analysis • Potential SDC = undetected fault that corrupts app output • But many applications can tolerate faults • Client may detect the fault and retry the request • Application may perform fault-tolerant computations • E.g., same-cost place & route, acceptable PSNR, etc. • Not all potential SDCs are true SDCs • For each application, define a notion of fault tolerance • SWAT detectors cannot, and should not, detect such acceptable changes
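
A hedged sketch of this classification for numeric outputs: an undetected fault is a true SDC only if the output degrades beyond what the application's own fidelity metric tolerates. The 1% threshold mirrors the later slide; the element-wise relative-error metric is an assumption, since each application defines its own notion of acceptable degradation.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

enum class Outcome { Masked, ToleratedSDC, TrueSDC };

Outcome classify(const std::vector<double>& golden,
                 const std::vector<double>& faulty,
                 double tolerance = 0.01) {            // e.g., 1% degradation
    if (golden.size() != faulty.size()) return Outcome::TrueSDC;
    double worst = 0.0;
    for (size_t i = 0; i < golden.size(); ++i) {
        double denom = std::max(std::fabs(golden[i]), 1e-12);
        worst = std::max(worst, std::fabs(faulty[i] - golden[i]) / denom);
    }
    if (worst == 0.0)       return Outcome::Masked;
    if (worst <= tolerance) return Outcome::ToleratedSDC;
    return Outcome::TrueSDC;
}
```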

  19. Application-Aware SDCs for Server • 46% of potential SDCs are tolerated by simple retry • Only 21 remaining SDCs out of 17,880 injected faults • Most detectable through application-level validity checks

  20. Application-Aware SDCs for SPEC • Only 62 faults show >0% degradation from golden output • Only 41 injected faults are SDCs at >1% degradation • 38 from apps we conservatively classify as fault intolerant • Chess playing apps, compilers, parsers, etc.

  21. Reducing Potential SDCs Further (future work) • Explore application-specific detectors • Compiler-assisted invariants like iSWAT • Application-level checks • Need to fundamentally understand why and where SWAT works • SWAT evaluation is largely empirical • Build models to predict the effectiveness of SWAT • Develop new low-cost symptom detectors • Extract a minimal set of detectors for given sets of faults • Reliability vs. overhead trade-off analysis

  22. Reducing Detection Latency: New Definition • [Timeline figure: Fault → bad arch state → bad SW state → Detection; the old latency starts at arch-state corruption, the new latency at SW-state corruption; recoverable checkpoints marked along the way] • SWAT relies on checkpoint/rollback for recovery • Detection latency dictates fault recovery: checkpoint fault-free ⇒ fault recoverable • Traditional defn. = arch-state corruption to detection • But software may mask some corruptions! • New defn. = unmasked arch-state corruption to detection

  23. Measuring Detection Latency • [Timeline figure: Fault → bad arch state (effect masked) → bad SW state → symptom; rollback & replay from successive checkpoints; new latency spans SW-state corruption to detection] • New detection latency = SW-state corruption to detection • But identifying SW-state corruption is hard! • Need to know how the faulty value is used by the application • If the faulty value affects output, then SW state is corrupted • Measure latency by rolling back to older checkpoints • Only for analysis, not required in a real system
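
An analysis-only sketch of that measurement loop: roll back to successively older checkpoints and replay; the newest checkpoint from which replay runs symptom-free bounds the onset of SW-state corruption. The replay predicate stands in for a full-system simulator hook and is an assumption.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Checkpoints are ordered oldest..newest; the symptom fired after the last.
// Returns the id of the newest checkpoint that still replays cleanly, or -1
// if even the oldest checkpoint replays with a symptom.
int64_t newest_clean_checkpoint(
    const std::vector<uint64_t>& checkpoints,
    const std::function<bool(uint64_t)>& replay_is_clean) {  // simulator hook
    for (size_t i = checkpoints.size(); i-- > 0; )
        if (replay_is_clean(checkpoints[i]))
            return static_cast<int64_t>(checkpoints[i]);
    return -1;
}
// New detection latency = instructions from that checkpoint's first unmasked
// corruption to the detection point.
```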

  26. Detection Latency - SPEC • Measuring new latency important to study recovery • New techniques significantly reduce detection latency • >90% of faults detected in <100K instructions • Reduced detection latency impacts recoverability

  27. Detection Latency - Server • Measuring new latency important to study recovery • New techniques significantly reduce detection latency • >90% of faults detected in <100K instructions • Reduced detection latency impacts recoverability

  28. Implications for Fault Recovery • Checkpointing • Record pristine arch state for recovery • Periodic register snapshots, log memory writes • I/O buffering • Buffer external events until known to be fault-free • HW buffer records device reads, buffers device writes • “Always-on” ⇒ must incur minimal overhead
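
A sketch of undo-log style memory checkpointing in the spirit of the SafetyNet/ReVive-like schemes the talk compares against: log the old value on the first write to each location per interval, so rollback restores pre-checkpoint state. The word granularity and data structures are illustrative assumptions.

```cpp
#include <cstdint>
#include <unordered_map>

class UndoLog {
public:
    // Called on every store; memory[] models the address space (absent
    // locations read as zero in this sketch).
    void on_store(std::unordered_map<uint64_t, uint64_t>& memory,
                  uint64_t addr, uint64_t new_val) {
        if (!logged_.count(addr))          // first write this interval:
            logged_[addr] = memory[addr];  // save the old value once
        memory[addr] = new_val;
    }

    // Rollback: restore every logged location to its pre-interval value.
    void rollback(std::unordered_map<uint64_t, uint64_t>& memory) {
        for (auto& [addr, old_val] : logged_) memory[addr] = old_val;
        logged_.clear();
    }

    // Commit at the next checkpoint: old values are no longer needed.
    void commit() { logged_.clear(); }

private:
    std::unordered_map<uint64_t, uint64_t> logged_;
};
```

Shorter checkpoint intervals shrink this log (fewer first-writes per interval), which is the effect the next slide quantifies.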

  29. Overheads from Memory Logging • New techniques reduce chkpt overheads by over 60% • Chkpt interval reduced to 100K from millions of instrs.

  30. Overheads from Output Buffering • New techniques reduce output buffer size to near-zero • <5KB buffer for 100K chkpt interval (buffer for 2 chkpts) • Near-zero overheads at 10K interval
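A sketch of the output-buffering idea: device writes are held in a small buffer and released only once the interval that produced them commits, i.e., is known fault-free. The interfaces and the emit callback are assumptions.

```cpp
#include <cstdint>
#include <queue>
#include <vector>

struct DeviceWrite { uint64_t addr; std::vector<uint8_t> data; };

class OutputBuffer {
public:
    void on_device_write(DeviceWrite w) { pending_.push(std::move(w)); }

    // Checkpoint committed: everything buffered so far is safe to emit.
    template <typename EmitFn>                 // e.g., the actual MMIO write
    void on_checkpoint_commit(EmitFn emit) {
        while (!pending_.empty()) { emit(pending_.front()); pending_.pop(); }
    }

    // Rollback: output was never externally visible, so recovery simply
    // discards it; the replayed execution regenerates the writes.
    void on_rollback() { pending_ = {}; }

private:
    std::queue<DeviceWrite> pending_;
};
```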

  31. Low-Cost Fault Recovery (future work) • New techniques significantly reduce recovery overheads • 60% reduction in memory logs, near-zero output buffer • But still do not enable ultra-low-cost fault recovery • ~400KB of overhead for hardware memory logs (SafetyNet) • High performance impact for in-memory logs (ReVive) • Need ultra-low-cost recovery scheme at short intervals • Even shorter latencies • Checkpoint only state that matters • Application-aware insights: transactional apps, recovery domains for OS, …

  32. Fault Diagnosis • [Figure: a symptom may stem from a permanent fault, a transient fault, or a SW bug] • Symptom-based detection is cheap, but • May incur long latency from activation to detection • Difficult to diagnose the root cause of the fault • Goal: diagnose the fault with minimal hardware overhead • Rarely invoked ⇒ higher perf overhead acceptable

  33. SWAT Single-threaded Fault Diagnosis [Li et al., DSN ‘08] • [Figure: traditional DMR keeps two cores (P1, P2) always on and compares; synthesized DMR pairs the faulty core with a fault-free core only after a fault] • Always-on DMR is expensive ⇒ synthesize DMR only on a fault • First, diagnosis for a single-threaded workload on one core • Multithreaded w/ multicore later (several new challenges) • Key ideas • Single-core fault model; a fault-free core is available in the multicore • Chkpt/replay for recovery ⇒ replay on good core, compare • Synthesizing DMR, but only for diagnosis

  34. SW Bug vs. Transient vs. Permanent • Rollback/replay on same/different core • Watch if symptom reappears • Symptom detected ⇒ rollback on faulty core • No symptom ⇒ transient or non-deterministic s/w bug ⇒ continue execution • Symptom ⇒ deterministic s/w bug or permanent h/w fault ⇒ rollback/replay on good core • No symptom ⇒ permanent h/w fault, needs repair! • Symptom ⇒ deterministic s/w bug (send to s/w layer)
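
A compact sketch of that decision tree. The two replay predicates stand in for firmware-driven rollback/replay and are assumptions, not the SWAT firmware API.

```cpp
#include <functional>

enum class Diagnosis {
    TransientOrNondetSwBug,  // replay on same core is symptom-free
    PermanentHwFault,        // symptom recurs on faulty core only -> repair
    DeterministicSwBug       // symptom reproduces on a good core -> to SW
};

Diagnosis diagnose(const std::function<bool()>& replay_same_core_symptom,
                   const std::function<bool()>& replay_good_core_symptom) {
    if (!replay_same_core_symptom())
        return Diagnosis::TransientOrNondetSwBug;  // continue execution
    // Symptom recurs: deterministic SW bug or permanent HW fault.
    return replay_good_core_symptom()
               ? Diagnosis::DeterministicSwBug
               : Diagnosis::PermanentHwFault;      // needs repair
}
```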

  35. µarch-level Fault Diagnosis • [Flowchart: symptom detected → diagnosis → software bug | transient fault | permanent fault → microarchitecture-level diagnosis → “Unit X is faulty”]

  36. Trace-Based Fault Diagnosis (TBFD) • µarch-level fault diagnosis using rollback/replay • Key: the execution caused the symptom ⇒ the trace activates the fault • Deterministically replay the trace on faulty and fault-free cores • Divergence ⇒ faulty hardware was used ⇒ diagnosis clues • Diagnose faults to µarch units of the processor • Check µarch-level invariants in several parts of the processor • Diagnosis in out-of-order logic (meta-datapath) is complex
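
A sketch of the divergence detection at the heart of this idea: replay the symptom-causing trace on the suspect and a fault-free core and compare committed instruction side-effects; the first mismatch points at the resources the faulty core used. The real TBFD compares richer µarch-level state; this record format is an illustrative assumption.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct CommitRecord {
    uint64_t pc;          // committed instruction address
    uint64_t dest_value;  // architectural result written
};

// Returns the index of the first diverging commit, or -1 if traces match.
int64_t first_divergence(const std::vector<CommitRecord>& faulty_core,
                         const std::vector<CommitRecord>& good_core) {
    size_t n = std::min(faulty_core.size(), good_core.size());
    for (size_t i = 0; i < n; ++i) {
        if (faulty_core[i].pc != good_core[i].pc ||
            faulty_core[i].dest_value != good_core[i].dest_value)
            return static_cast<int64_t>(i);  // clue: inspect the µarch units
    }                                        // this instruction exercised
    return faulty_core.size() == good_core.size()
               ? -1
               : static_cast<int64_t>(n);    // length mismatch also diverges
}
```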

  37. Trace-Based Fault Diagnosis: Evaluation • Goal: diagnose faults at reasonable latency • Faults diagnosed in 10 SPEC workloads • ~8,500 detected faults (98% of unmasked faults) • Results • 98% of detections successfully diagnosed • 91% diagnosed within 1M instructions (~0.5 ms on a 2GHz processor)

  38. SWAT Multithreaded Fault Diagnosis [Hari et al., MICRO ‘09] • [Figure: a fault on Core 1 propagates through a store/load via shared memory; the symptom is detected on fault-free Core 2] • Challenge 1: deterministic replay involves high overhead • Challenge 2: multithreaded apps share data among threads • Symptom-causing core may not be faulty • No known fault-free core in the system

  39. mSWAT Diagnosis - Key Ideas • Challenges: multithreaded applications; full-system deterministic replay; no known good core • Key ideas: isolated deterministic replay; emulated TMR • [Figure: threads TA–TD run on cores A–D; each thread’s execution is captured for isolated replay]

  40. mSWAT Diagnosis - Key Ideas (contd.) • [Figure: each thread is re-executed on other cores, rotating the thread-to-core mapping to emulate TMR without a known-good core]
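
A hedged sketch of the emulated-TMR scheduling: with no known-good core, each thread's captured trace is replayed in isolation on other cores so that every thread runs on three distinct cores, and a core is diagnosed faulty as the odd one out. The simple rotation rule below is an illustrative assumption, not the published algorithm.

```cpp
#include <vector>

// schedule[r][c] = index of the thread replayed on core c in round r.
// Round 0 is the original assignment (thread i on core i); later rounds
// rotate threads across cores so each thread sees 3 distinct cores.
std::vector<std::vector<int>> tmr_schedule(int num_cores) {
    std::vector<std::vector<int>> schedule;
    for (int r = 0; r < 3; ++r) {
        std::vector<int> round(num_cores);
        for (int c = 0; c < num_cores; ++c)
            round[c] = (c + r) % num_cores;  // rotate threads across cores
        schedule.push_back(round);
    }
    return schedule;  // three executions per thread -> emulated TMR
}
```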

  41. mSWAT Diagnosis: Evaluation • Diagnose detected permanent faults in multithreaded apps • Goal: identify the faulty core; TBFD for µarch-level diagnosis • Challenges: non-determinism, no known fault-free core • ~4% of faults detected on a fault-free core • Results • 95% of detected faults diagnosed • All detections on a fault-free core diagnosed • 96% of diagnosed faults require <200KB buffers • Can be stored in lower-level cache ⇒ low HW overhead • SWAT diagnosis can work with other symptom detectors

  42. Summary: SWAT works! • Very low-cost detectors [ASPLOS’08, DSN’08] with low SDC rate and latency • Accurate fault modeling [HPCA’09] • In-situ diagnosis [DSN’08] • Multithreaded workloads [MICRO’09] • Application-aware SWAT: even lower SDC rate and latency • [Timeline figure: Fault → Error → Symptom detected between checkpoints; Recovery, Diagnosis, Repair]

  43. Future Work • Formalization of when/why SWAT works • Near-zero-cost recovery • More server/distributed applications • App-level, customizable resilience • Other cores and off-core parts in multicore • Other fault models • Prototyping SWAT on FPGA w/ Michigan • Interaction with safe programming • Unifying with s/w resilience
