
Dependable Computing Systems

This presentation provides an overview of dependable computing systems and surveys fault tolerance techniques, including software fault tolerance. It highlights the role of fault tolerance in maintaining system availability and reliability, and shows how to mask and repair faults.


Presentation Transcript


  1. Dependable Computing Systems
     Jim Gray, Microsoft, Gray@Microsoft.com
     Andreas Reuter, International University, Andreas.Reuter@i-u.de

     Course schedule:
              Mon         Tue           Wed            Thur            Fri
     9:00     Overview    TP mons       Log            Files & Buffers B-tree
     11:00    Faults      Lock Theory   ResMgr         COM+            Access Paths
     1:30     Tolerance   Lock Techniq  CICS & Inet    Corba           Groupware
     3:30     T Models    Queues        Adv TM         Replication     Benchmark
     7:00     Party                     Workflow       Cyberbrick      Party

  2. The hardware: 100 Tape Transports = 1,000 tapes = 1 PetaByte; 1,000 discs = 10 Terrorbytes; 100 Nodes = 1 TIPS; High Speed Network (10 Gb/s).
     The Airplane Rule: a two-engine airplane has twice as many engine problems; a thousand-engine airplane has thousands of engine problems.
     Fault Tolerance is KEY! Mask and repair faults.
     Failure rates: Internet: a node fails every 2 weeks. Vendors: a disk fails every 40 years. Here: a node "fails" every 20 minutes, a disk fails every 2 weeks.
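To see the arithmetic behind the Airplane Rule, here is a minimal Python sketch. The figures (1,000 discs, a 40-year disc MTTF) come from the slide; the independence assumption is ours.

```python
# Airplane Rule: a system of N independent components fails N times as
# often as one component does.

def system_mttf(component_mttf_hours, n_components):
    """MTTF of the whole system, assuming independent component failures."""
    return component_mttf_hours / n_components

disc_mttf_hours = 40 * 8760          # vendor figure: one disc fails every 40 years
cluster_mttf = system_mttf(disc_mttf_hours, 1000)
print(f"1,000-disc cluster: a disc fails every {cluster_mttf / 24:.0f} days")
# -> roughly every 2 weeks, as the slide claims
```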

  3. Outline • Does fault tolerance work? • General methods to mask faults. • Software-fault tolerance • Summary

  4. DEPENDABILITY: The 3 ITIES
     • Reliability / Integrity: does the right thing (also means large MTTF).
     • Availability: does it now: Availability = MTTF / (MTTF + MTTR) (see the sketch below).
       System availability: if 90% of terminals are up & 99% of the DB is up, then only ~89% of transactions are serviced on time.
     • Holistic vs Reductionist view.
     (Diagram: overlapping circles labeled Integrity/Security, Integrity/Reliability, and Availability.)
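A minimal sketch of the slide's availability arithmetic; the module figures in the last line are the hardware numbers from slide 6, used here only for illustration.

```python
# Availability = MTTF / (MTTF + MTTR); for parts that are all required,
# availabilities multiply (assuming independent failures).

def availability(mttf_hours: float, mttr_hours: float) -> float:
    return mttf_hours / (mttf_hours + mttr_hours)

# The slide's example: 90% of terminals up and 99% of the database up.
terminals, database = 0.90, 0.99
print(f"Transactions serviced on time: ~{terminals * database:.1%}")     # ~89.1%

# Illustrative module figures from slide 6 (MTTF 100,000 hr, MTTR 10 hr).
print(f"Hardware module availability: {availability(100_000, 10):.5f}")  # 0.99990
```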

  5. High Availability System Classes. Goal: Build Class 6 Systems.

     System Type               Unavailable (min/year)   Availability   Class
     Unmanaged                 50,000                   90.%           1
     Managed                    5,000                   99.%           2
     Well Managed                 500                   99.9%          3
     Fault Tolerant                50                   99.99%         4
     High-Availability              5                   99.999%        5
     Very-High-Availability          .5                 99.9999%       6
     Ultra-Availability               .05               99.99999%      7
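A minimal sketch deriving the "unavailable minutes per year" column from the availability classes; each class simply adds one more nine.

```python
# Downtime per year implied by each availability class.

MINUTES_PER_YEAR = 365 * 24 * 60     # 525,600

for cls in range(1, 8):
    unavailability = 10 ** (-cls)    # class 1 = 10%, class 2 = 1%, ...
    downtime = unavailability * MINUTES_PER_YEAR
    print(f"Class {cls}: {1 - unavailability:.7f} available, "
          f"~{downtime:,.2f} min/year down")

# Class 3 gives ~525 min/year and class 6 ~0.53 min/year, close to the
# slide's rounded 500 and .5 figures.
```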

  6. Sources of Failures          MTTF           MTTR
     Power failure                2,000 hr       1 hr
     Phone lines: soft            >.1 hr         .1 hr
     Phone lines: hard            4,000 hr       10 hr
     Hardware modules             100,000 hr     10 hr   (many are transient)
     Software: 1 bug / 1,000 lines of code (after vendor-user testing)
       => thousands of bugs in the system!
       Most software failures are transient: dump & restart the system.
     Useful fact: 8,760 hrs/year ~ 10k hr/year.

  7. Case Studies - Japan
     "Survey on Computer Security", Japan Info Dev Corp., March 1986 (trans: Eiichi Watanabe).
     1,383 institutions reported (6/84 - 7/85): 7,517 outages, MTTF ~ 10 weeks, avg duration ~ 90 MINUTES.

     MTTF by cause:
     Vendor (hardware and software)   5 months
     Application software             9 months
     Communications lines             1.5 years
     Operations                       2 years
     Environment                      2 years
     Overall                          10 weeks

     To get a 10-year MTTF, must attack all these problems.

  8. Case Studies - Tandem: Outage Reports to Vendor (EWR Data Summary)
     Totals: more than 7,000 customer-years, 30,000 system-years, 80,000 processor-years, 200,000 disc-years.

                         1985       1987       1989
     Customers           1,000      1,300      2,000
     EWR Customers       ?          ?          267
     Outage Customers    176        205        164
     Systems             2,400      6,000      9,000
     Processors          7,000      15,000     25,500
     Discs               16,000     46,000     74,000
     Cases               305        227        501
     Reports             491        535        766
     Faults              592        609        892
     Outages             285        294        438
     System MTTF         8 years    20 years   21 years

     Systematic under-reporting, but the ratios & trends are interesting.

  9. Case Studies - Tandem Trends
     MTTF improved: WOW!
     Outage causes shifted from Hardware & Maintenance (from ~50% down to ~10%) to Software (62%) & Operations (15%).
     (Chart: outages per millennium, by cause and year.)
     NOTE: systematic under-reporting of Environment, Operations errors, and Application Software.

  10. Case Studies - Tandem Trends: Reported MTTF by Component (in years)

                      1985    1987    1990
      SOFTWARE           2      53      33
      HARDWARE          29      91     310
      MAINTENANCE       45     162     409
      OPERATIONS        99     171     136
      ENVIRONMENT      142     214     346
      SYSTEM             8      20      21

      Remember: systematic under-reporting.

  11. Summary
      Current situation: ~4-year MTTF => fault tolerance works.
      Hardware is GREAT (maintenance and MTTF).
      Software masks most hardware faults.
      Many hidden software outages in operations: new system software, new application software, utilities.
      Must make all software ONLINE.
      Software seems to define a 30-year MTTF ceiling.
      Reasonable goal: 100-year MTTF; class 4 today => class 6 tomorrow.

  12. Outline • Does fault tolerance work? • General methods to mask faults. • Software-fault tolerance • Summary

  13. Key Idea
      (Diagram: the architecture and its software mask hardware faults, environmental faults, distribution, and maintenance.)
      • Software automates / eliminates operators.
      So,
      • In the limit there are only software & design faults. Software-fault tolerance is the key to dependability. INVENT IT!

  14. Fault Tolerance Techniques
      FAIL-FAST MODULES: work or stop.
      SPARE MODULES: instant repair time.
      INDEPENDENT MODULE FAILS by design: MTTF_pair ~ MTTF^2 / MTTR, so want tiny MTTR (see the sketch below).
      MESSAGE-BASED OS: fault isolation; software has no shared memory.
      SESSION-ORIENTED COMM: reliable messages; detect lost/duplicate messages; coordinate messages with commit.
      PROCESS PAIRS: mask hardware & software faults.
      TRANSACTIONS: give A.C.I.D. (simple fault model).
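A minimal sketch of the pair-MTTF arithmetic using the slide's own approximation MTTF_pair ~ MTTF^2 / MTTR; the input figures are the slide-6 hardware numbers, chosen here only for illustration.

```python
# Why a tiny MTTR matters: a repaired fail-fast pair only fails if the
# second module dies during the repair window of the first.

def pair_mttf(mttf_hours: float, mttr_hours: float) -> float:
    """Approximate MTTF of a repaired fail-fast pair (slide's formula)."""
    return mttf_hours ** 2 / mttr_hours

mttf, mttr = 100_000, 10   # illustrative slide-6 figures
print(f"Single module MTTF:       {mttf / 8760:.0f} years")                     # ~11 years
print(f"Pair MTTF (10 hr repair): {pair_mttf(mttf, mttr) / 8760:,.0f} years")   # ~114,000 years
print(f"Pair MTTF (1 hr repair):  {pair_mttf(mttf, 1) / 8760:,.0f} years")
```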

  15. Example: the FT Bank
      Modularity & repair are KEY: von Neumann needed 20,000x redundancy in wires and switches; we use 2x redundancy.
      The redundant hardware can support peak loads (so it is not idle overhead).

  16. Fail-Fast is Good, Repair is Needed
      (Figure: lifecycle of a module: normal operation -> Fault -> Detect -> Repair -> return to service.)
      Fail-fast gives short fault latency.
      High availability is low UN-availability: Unavailability ~ MTTR / MTTF.
      Improving either MTTR or MTTF gives benefit; simple redundancy does not help much.

  17. Basic Fail-Fast Designs: Hardware Reliability/Availability (how to make it fail fast)
      Basic designs: Pair, Triplex (Triple Modular Redundancy), Pair & Spare; recursive availability designs built from them.
      Comparator strategies:
      Duplex:  Fail-Fast: fail if either fails (e.g. duplexed CPUs)
               vs Fail-Soft: fail only if both fail (e.g. disc, ATM, ...).
               Note: in recursive pairs, the parent knows which child is bad.
      Triplex: Fail-Fast: fail if 2 fail (triplexed CPUs);
               Fail-Soft: fail if all 3 fail (triplexed fail-fast CPUs).

  18. Redundant Designs Have Worse MTTF!
      (Figure: Markov models of each design; states count working modules, with failure transitions at rates such as mttf/3, mttf/2, mttf/1 and no repair.)

      Design                       MTTF until system failure
      Duplex, fail-fast            mttf/2
      Duplex, fail-soft            1.5 * mttf
      TMR, fail-fast               5/6 * mttf
      TMR, fail-soft               11/6 * mttf
      Pair & Spare, fail-fast      3/4 * mttf
      Pair & Spare, fail-soft      ~2.1 * mttf

      THIS IS NOT GOOD: variance is lower but MTTF is worse.
      Simple redundancy does not improve MTTF (sometimes hurts). This is just an example of the airplane rule.
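A minimal sketch reproducing the slide's figures. The only assumption is independent module failures, so a stage with k live modules lasts mttf/k on average, and the system MTTF (in units of one module's mttf) is the sum of the stages the design survives.

```python
# Expected MTTF of the redundant designs on the slide, with no repair.
from fractions import Fraction

def mttf_from_stages(live_module_counts):
    """Sum of expected stage times; a stage with k live modules lasts mttf/k."""
    return sum(Fraction(1, k) for k in live_module_counts)

print("Duplex fail-fast:      ", mttf_from_stages([2]))             # 1/2
print("Duplex fail-soft:      ", mttf_from_stages([2, 1]))          # 3/2
print("TMR fail-fast:         ", mttf_from_stages([3, 2]))          # 5/6
print("TMR fail-soft:         ", mttf_from_stages([3, 2, 1]))       # 11/6
print("Pair & Spare fail-fast:", mttf_from_stages([4, 2]))          # 3/4 (a failure takes out the whole pair)
print("Pair & Spare fail-soft:", float(mttf_from_stages([4, 3, 2, 1])))  # ~2.08
```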

  19. Add Repair: Get a ~10^4 Improvement
      (Figure: the same Duplex and TMR Markov models, fail-fast and fail-soft, now with a repair transition at rate 1/mttr back toward the all-working state; each design's MTTF rises by roughly four orders of magnitude, to ~10^4 * mttf.)

  20. When To Repair?
      Chances of tolerating a fault are 1000:1 (class 3).
      A 1995 study: processor & disc rated at ~10k hr MTTF.

      Computed single fails      Observed double fails    Ratio
      10k processor fails        14 double                ~1000:1
      40k disc fails             26 double                ~1000:1

      Hardware maintenance: on-line maintenance "works" 999 times out of 1000.
        The chance a duplexed disc will fail during maintenance ~ 1:1000.
        Risk is 30x higher during maintenance => do it off peak hours.
      Software maintenance: repair only virulent bugs; wait for the next release to fix benign bugs.
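A minimal sketch of where the 1000:1 figure comes from, under our assumption of a constant failure rate: once one half of a duplexed pair has failed, the chance its partner also fails before repair finishes is roughly MTTR / MTTF. The 10,000 hr rating is the slide's; the 10 hr repair window is the hardware MTTR from slide 6.

```python
# Probability of a double failure while the first failure is being repaired.

mttf_hours = 10_000    # processor / disc rating from the slide
mttr_hours = 10        # repair window (slide-6 hardware MTTR)

p_double_failure = mttr_hours / mttf_hours
print(f"Chance of a double failure during repair: ~1 in {1 / p_double_failure:,.0f}")
# -> ~1 in 1,000, consistent with the observed 14-in-10,000 and 26-in-40,000 ratios
```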

  21. OK: So Far
      Hardware fail-fast is easy.
      Redundancy plus repair is great (Class 7 availability).
      Hardware redundancy & repair work via modules. How can we get instant software repair?
      We know how to get reliable storage: RAID, or dumps and transaction logs.
      We know how to get available storage: fail-soft duplexed discs (RAID 1...N).
      ? HOW DO WE GET RELIABLE EXECUTION?
      ? HOW DO WE GET AVAILABLE EXECUTION?

  22. Outline • Does fault tolerance work? • General methods to mask faults. • Software-fault tolerance • Summary

  23. Software Techniques: Learning from Hardware
      Most outages in fault-tolerant systems are SOFTWARE.
      Fault avoidance techniques: good & correct design. After that:
      Software fault tolerance techniques:
        Modularity (isolation, fault containment)
        Design diversity
        N-Version Programming: N different implementations
        Defensive Programming: check parameters and data
        Auditors: check data structures in the background
        Transactions: to clean up state after a failure
      Paradox: need fail-fast software.

  24. Fail-Fast and High-Availability Execution
      Software N-plexing: design diversity
        N-Version Programming: write the same program N times (N > 3);
        compare outputs of all programs and take a majority vote.
      Process Pairs: instant restart (repair)
        Use defensive programming to make a process fail-fast.
        Have a restarted process ready in a separate environment.
        The second process "takes over" if the primary faults.
        The transaction mechanism can clean up distributed state if takeover happens in the middle of a computation.
        (A sketch of the takeover structure follows below.)
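A minimal, single-machine sketch of the process-pair idea described above: a fail-fast primary checkpoints its state, and a backup takes over from the last checkpoint when the primary dies. The names (primary, run_pair) and the crash injection are illustrative, not any vendor's actual interface.

```python
import multiprocessing as mp

def primary(checkpoints: mp.Queue, crash_at: int) -> None:
    """Fail-fast worker: checkpoint progress, then stop abruptly."""
    state = 0
    for i in range(100):
        state += i
        checkpoints.put((i, state))     # send checkpoint to the backup
        if i == crash_at:
            raise SystemExit(1)         # simulate a fail-fast crash

def run_pair() -> int:
    checkpoints: mp.Queue = mp.Queue()
    proc = mp.Process(target=primary, args=(checkpoints, 42))
    proc.start()
    proc.join()                         # returns when the primary dies

    last_i, state = -1, 0
    while not checkpoints.empty():      # backup drains the checkpoints
        last_i, state = checkpoints.get()

    # Takeover: the backup resumes from the last checkpointed step.
    for i in range(last_i + 1, 100):
        state += i
    return state

if __name__ == "__main__":
    print(run_pair())                   # same answer as an uncrashed run: 4950
```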

  25. What Is the MTTF of an N-Version Program?
      The first version fails after MTTF/N, the second after MTTF/(N-1), ...
      so the program's MTTF is MTTF * (1/N + 1/(N-1) + ... + 1/2).
      The harmonic series goes to infinity, but VERY slowly:
      for example, 100-version programming gives only ~4x the MTTF of 1-version programming.
      It does reduce variance.
      N-Version Programming needs REPAIR:
      if a program fails, its state must be reset from the other programs
      => the programs must have a common data/state representation.
      How does this work for database systems? Operating systems? Network systems?
      Answer: I don't know.
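A minimal check of the slide's harmonic-series claim: following the slide's formula, an N-version program (no repair) lasts MTTF * (1/N + 1/(N-1) + ... + 1/2).

```python
from fractions import Fraction

def n_version_mttf(n: int) -> float:
    """MTTF of an N-version program, in units of one version's MTTF."""
    return float(sum(Fraction(1, k) for k in range(2, n + 1)))

for n in (3, 10, 100, 1000):
    print(f"N = {n:4d}: {n_version_mttf(n):.2f} x MTTF")
# N = 100 gives ~4.19x (the slide's "~4 MTTF"), and even N = 1000 only ~6.5x.
```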

  26. Why Process Pairs Mask Faults: Many Software Faults are Soft
      After design review, code inspection, alpha test, beta test, and 10k hours of gamma test (production),
      most remaining software faults are transient.
      Ratio of transient to hard software faults:
        MVS Functional Recovery Routines    5:1
        Tandem Spooler                      100:1
        Adams                               >100:1
      Terminology:
        Heisenbug: works on retry.
        Bohrbug: faults again on retry.
      Adams: "Optimizing Preventative Service of Software Products", IBM J. R&D, 28(1), 1984.
      Gray: "Why Do Computers Stop", Tandem TR 85.7, 1985.
      Mourad: "The Reliability of the IBM/XA Operating System", 15th ISFTCS, 1985.

  27. Process Pair Repair Strategy
      If the software fault (bug) is a Bohrbug, there is no repair:
        "wait for the next release", or "get an emergency bug fix", or "get a new vendor".
      If the software fault is a Heisenbug, then repair is:
        reboot and retry, or switch to the backup process (instant restart).
      PROCESS PAIRS tolerate hardware faults and Heisenbugs.
      Repair time is seconds, and could be milliseconds if time is critical.
      Flavors of process pair: Lockstep, Automatic State Checkpointing, Delta Checkpointing, Persistent.
      (A retry sketch follows below.)
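A minimal sketch of the Heisenbug/Bohrbug distinction: retry (the essence of the process-pair repair strategy) masks transient faults but not deterministic ones. The fault injection and names here are purely illustrative.

```python
import random

class TransientGlitch(Exception):
    """Stands in for a Heisenbug: goes away on retry."""

def flaky_step(fail_probability: float = 0.3) -> str:
    if random.random() < fail_probability:    # transient, timing-dependent fault
        raise TransientGlitch("resource briefly unavailable")
    return "ok"

def with_retry(operation, attempts: int = 3):
    """Retry: the software analogue of switching to the backup process."""
    for _ in range(attempts):
        try:
            return operation()
        except TransientGlitch:
            continue                           # Heisenbug: just try again
    raise RuntimeError("still failing after retries: treat it as a Bohrbug")

print(with_retry(flaky_step))                  # usually prints "ok"
```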

  28. How Takeover Masks Failures
      The server resets at takeover, but what about application state? Database state? Network state?
      Answer: use transactions to reset state! Abort the transaction if the process fails.
      This keeps the network "up", keeps the system "up", and reprocesses some transactions on failure.

  29. PROCESS PAIRS - SUMMARY
      Transactions give reliability; process pairs give availability.
      Process pairs are expensive & hard to program.
      Transactions + persistent process pairs => fault-tolerant sessions & execution.
      When Tandem converted to this style it saved 3x on messages, saved 5x on message bytes, and made programming easier.

  30. SYSTEM PAIRS FOR HIGH AVAILABILITY
      Programs, data, and processes are replicated at two sites.
      The pair looks like a single system; the system becomes a logical concept.
      Like process pairs: system pairs.
      The backup receives the transaction log (spooled if the backup is down).
      If the primary fails or the operator switches, the backup offers service.

  31. SYSTEM PAIR CONFIGURATION OPTIONS
      Mutual backup: each site is primary for half of the database & application and backup for the other half.
      Hub: one site acts as backup for many others.
      In general, the configuration can be any directed graph.
      Stale replicas: lazy replication to read-only copies.
      (Diagrams: a mutual-backup pair, a hub backing up several primaries, and a primary feeding several stale copies.)

  32. SYSTEM PAIRS FOR: SOFTWARE MAINTENANCE
      Step 1: Both systems are running V1.
      Step 2: The backup is cold-loaded as V2.
      Step 3: SWITCH to the backup.
      Step 4: The (new) backup is cold-loaded as V2.
      Similar ideas apply to: database reorganization; hardware modification (e.g. add discs, processors, ...);
      hardware maintenance; environmental changes (rewire, new air conditioning); moving the primary or backup to a new location.

  33. SYSTEM PAIR BENEFITS
      Protects against ENVIRONMENT: different sites (weather, utilities, sabotage).
      Protects against OPERATOR FAILURE: two sites, two sets of operators.
      Protects against MAINTENANCE OUTAGES: work on the backup (software/hardware install/upgrade/move...).
      Protects against HARDWARE FAILURES: the backup takes over.
      Protects against TRANSIENT SOFTWARE ERRORS.
      Commercial systems: Digital's Remote Transaction Router (RTR); Tandem's Remote Database Facility (RDF);
      IBM's Cross Recovery (XRF) (both systems on the same campus); Oracle, Sybase, Informix, Microsoft... replication.

  34. SUMMARY
      FT systems fail for the conventional reasons:
        environment mostly, people sometimes, software mostly, hardware rarely.
      MTTF of FT systems ~ 50x conventional (~years vs weeks).
      Fail-fast modules + reconfiguration + repair => good hardware fault tolerance.
      Transactions + process pairs => good software fault tolerance (repair).
      System pairs hide many faults.
      Challenge: tolerate human errors (make the system simpler to manage, operate, and maintain).

  35. Key Idea
      (Diagram: the architecture and its software mask hardware faults, environmental faults, distribution, and maintenance.)
      • Software automates / eliminates operators.
      So,
      • In the limit there are only software & design faults. Software-fault tolerance is the key to dependability. INVENT IT!

  36. References
      Adams, E. (1984). "Optimizing Preventative Service of Software Products." IBM Journal of Research and Development, 28(1): 2-14.
      Anderson, T. and B. Randell (1979). Computing Systems Reliability.
      Garcia-Molina, H. and C. A. Polyzois (1990). "Issues in Disaster Recovery." 35th IEEE Compcon 90: 573-577.
      Gray, J. (1986). "Why Do Computers Stop and What Can We Do About It." 5th Symposium on Reliability in Distributed Software and Database Systems: 3-12.
      Gray, J. (1990). "A Census of Tandem System Availability between 1985 and 1990." IEEE Transactions on Reliability, 39(4): 409-418.
      Gray, J. and A. Reuter (1993). Transaction Processing: Concepts and Techniques. San Mateo, CA: Morgan Kaufmann.
      Lampson, B. W. (1981). "Atomic Transactions." In Distributed Systems -- Architecture and Implementation: An Advanced Course. Springer-Verlag.
      Laprie, J. C. (1985). "Dependable Computing and Fault Tolerance: Concepts and Terminology." 15th FTCS: 2-11.
      Long, D. D., J. L. Carroll, and C. J. Park (1991). "A Study of the Reliability of Internet Sites." Proc. 10th Symposium on Reliable Distributed Systems: 177-186, Pisa, September 1991.
