
Hardware failures


Presentation Transcript


  1. Hardware failures Olof Bärring IT/CF

2. Outline • Failures • What fails? • How often? • When? • Repairs • How? • By whom? • By when? • Conclusions

3. What fails? And how do we know? • The only things we know for sure about hardware are: • It will fail • Some of it fails more often than others… • disk drives, for instance • Monitoring failures • Disks: assumed fail-stop, but reality is more complex • At CERN we base our decisions on SMART counters and failed media scans (see the sketch below) • Monitoring 'repairs' rather than 'failures': • Vendor tickets (~4k in 2010-11) • Changes in the serial-number inventory (~10k in 2010-11)
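
A minimal sketch of the kind of SMART-based check described above, assuming smartmontools' `smartctl` is available on the node; the watched attributes and zero thresholds are illustrative assumptions, not CERN's actual policy:

```python
import re
import subprocess

# Attributes commonly watched as early failure signals; the zero
# thresholds here are illustrative assumptions.
WATCHED = {"Reallocated_Sector_Ct": 0, "Current_Pending_Sector": 0}

def disk_looks_suspect(device: str) -> bool:
    """Return True if any watched SMART raw value exceeds its threshold."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows look like: ID# NAME FLAG VALUE ... RAW_VALUE
        if len(fields) >= 10 and fields[1] in WATCHED:
            m = re.match(r"\d+", fields[-1])
            if m and int(m.group()) > WATCHED[fields[1]]:
                return True
    return False

if __name__ == "__main__":
    print(disk_looks_suspect("/dev/sda"))
```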

4. Failure space • CERN IT by numbers (14/9/2011)

5. How often? • Monitoring changes in serial numbers gives an idea • Spikes in the chart correspond to bulk replacement campaigns (see the diff sketch below)
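
A hypothetical sketch of that serial-number diff: compare two inventory snapshots and flag the slots whose drive serial changed between them. The slot naming and serials are invented (WD3342ABC is reused from slide 11):

```python
# Two inventory snapshots, mapping drive slot -> serial number.
before = {"node42/slot0": "WD1111AAA", "node42/slot1": "WD2222BBB"}
after  = {"node42/slot0": "WD3342ABC", "node42/slot1": "WD2222BBB"}

# A serial that changed in place usually means the drive was swapped.
replaced = {slot: (before[slot], after[slot])
            for slot in before
            if slot in after and before[slot] != after[slot]}

print(replaced)   # {'node42/slot0': ('WD1111AAA', 'WD3342ABC')}
```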

6. How often? • Monitoring changes in serial numbers gives an idea • Excluding campaigns: ~170 disks/month (~5/day) • 5 HDD failures/day over 24 hours/day → ~1 failure per ~5 hrs • 64,000 drives in the centre × ~5 hrs → MTTF ≈ 320,000 hrs (spec: 1.2M hrs)
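
The slide's back-of-envelope arithmetic, reproduced in a few lines of Python (the figures are the slide's own):

```python
drives = 64_000          # drives in the computer centre
failures_per_day = 5     # ~170 replaced disks/month, excluding campaigns

hours_per_failure = 24 / failures_per_day   # 4.8 h; the slide rounds to ~5 h
mttf_hours = drives * hours_per_failure     # ~307,000 h; ~320,000 h with
                                            # the slide's ~5 h rounding
print(f"one failure every ~{hours_per_failure:.1f} h, "
      f"MTTF ~ {mttf_hours:,.0f} h (spec: 1,200,000 h)")
```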

7. When? • Failure rates of hardware products typically follow a "bathtub curve", with high failure rates at the beginning (infant mortality) and the end (wear-out) of the lifecycle [1] • [1] http://www.usenix.org/events/fast07/tech/schroeder/schroeder.pdf
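
To make the bathtub shape concrete, here is an illustrative hazard rate built as the sum of three Weibull hazards: shape k < 1 gives infant mortality, k = 1 a constant background rate, and k > 1 wear-out. All parameters are invented for illustration, not fitted to the CERN data:

```python
def weibull_hazard(t: float, k: float, lam: float) -> float:
    """Weibull hazard rate h(t) = (k/lam) * (t/lam)**(k-1)."""
    return (k / lam) * (t / lam) ** (k - 1)

def bathtub_hazard(t_years: float) -> float:
    return (weibull_hazard(t_years, 0.5, 2.0)     # k < 1: early failures
            + weibull_hazard(t_years, 1.0, 10.0)  # k = 1: constant background
            + weibull_hazard(t_years, 5.0, 6.0))  # k > 1: wear-out

# High at the start, flat in the middle, rising again towards the end.
for t in (0.1, 0.5, 1, 2, 3, 4, 5):
    print(f"age {t:>3} y: hazard ~ {bathtub_hazard(t):.2f} /year")
```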

8. When? • Process and categorize the 2010-11 vendor calls according to the machine's 'warranty age' when the call was opened (see the bucketing sketch below) • Chart annotation: ~10× as many disk calls as CPU-server calls
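
A hypothetical sketch of that categorization step: bucket each vendor call by the machine's warranty age (time from warranty start to the date the call was opened), here in ~91-day quarters. The field names and the two sample records are invented:

```python
from collections import Counter
from datetime import date

calls = [
    {"opened": date(2010, 3, 1), "warranty_start": date(2010, 1, 15)},
    {"opened": date(2011, 6, 9), "warranty_start": date(2009, 5, 2)},
]

def warranty_quarter(call: dict) -> int:
    """Quarter of warranty age (0, 1, 2, ...) when the call was opened."""
    days = (call["opened"] - call["warranty_start"]).days
    return days // 91

histogram = Counter(warranty_quarter(c) for c in calls)
for quarter, n in sorted(histogram.items()):
    print(f"warranty quarter {quarter}: {n} call(s)")
```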

9. When? • Quarterly disk failure rate, normalized to the number of disks • Early failures (infant mortality) are visible in the first quarters (see the sketch below)
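
The normalization itself is simple: failures in the quarter divided by the installed disk count. A sketch with invented quarterly counts (roughly consistent with the ~170 failures/month of slide 6):

```python
installed = 64_000                            # disks in the centre
failures_per_quarter = [510, 480, 430, 400]   # invented replacement counts

for q, f in enumerate(failures_per_quarter, start=1):
    print(f"Q{q}: {100 * f / installed:.2f}% of installed disks replaced")
```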

10. When? Other failure types • Swappable: RAM, PSU, BBU, BMC, … • Complex repairs: cabling, backplane, mainboard, … • When do these fail? No clue…

11. Repairs • Workflow diagram: Alarm → vendor call → replacement registered under a new serial number (e.g. 'New sn: WD3342ABC')

12. By whom? • Vendor

13. By when? • Two contract types • 'Normal' only used for CPU servers • Repair targets missed in ~30% of cases (cf. conclusions)

14. Conclusions • Hardware fails • As expected • More often than expected • MTTF ~320k hours rather than 1.2M hours • When expected: • Effect of early failures (infant mortality) in the first year • No sign of wear-out at the end of the 3-year warranty • Repairs are currently carried out by the vendor • Repair targets missed in ~30% of cases • Looking at a different model…

15. Questions?
