
COMPSCI 230 S2C 2012 Software Design and Construction



  1. COMPSCI 230 S2C 2012 Software Design and Construction: Software Testing, Part 8

  2. Lecture Plan: Software Testing • M 10/9: Goals of software testing; test cases. Myers Ch. 1, pp. 1-4. • T 11/9: Psych. and econ. of testing; black-box testing. Myers Ch. 2, pp. 5-11. • Th 14/9: White-box testing; Myers' principled approach. Myers Ch. 2, pp. 11-15. • M 17/9: “Software Testing: A real world view”. • Guest lecture by Shelly Mutu-Grigg, BSc Computer Science Auckland 2003 • T 18/9: Myers' principles 2 through 10. Myers Ch. 2, pp. 15-20. • Th 20/9: Basics of Extreme Programming (XP). Myers Ch. 8, pp. 177-183. • M 24/9: Basics of Extreme Testing (XT). Myers Ch. 8, pp. 183-186. • T 25/9: Applied XT and JUnit. Myers Ch. 8, pp. 186-191. • Th 27/9: Famous failures.

  3. Learning Goals for Today • Schadenfreude (pleasure derived from the misfortunes of others), with some lessons learned: • Software professionals are trusted to “do the right thing” and to “do no harm”. But we aren’t completely trustworthy: we all make mistakes, and some of us are unethical. • Safety-critical software can fail catastrophically, even if it is carefully tested. • Enterprise software: failure modes are complex, with no single “cause”, but technical factors are often important. • An overview of the ACM Code of Ethics • What would you do, if you were pressured to “sign off” on a test report for a system that you aren’t confident will be “safe to use”? • What other ethical conflicts might arise in your professional workplace?

  4. As an ACM member I will… • The Association for Computing Machinery (ACM) is a prominent professional organisation for computer professionals. • Our other main professional body is the IEEE Computer Society, which has a similar code of ethics. • “The [ACM] Code and its supplemented Guidelines are intended to serve as a basis for ethical decision making in the conduct of professional work. • “Secondarily, they may serve as a basis for judging the merit of a formal complaint pertaining to violation of professional ethical standards.”

  5. ACM’s General Moral Imperatives • 1.1 Contribute to society and human well-being • 1.2 Avoid harm to others • 1.3 Be honest and trustworthy • … (This is enough for an introductory lecture on professional ethics ;-)

  6. Two Case Studies by Sommerville • The following slides are case studies by Ian Sommerville, • Author of Software Engineering, 9th Edition, Addison-Wesley, 2010. • I have corrected a few typos and added some notes (in red). • His original slides are available for download: • http://www.cs.st-andrews.ac.uk/~ifs/Books/SE9/CaseStudies/Ariane5/index.html • http://www.cs.st-andrews.ac.uk/~ifs/Books/SE9/CaseStudies/LondonAmbulance/index.html • “All material provided on the SE9 website by Ian Sommerville is licensed under a Creative Commons Attribution 2.5 UK: Scotland License. The materials provided here are for educational purposes only and neither the author nor Pearson Education offers any warranties or representations in respect of their fitness for a particular purpose.”

  7. The Ariane 5 Launcher Failure • June 4th 1996: Total failure of the Ariane 5 launcher on its maiden flight

  8. Ariane 5 • A European rocket designed to launch commercial payloads (e.g. communications satellites) into Earth orbit • Successor to the successful Ariane 4 launchers • Ariane 5 can carry a heavier payload than Ariane 4 • YouTube Video of the first launch (25 seconds). Longer video (2 min.)

  9. Launcher failure • Approximately 37 seconds after a successful lift-off, the Ariane 5 launcher lost control. • Incorrect control signals were sent to the engines and these swivelled so that unsustainable stresses were imposed on the rocket. • It started to break up and was destroyed by ground controllers. • The system failure was a direct result of a software failure. However, it was symptomatic of a more general systems validation failure.

  10. The problem • The attitude and trajectory of the rocket are measured by a computer-based inertial reference system. This transmits commands to the engines to maintain attitude and direction. • The software failed, and both this system and the backup system shut down. • Diagnostic commands were transmitted to the engines, which interpreted them as real data and swivelled to an extreme position, resulting in unforeseen stresses on the rocket. (My note: In the previous lecture, I warned you against using stderr for diagnostics. But Ariane 5 was designed 20 years ago! A small sketch of the underlying point follows below.)
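
My note: the point about diagnostics being misread as real data can be made concrete with a minimal Java sketch. The Ariane 5 software was written in Ada and its real I/O design was quite different; the class and values below are invented purely for illustration. The idea is simply to keep diagnostics on a separate channel from data, so a downstream consumer of the data channel can never mistake a diagnostic message for a measurement.

    // Illustration only: keep diagnostics (stderr) separate from the data
    // channel (stdout), so a consumer parsing the data stream can never
    // misinterpret a diagnostic message as a measurement.
    public class TelemetryDemo {
        public static void main(String[] args) {
            double attitudeReading = 12.5;   // hypothetical measurement

            // Real data goes to stdout: this is what a downstream consumer parses.
            System.out.println(attitudeReading);

            // Diagnostics go to stderr: if this line were mixed into stdout,
            // the consumer might try to parse it as data.
            System.err.println("DIAG: sensor self-test pattern 0x7FFF");
        }
    }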

  11. Software failure • Software failure occurred when an attempt to convert a 64-bit floating point number to a signed 16-bit integer caused the number to overflow. • There was no exception handler associated with the conversion so the system exception management facilities were invoked. These shut down the software. • The backup software was a copy and behaved in exactly the same way.
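
My note: the Ariane 5 code was written in Ada, where the unhandled conversion raised an exception that shut the system down. Below is a minimal Java sketch of the same underlying hazard; in Java the narrowing cast raises no exception at all, it silently wraps, which is arguably worse. The class, method, and variable names are invented for illustration.

    // A 64-bit double narrowed to a 16-bit short: the unchecked cast wraps
    // silently, while a guarded conversion makes the out-of-range case explicit.
    public class NarrowingDemo {

        // Guarded conversion: fail loudly instead of wrapping.
        static short toShortChecked(double value) {
            if (value < Short.MIN_VALUE || value > Short.MAX_VALUE) {
                throw new ArithmeticException("value out of 16-bit range: " + value);
            }
            return (short) value;
        }

        public static void main(String[] args) {
            double horizontalBias = 70000.0;         // plausible-looking sensor value

            short wrapped = (short) horizontalBias;  // silently wraps to 4464
            System.out.println("unchecked cast: " + wrapped);

            short checked = toShortChecked(horizontalBias);  // throws ArithmeticException
            System.out.println("checked cast: " + checked);
        }
    }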

  12. Avoidable failure? • The software that failed was reused from the Ariane 4 launch vehicle. The computation that resulted in overflow was not used by Ariane 5. • Decisions were made • Not to remove the facility as this could introduce new faults; • Not to test for overflow exceptions because the processor was heavily loaded. For dependability reasons, it was thought desirable to have some spare processor capacity.

  13. Why not Ariane 4? • The physical characteristics of Ariane 4 (a smaller vehicle) are such that it has a lower initial acceleration and build-up of horizontal velocity than Ariane 5. • The value of the variable on Ariane 4 could never reach a level that caused overflow during the launch period.

  14. Validation failure • As the facility that failed was not required for Ariane 5, there was no requirement associated with it. • As there was no associated requirement, there were no tests of that part of the software and hence no possibility of discovering the problem. • During system testing, simulators of the inertial reference system computers were used. These did not generate the error as there was no requirement!

  15. Review failure • The design and code of all software should be reviewed for problems during the development process. • Either: • The inertial reference system software was not reviewed because it had been used in a previous version; • the review failed to expose the problem, or failed to recognise that the test coverage would not reveal the problem; • or the review failed to appreciate the consequences of system shutdown during a launch. • My notes: • Ian is not making an ethical judgement here. • We could make a professional judgement, by reference to the ACM guidelines on harm avoidance (on the next slide).

  16. ACM Guidelines on 1.2 Harm Avoidance • “… Well-intended actions, including those that accomplish assigned duties, may lead to harm unexpectedly. • “In such an event the responsible person or persons are obligated to undo or mitigate the negative consequences as much as possible. … • “To minimize the possibility of indirectly harming others, computing professionals must minimize malfunctions by following generally accepted standards for system design and testing. … • “Furthermore, it is often necessary to assess the social consequences of systems to project the likelihood of any serious harm to others. … • “In the work environment the computing professional has the additional obligation to report any signs of system dangers that might result in serious personal or social damage. • “If one's superiors do not act to curtail or mitigate such dangers, it may be necessary to ‘blow the whistle’ to help correct the problem or reduce the risk. • “However, capricious or misguided reporting of violations can, itself, be harmful. Before reporting violations, all relevant aspects of the incident must be thoroughly assessed. In particular, the assessment of risk and responsibility must be credible. • “It is suggested that advice be sought from other computing professionals. See principle 2.5 regarding thorough evaluations.”

  17. Lessons learned • Don’t run software in critical systems unless it is actually needed. (My note: Good idea… but I think it was not standard practice until it was learned the “hard way” on Ariane 5.) • As well as testing for what the system should do, you may also have to test for what the system should not do. (My note: Yes, of course… this was well-known, but how can we test for all the things a system “should not do”? A small JUnit sketch follows below.) • Do not have a default exception handling response which is system shut-down in systems that have no fail-safe state. (My note: Obvious in retrospect… but I think it was not standard practice until it was learned the “hard way” on Ariane 5. It is now standard practice to insist that every safety-critical system has a fail-safe state that it will reliably reach. Anyway: the Ariane 5 rocket failed, but did this cause the mission control system to fail?)
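
My note: “testing for what the system should not do” is exactly the kind of negative test we write with JUnit in this module. Here is a small JUnit 5 sketch of such a test, re-using the hypothetical toShortChecked helper from the NarrowingDemo sketch above; all names are invented for illustration.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    class NarrowingDemoTest {

        @Test
        void rejectsValuesOutsideSixteenBitRange() {
            // Negative test: the conversion must not silently wrap; it should
            // fail loudly so that a caller (or fail-safe handler) can respond.
            assertThrows(ArithmeticException.class,
                         () -> NarrowingDemo.toShortChecked(70000.0));
        }

        @Test
        void acceptsValuesInsideSixteenBitRange() {
            // Positive test for contrast: in-range values convert normally.
            assertEquals((short) 1234, NarrowingDemo.toShortChecked(1234.0));
        }
    }

If toShortChecked were replaced by a bare (short) cast, the first test would fail because the wrap-around happens silently; that is precisely the kind of defect a negative test is meant to catch.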

  18. Lessons learned • In critical computations, always return best effort values even if the absolutely correct values cannot be computed. (My note: Good idea… but I think it was not standard practice until it was learned the “hard way” on Ariane 5.) • Wherever possible, use real equipment and not simulations. (My note: Yes, of course… but eventually you have to “go live” on a test launch!) • Improve the review process to include external participants and review all assumptions made in the code. (My note: Good idea! It is now routine, in critical design, to review assumptions “before it is too late”. Process improvement (to avoid making the same mistake again) is – I think – a very appropriate response when it’s impossible to “undo” a mistake. Do you agree?)

  19. Avoidable failure • The designers of Ariane 5 made a critical and elementary error. • They designed a system where a single component failure could cause the entire system to fail. • As a general rule, critical systems should always be designed to avoid a single point of failure. (My note: This is very harsh criticism from Sommerville. Do you think it is justified?)

  20. ARIANE 5: Flight 501 FailureReport by the Inquiry Board • “The terms of reference assigned to the Board requested it • to determine the causes of the launch failure, • to investigate whether the qualification tests and acceptance tests were appropriate in relation to the problem encountered, • to recommend corrective action to remove the causes of the anomaly and other possible weaknesses of the systems found to be at fault. • “The failure of the Ariane 501 was caused by • the complete loss of guidance and attitude information 37 seconds after start of the main engine ignition sequence (30 seconds after lift-off). • This loss of information was due to specification and design errors in the software of the inertial reference system. • “The extensive reviews and tests carried out during the Ariane 5 Development Programme • did not include adequate analysis and testing of the inertial reference system or of the complete flight control system, which could have detected the potential failure.” http://www.cs.st-andrews.ac.uk/~ifs/Books/SE9/CaseStudies/Ariane5/SupportingDocs/Ariane5EnquiryReport.html

  21. The London Ambulance fiasco • The London Ambulance Service (LAS) Computer Aided Despatch (CAD) system failed dramatically on October 26th 1992, shortly after it was introduced: • The system could not cope with the load placed on it by normal use; • Response times to emergency calls stretched to several hours; • Ambulance communications failed and ambulances were lost from the system. • A series of errors was made in the procurement, design, implementation, and introduction of the system.

  22. So, what happened? • Changes to CAC operation made it extremely difficult for staff to intervene and correct the system. • As a consequence, the system rapidly knew the correct location and status of fewer and fewer vehicles, leading to: • Poor, duplicated and delayed allocations; • A build-up of exception messages and of the awaiting-attention list; • A slow-down of the system as the messages and lists built up; • An increased number of call-backs and hence delays in telephone answering.

  23. Why did it fail? • Technically, the system did not fail on October 26th. • Response times did become unacceptable, but overall the system did what it had been designed to do! • It failed three weeks later due to a program error: a memory leak, where allocated memory was not completely released. (My note: a generic sketch of this kind of leak follows below.) • It depends who you ask! • Management; • Union; • System manager; • Government.
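
My note: the actual LAS CAD source code is not public, so the sketch below is only a generic Java illustration of the kind of defect described above: memory that is allocated on every call but never released, so the process eventually exhausts memory after weeks of continuous operation. All names are invented.

    import java.util.ArrayList;
    import java.util.List;

    // Generic illustration of a leak: every record ever created stays reachable,
    // so the garbage collector can never reclaim it and memory use only grows.
    public class DispatchLog {

        private final List<String> allCallRecords = new ArrayList<>();

        public void logCall(String record) {
            allCallRecords.add(record);
            // ...but never removed: under long-running continuous operation
            // the process eventually runs out of memory.
        }

        public static void main(String[] args) {
            DispatchLog log = new DispatchLog();
            for (int call = 0; call < 1_000_000; call++) {   // simulate sustained load
                log.logCall("call #" + call + " at " + System.currentTimeMillis());
            }
            System.out.println("records retained: " + log.allCallRecords.size());
        }
    }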

  24. Lessons learned • The inquiry report makes detailed recommendations for future development of the LAS CAD system, including: • Focus on repairing the reputation of CAD within the service; • Increasing the sense of ‘ownership’ for all stakeholders; • They still believe that a technological solution is required; • The development process must allow fully for consultation, quality assurance, testing, and training; • Management and staff must have total, demonstrable confidence in the reliability of the system; • Any new system should be introduced in a stepwise approach.

  25. Top Ten Costliest Software Bugs • Not a reliable source, but a fun list! http://top-10-list.org/2010/05/03/ten-costliest-software-bugs/ • Mars Climate Orbiter Crashes • “The contractor who was given the responsibility of planning the navigation system got the specifications from NASA but • instead of using the metric system, • he carried out measurements using imperial units. • “What happened was that the space craft crashed into Mars and over 125 million dollars were lost.” (My note: a small sketch of one common defence against this kind of unit mix-up follows below.) • Ariane 5 Flight 501 (we know this story already ;-)
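
My note: one common defence against this style of unit mix-up is to wrap physical quantities in a type rather than passing bare numbers around. The Java sketch below is an illustration only (NASA's software was not Java, and the real quantity involved was impulse, in pound-force seconds versus newton-seconds); the class and method names are mine.

    // Illustration of unit-safe values: callers must say which unit they mean,
    // and the class stores a single canonical (SI) representation internally.
    public final class Force {

        private static final double NEWTONS_PER_POUND_FORCE = 4.4482216;

        private final double newtons;   // canonical internal unit: SI

        private Force(double newtons) { this.newtons = newtons; }

        public static Force ofNewtons(double n) { return new Force(n); }

        public static Force ofPoundsForce(double lbf) {
            return new Force(lbf * NEWTONS_PER_POUND_FORCE);
        }

        public double inNewtons() { return newtons; }

        public static void main(String[] args) {
            // The compiler cannot confuse a Force with a raw double, so a caller
            // cannot silently hand imperial numbers to a routine expecting metric.
            Force thrusterForce = Force.ofPoundsForce(10.0);
            System.out.println(thrusterForce.inNewtons() + " N");
        }
    }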

  26. Top Ten Costliest Software Bugs (cont.) • EDS Fails Child Support • “About 6 years back, EDS created an IT system that was quite complex and presented it to the CSA or the Child Support Agency in U.K. • “The software was not compatible with the restructure initiated by the DWP which caused many errors. • “The cost has been estimated at 1 billion dollars till date.” Current score: Tech Errors 2, Mgmt Errors 1. Lowest score “wins” ;-)

  27. Top Ten Costliest Software Bugs (cont.) • Soviet Gas Pipeline Explosion • “A CIA operation to sabotage Soviet industry • by duping Moscow into stealing booby-trapped software • was spectacularly successful … • “Leaked extracts in yesterday's Washington Post describe how the operation caused • ‘the most monumental non-nuclear explosion and fire ever seen from space’ in the summer of 1982.” The Telegraph, 28 Feb 2004. Available: http://www.telegraph.co.uk/news/worldnews/northamerica/usa/1455559/CIA-plot-led-to-huge-blast-in-Siberian-gas-pipeline.html • I’ll let you “score” this one. • Was it an ethical error, a mgmt error, or a tech error for the Soviets? Was it a success in all of these ways for the CIA?

  28. Learning Goals for Today • Schadenfreude (pleasure derived from the misfortunes of others), with some lessons learned: • Software professionals are trusted to “do the right thing” and to “do no harm”. But we aren’t completely trustworthy. We all make mistakes, and some of us are unethical. • Safety-critical software can fail catastrophically, even if it is carefully tested. • Enterprise software has complex failure modes, with no single “cause”. Technical factors are sometimes important. • An overview of the ACM Code of Ethics • What would you do, if you were pressured to “sign off” on a test report for a system that you aren’t confident will be “safe to use”? • What other ethical conflicts might arise in your professional workplace?
