
SMU CSE 8314 / NTU SE 762-N Software Measurement and Quality Engineering



  1. SMU CSE 8314 / NTU SE 762-N Software Measurement and Quality Engineering, Module 14: Software Reliability Models - Part 2

  2. Contents • Requirements Volatility • RADC Model • Summary

  3. Requirements Volatility [NOT an IEEE Metric, but Similar]
  Goal: Determine the stability of the requirements, so you can decide:
  • How far you really are in your development,
  • How reliable your software is likely to be, and
  • What type of process to use for software development

  4. Requirements Volatility: General Rules of Thumb
  • If requirements are stable, use "waterfall" or similar processes
  • If requirements are unstable, use incremental or evolutionary development

  5. Requirements Volatility: Primitive Data Collected
  R = # of original requirements (in the original specification, for example)
  C = # of changes to original requirements
  A = # of additions to original requirements
  D = # of deletions from original requirements

  6. Requirements Volatility: Equation
  V = (C + A + D) / R
  • A very large V means unstable requirements
  • You measure periodically to see if things are stabilizing
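A minimal Python sketch of this calculation (the function name and the sample counts below are illustrative, not part of the module):

    # Requirements volatility: V = (C + A + D) / R
    # R = original requirements; C = changes; A = additions; D = deletions
    def volatility(R, C, A, D):
        if R <= 0:
            raise ValueError("R (# of original requirements) must be positive")
        return (C + A + D) / R

    # Example: 200 original requirements, 30 changed, 15 added, 5 deleted
    print(volatility(R=200, C=30, A=15, D=5))  # 0.25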

  7. Typical Graph of Volatility

  8. Requirements Volatility: Usage Notes - 1
  • In a mature development effort for a production product, V should flatten out in the design phase, indicating stabilization of requirements
  • If V continues to rise, the development is unstable and you should not proceed to later phases yet, unless this is a prototype effort

  9. Requirements Volatility: Usage Notes - 2
  • If V is large, the implication is that the current software development effort is really a requirements definition effort, which suggests a prototype, incremental, or evolutionary development approach
  • If this is intended to be the final development, do not go on to the next step of the process yet

  10. Requirements Volatility: Variation
  T = # of "TBD" ("to be determined") requirements in the original specification
  V = (C + A + D + T) / R
  • This gives you more insight into changes that MUST happen (the TBDs)
  • It also gives more insight into stability over time
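A small sketch of the variant, sampling V (with the T term) once per reporting period to check for flattening; the monthly counts are hypothetical:

    # Variant including TBDs: V = (C + A + D + T) / R
    def volatility_with_tbds(R, C, A, D, T):
        return (C + A + D + T) / R

    # Hypothetical cumulative counts sampled monthly against R = 200;
    # T falls over time as TBDs get resolved
    samples = [
        dict(C=10, A=5, D=2, T=40),   # month 1
        dict(C=25, A=12, D=4, T=20),  # month 2
        dict(C=30, A=15, D=5, T=8),   # month 3
    ]
    series = [round(volatility_with_tbds(R=200, **s), 3) for s in samples]
    print(series)  # [0.285, 0.305, 0.29] -- flattening suggests stabilization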

  11. Typical Graphs of V

  12. You Can Also Learn a Lot by Graphing the Individual Factors of the Equation
  R = # of original requirements (in the original specification, for example)
  C = # of changes to original requirements
  A = # of additions to original requirements
  D = # of deletions from original requirements
  T = # of TBDs

  13. Requirements Volatility Factors: Sample Graph
  [Graph showing R, V, T, C, A, and D plotted over time]

  14. Thresholds and Targets
  • The nature of the development determines what thresholds should be established
  • In a supposedly stable development, thresholds for stability should be very low: instability indicates that development effort may be wasted -- lots of rework ahead
  Continued ...

  15. Thresholds and Targets
  • In a development that is expected to be volatile, thresholds might be high, and targets would be established to determine when stability has been achieved
  • Historical data is essential for establishing reliable thresholds and targets

  16. RADC Measurements
  RADC: Rome Air Development Center, U.S. Air Force, Rome, New York

  17. RADC Measurements
  • These are based on a large amount of data collected from U.S.A.F. projects:
    • 5 million lines of code
    • 59 projects
    • Dating back to 1976
  • 24 reliability models were studied
  • Used as the basis for several government standards

  18. Like IEEE, These Measurements Break the Process into Phases
  [Timeline: Requirements -> Design -> Code -> Test. Reliability is predicted up to the start of coding, and estimated from the start of coding until the software is released.]

  19. Background of RADC Measurements
  Assumptions:
  • # of faults is fixed at the start of formal test
  • # of faults correlates to # of failures (failures are easier to measure and are the things the customer cares about)
  Goals:
  • Get the number of faults as low as possible
  • Predict the number of faults as early as possible
  • Use predictions to manage the process

  20. Basic Approach to RADC Measurements for Reliability (one variant)
  • Each factor that influences reliability is expressed as a number N, where 0 < N < 1
    N = reliability impact of the individual factor
    N near 0 means the factor lowers reliability
    N near 1 means higher reliability
  • The product of all these factors is the net reliability
  • Each factor may be defined as the product of other, more detailed factors

  21. RADC Concept
  R = F1 * F2 * F3
  F2 = F21 * F22 * F23
  [Tree diagram: R at the root; F1, F2, F3 as children; F21, F22, F23 under F2]
  Assumptions: factors are Bayesian, independent, homogeneous
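A small sketch of this product-of-factors idea with nested sub-factors; the factor values are made up for illustration:

    from math import prod

    # A factor is either a leaf value in (0, 1) or a list of sub-factors
    # whose product gives the parent's value (e.g. F2 = F21 * F22 * F23)
    def factor_value(f):
        return prod(factor_value(sub) for sub in f) if isinstance(f, list) else f

    F1, F3 = 0.9, 0.6
    F2 = [0.95, 0.8, 0.9]            # F21, F22, F23
    R = factor_value([F1, F2, F3])   # R = F1 * F2 * F3
    print(round(R, 3))               # 0.369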

  22. Use of RADC Formula - I
  • At the start of the project, you compute R and use it as the "current reliability prediction"
  • As you go through the project, you try to improve the factors represented by the Fi terms, thus improving the value of R

  23. Use of RADC Formula - II
  • For example, if F3 represents programmer capability and has a value of 0.6, you could improve it to 0.7 or 0.8 by hiring more capable programmers or by training your staff in defect reduction techniques
  • Eventually, you base your values on actual results rather than on predictions

  24. Reliability Expectation Improves Throughout the Lifecycle
  [Graph of reliability expectation over the lifecycle, showing three converging curves: the Goal (based on specific system requirements); Predictions, based on factors known at the time (RP); and Estimates, based on actual code (RE)]

  25. Note: The Whole Thing Can Also Be Done in Terms of Other Factors
  • Mean time between failures
  • Probability of failure
  • Hazard function
  • Defects, etc.
  Regardless of how reliability is expressed, the idea is to:
  • Set goals based on system requirements
  • Determine the indicators for reliability
  • Improve early to achieve the desired goals

  26. What Factors does RADC Recommend? • RADC has studied many software development efforts and has developed a recommended set of factors to use

  27. Predicted Reliability
  RP = A * D * S
  • A, D, and S are factors known before you start developing the software

  28. Predicted Reliability: Factors
  A = application type
  • Similar to the COCOMO estimation model
  • Worse for embedded, real-time, etc.
  D = development environment
  • Tools, turnaround, etc.
  • Personnel capability is also included
  S = software development methods & process
  • Factors are included for each phase

  29. S = Software Characteristics S = S1 * S2

  30. S = Software Characteristics
  S1 = requirements & design methods & process
  • Structured Analysis, OO, etc. score higher
  • Less formal techniques score lower
  • Process management is a big factor
  S2 = implementation methods & process
  • Language
  • Coding standards
  • etc.

  31. S1 = Requirements and Design Methods S1 = SA * ST * SQ

  32. S1 = Requirements and Design Methods
  SA = anomaly management
  • Corrective action process
  • Risk tracking and contingency
  • etc.
  ST = traceability
  • Ability to trace design to requirements, etc.
  SQ = results of quality reviews
  • Number of defects found

  33. SQ = Results of Quality Reviews
  [Flow: Design -> Design Inspection finds 27 defects, SQ = .6 (too low) -> Design Repair -> Design Inspection finds 3 defects, SQ = .9 (OK) -> on to the next phase]
  Note: the values of SQ shown are illustrations; actual values depend on the size of the code and the defect definitions.

  34. General Algorithm
  [Flowchart: Is Si too low? Yes -> do something and redo. No -> go on.]
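The flowchart is essentially a rework loop. A sketch in Python, assuming a hypothetical threshold of 0.8 and a bounded number of redo cycles (neither number comes from the module):

    # "Si too low? -> do something, redo; otherwise go on"
    def gate(name, measure, improve, threshold=0.8, max_redos=5):
        for _ in range(max_redos):
            s_i = measure()
            if s_i >= threshold:
                return s_i           # go on to the next phase
            improve()                # do something, then redo
        raise RuntimeError(f"{name} still below {threshold} after rework")

    # Example: inspection results improve after each repair cycle
    sq_results = iter([0.6, 0.75, 0.9])
    print(gate("SQ", measure=lambda: next(sq_results), improve=lambda: None))  # 0.9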

  35. S2 = Software Implementation Methods and Process
  S2 = SL * SS * SM * SU * SX * SR

  36. S2 = Software Implementation Methods and Process
  SL = language type
  • Higher-order languages are better
  • Ada better than C, due to discipline, etc.
  SS = program size
  SM = modularity
  SU = extent of reuse
  SX = McCabe complexity (of the design)
  SR = review results (defects detected)
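Putting the prediction factors together, a sketch of RP = A * D * S with S expanded into S1 and S2; all factor values here are illustrative stand-ins, since real values come from the calibrated RADC tables:

    from math import prod

    A = 0.85                                      # application type
    D = 0.90                                      # development environment
    S1 = prod([0.9, 0.95, 0.8])                   # SA * ST * SQ
    S2 = prod([0.95, 0.9, 0.85, 0.9, 0.9, 0.8])   # SL * SS * SM * SU * SX * SR
    RP = A * D * (S1 * S2)                        # RP = A * D * S, where S = S1 * S2
    print(round(RP, 3))                           # 0.246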

  37. Examples of Improving S2
  • Reduce design complexity
  • Reduce the number of defects allowed before exiting a review or inspection
  • Use a better language
  • Reuse proven code
  • Make software more modular

  38. Estimation Measurements (Based on Actual Code)
  F = failure rate during testing
  T = test environment (for software)
  E = operational environment
  During software test: RE = F * T
  During system test: RE = F * E

  39. Diagram of Estimation Measurements
  [Timeline: predictive measurements in the early phases, then RE = F * T during software test and RE = F * E during system test]

  40. T = Test Environment
  TE = test effort -- amount of work spent testing
  TM = test methods -- sophistication, etc.
  TC = test coverage -- percent of paths tested, etc.
  T = TE * TM * TC

  41. E = Operating Environment
  EW = workload
  EV = input variability
  RE = F * E = F * EW * EV
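A sketch of the two estimation formulas with the environment factors expanded; the numbers are illustrative, and F is assumed to have already been mapped from the observed failure rate to a 0-1 factor (the module leaves that mapping to the RADC tables):

    F = 0.7                     # factor derived from failure rate during testing

    T = 0.9 * 0.85 * 0.8        # T = TE * TM * TC
    E = 0.95 * 0.92             # E = EW * EV

    print(round(F * T, 3))      # RE during software test: 0.428
    print(round(F * E, 3))      # RE during system test: 0.612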

  42. Summary of RADC Measurement Usage - At Start of Project
  1) Establish a reliability goal based on the objectives of the product
  2) Using the organization's history, characterize the process in terms of the correlation between expected defect or reliability level and the various RADC parameters
  3) Use this information to predict the reliability or defect level, and use that prediction to inform the planning process
  • e.g., use it to decide what language, CASE tools, etc. to use

  43. Summary of RADC Measurement Usage - During Project Execution
  4) Track actuals and compare with historical data and plans
  5) Adjust behavior if actuals are inadequate to meet goals
  6) Record actuals for future use

  44. Summary
  • Requirements volatility is easy to measure early in the project, and it can give you a useful prediction of reliability and stability
  • RADC and IEEE reliability models use different techniques for different phases of development
  • RADC uses facts about the development process to predict reliability

  45. References
  • Bowen, C., et al., Methodology for Software Reliability Prediction, RADC-TR-87-171, Vols. I & II, Rome Air Development Center, 1987.
  • Lyu, Michael R., Handbook of Software Reliability Engineering, IEEE, 1996, Catalog #RS00030, ISBN 0-07-039400-8.
  • Musa, John, Software Reliability Engineering: More Reliable Software, Faster Development and Testing, McGraw-Hill, ISBN 0-07-913271-5. Or http://members.aol.com/JohnDMusa
  • Xie, M., Software Reliability Modeling, World Scientific, London, 1991, ISBN 981-02-0640-2.

  46. END OF MODULE 14
