SMU CSE 8314 / NTU SE 762-N Software Measurement and Quality Engineering Module 14: Software Reliability Models - Part 2
Contents • Requirements Volatility • RADC Model • Summary
Requirements Volatility [NOT an IEEE Metric, but Similar] Goal: Determine the stability of the requirements, so you can decide: • How far along you really are in your development, • How reliable your software is likely to be, and • What type of process to use for software development
Requirements Volatility: General Rules of Thumb • If requirements are stable, use “waterfall” or similar processes • If requirements are unstable, use incremental or evolutionary development
Requirements Volatility: Primitive Data Collected R = # of original requirements • In the original specification, for example C = # of changes to original requirements A = # of additions to original requirements D = # of deletions from original requirements
Requirements Volatility: Equation V = (C + A + D) / R • Very large V means unstable requirements • You measure periodically to see if things are stabilizing
Requirements Volatility: Usage Notes - 1 • In a mature development effort for a production product, V should flatten out in the design phase, indicating stabilization of requirements • If it continues to rise, it means you have an unstable development and should not be proceeding to later phases yet, unless this is a prototype effort
Requirements Volatility: Usage Notes - 2 • If V is large, the implication is that the current software development effort is really a requirements definition effort, which suggests a prototype, incremental, or evolutionary development approach • If this is intended to be the final development, do not go on to the next step of the process yet
Requirements Volatility: Variation T = Number of “TBD” (“to be determined”) requirements in the original specification • This gives you more insight into changes that MUST happen (the TBDs) • It also gives you more insight into stability over time V = (C + A + D + T) / R
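As a rough illustration of the two formulas above, here is a minimal Python sketch, assuming you already track the raw counts; the function name and the sample numbers are purely illustrative.

```python
# Minimal sketch of the requirements volatility metric described above.
# The counts would normally come from your requirements management or
# change-control records; the numbers here are made up.

def volatility(changes, additions, deletions, original, tbds=0):
    """Return V = (C + A + D [+ T]) / R for one measurement period."""
    if original == 0:
        raise ValueError("need a non-empty baseline of original requirements")
    return (changes + additions + deletions + tbds) / original

# Example: 120 original requirements; 18 changed, 9 added, 4 deleted, 7 TBD
v_basic = volatility(changes=18, additions=9, deletions=4, original=120)
v_tbd = volatility(changes=18, additions=9, deletions=4, original=120, tbds=7)
print(f"V = {v_basic:.2f}, V with TBDs = {v_tbd:.2f}")
```

Plotting V from successive measurement periods is what the following slides illustrate.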
You Can Also Learn a Lot by Graphing the Individual Factors of the Equation R = # of original requirements • In original specification, for example C = # of changes to original requirements A = # of additions to original requirements D = # of deletions from original requirements T = # of TBDs
Requirements Volatility Factors: Sample Graph [chart plotting R, V, T, C, A, and D over time]
Thresholds and Targets • The nature of the development determines what thresholds should be established • In a supposedly stable development, thresholds for stability should be very low; instability indicates that development effort may be wasted, with lots of rework ahead Continued ...
Thresholds and Targets • In a development that is expected to be volatile, thresholds might be high and targets would be established to determine when stability has been achieved. • Historical data is essential for establishing reliable thresholds and targets
RADC Measurements Rome Air Development Center, U.S. Air Force, Rome, New York
RADC Measurements • These are based on a large amount of data collected from U.S.A.F. Projects: • 5 million lines of code • 59 projects • Dating back to 1976 • 24 reliability models were studied • Used as the basis for several government standards
Like IEEE, These Measurements Break the Process into Phases [diagram: Requirements, Design, Code, and Test phases; reliability is predicted up to the start of coding and estimated from the start of coding until the software is released]
Background of RADC Measurements Assumptions: • # of faults is fixed at the start of formal test • # of faults correlates to # of failures (failures are easier to measure and are the things the customer cares about) Goals: • Get the number of faults as low as possible • Predict number of faults as early as possible • Use Predictions to Manage the Process
Basic Approach to RADC Measurements for Reliability (one variant) • Each factor that influences reliability is expressed as a number N, with 0 < N < 1 • N is the reliability impact of the individual factor: N near 0 means it lowers reliability, N near 1 means higher reliability • The product of all these factors is the net reliability • Each factor may itself be defined as the product of other, more detailed factors
RADC Concept [factor tree: R breaks down into F1, F2, and F3; F2 breaks down into F21, F22, and F23] R = F1 * F2 * F3 F2 = F21 * F22 * F23 Assumptions: factors are Bayesian, independent, and homogeneous
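A minimal sketch of this multiplicative structure, with made-up factor values (these are not RADC's actual tables):

```python
# Net reliability as a product of factor scores, each of which may itself
# be a product of sub-factors. All values below are illustrative.
from math import prod

F2 = prod([0.90, 0.80, 0.95])   # F2 = F21 * F22 * F23
R = prod([0.85, F2, 0.70])      # R  = F1 * F2 * F3
print(f"F2 = {F2:.3f}, R = {R:.3f}")
```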
Use of RADC Formula - I • At the start of the project, you compute R and use it as the “current reliability prediction” • As you go through the project, you try to improve the individual factors Fi, thus improving the value of R
Use of RADC Formula - II • e.g. if F3 represents programmer capability and it has a value of 0.6, you could improve it to 0.7 or 0.8 by hiring more capable programmers or by training your staff in defect reduction techniques • Eventually, you base your values on actual results rather than on predictions
Reliability Expectation Improves Throughout the Lifecycle [chart: the goal, based on the specific system; predictions (RP), based on factors known at the time; and estimates (RE), based on actual code, converging toward the goal over the lifecycle]
Note: The Whole Thing Can Also Be Done in Terms of Other Factors • Mean time between failures • Probability of failure • Hazard function • Defects, etc. • Regardless of how it is expressed, the idea is to: • Set goals based on system requirements • Determine the indicators for reliability • Improve early to achieve desired goals
What Factors does RADC Recommend? • RADC has studied many software development efforts and has developed a recommended set of factors to use
Predicted Reliability RP = A * D * S • A, D and S are factors known before you start developing the software
Predicted Reliability: Factors A = Application type • Similar to the COCOMO estimation model • Worse for embedded, real-time, etc. D = Development environment • Tools, turnaround, etc. • Personnel capability is also included S = Software development methods & process • Factors are included for each phase
S = Software Characteristics S = S1 * S2
S = Software Characteristics S1 = Requirements & design methods & process • Structured Analysis, OO, etc. score higher • Less Formal techniques score lower • Process Management is a Big Factor S2 = Implementation methods & process • Language • Coding standards • etc.
S1 = Requirements and Design Methods S1 = SA * ST * SQ
S1 = Requirements and Design Methods SA = Anomaly management • Corrective action process • Risk tracking and contingency • etc. ST = Traceability • Ability to trace design to requirements, etc. SQ = Results of quality reviews • Number of defects found
SQ = Results of Quality Reviews [example flow: Design, then Design Inspection finding 27 defects (SQ = .6, too low), then Design Repair, then Design Inspection finding 3 defects (SQ = .9, OK), then on to the next phase] Note: the values of SQ shown are illustrations; actual values depend on the size of the code and the defect definitions.
General Algorithm [flowchart: Is Si too low? If yes, do something and redo; if no, go on]
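A minimal sketch of that gate, assuming a made-up threshold and a placeholder improvement action:

```python
# Keep reworking a phase until its factor score clears the threshold,
# then move on. Threshold and improvement step are illustrative.

def gate_phase(score, threshold, improve):
    """Rework the phase until its factor score reaches the threshold."""
    while score < threshold:        # Si too low?
        score = improve(score)      # do something, then redo
    return score                    # go on

# Example: design repair and re-inspection raise SQ in steps of 0.15
final_sq = gate_phase(0.6, threshold=0.85, improve=lambda s: min(s + 0.15, 1.0))
print(f"SQ after rework: {final_sq:.2f}")
```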
S2 = Software Implementation Methods and Process S2 = SL * SS * SM * SU * SX * SR
S2 = Software Implementation Methods and Process SL = Language type • Higher-order languages are better • Ada better than C due to discipline, etc. SS = Program size SM = Modularity SU = Extent of reuse SX = McCabe complexity (of design) SR = Review results (defects detected)
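Putting the prediction slides together, here is a minimal sketch of the full factor tree; every numeric value is illustrative, since real values come from the RADC tables and your own historical data:

```python
# Predicted reliability: RP = A * D * S, where S = S1 * S2,
# S1 = SA * ST * SQ, and S2 = SL * SS * SM * SU * SX * SR.
# All numbers below are made up for illustration.
from math import prod

A = 0.80                                         # application type
D = 0.90                                         # development environment
S1 = prod([0.90, 0.95, 0.85])                    # SA * ST * SQ
S2 = prod([0.95, 0.90, 0.90, 0.85, 0.90, 0.90])  # SL * SS * SM * SU * SX * SR
S = S1 * S2
RP = A * D * S
print(f"S1={S1:.3f}  S2={S2:.3f}  S={S:.3f}  RP={RP:.3f}")
```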
Examples of Improving S2 • Reduce design complexity • Reduce number of defects allowed before exiting a review or inspection • Use a better language • Reuse proven code • Make software more modular
Estimation Measurements (Based on Actual Code) F = Failure rate during testing T = Test environment (for software) E = Operational environment During software test: RE = F * T During system test: RE = F * E
Diagram of Estimation Measurements [timeline: predictive measurements come first; RE = F * T applies during software test; RE = F * E applies during system test]
T = Test Environment TE = Test Effort -- amount of work spent testing TM = Test Methods -- sophistication, etc. TC = Test Coverage -- percent of paths tested, etc. T = TE * TM * TC
E = Operating Environment EW = Workload EV = Input Variability RE = F * E = F * EW * EV
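A minimal sketch of the estimation formulas from the last few slides; the factor values are placeholders, and in practice F is derived from the failure rate actually observed in test:

```python
# Estimated reliability during test, per the formulas above.
# All values are illustrative placeholders.
F = 0.92                        # from the failure rate observed during testing
TE, TM, TC = 0.90, 0.85, 0.80   # test effort, test methods, test coverage
EW, EV = 0.90, 0.95             # workload, input variability

RE_software_test = F * (TE * TM * TC)   # RE = F * T
RE_system_test = F * (EW * EV)          # RE = F * E
print(f"RE (software test) = {RE_software_test:.3f}")
print(f"RE (system test)   = {RE_system_test:.3f}")
```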
Summary of RADC Measurement Usage - At Start of Project 1) Establish a reliability goal based on the objectives of the product 2) Using the organization’s history, characterize the process in terms of the correlation between the expected defect or reliability level and the various RADC parameters 3) Use this information to predict the reliability or defect level, and use that prediction to inform the planning process • e.g. use it to decide what language, CASE tools, etc.
Summary of RADC Measurement Usage - During Project Execution 4) Track actuals and compare with historical data and plans 5) Adjust behavior if actuals are inadequate to meet goals 6) Record actuals for future use
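As a small sketch of steps 4 and 5, one way to flag when actuals fall behind the plan (threshold and values are illustrative):

```python
# Compare measured reliability against the planned value and flag when
# corrective action is needed. The tolerance is an illustrative choice.

def track(actual, planned, tolerance=0.05):
    """Return a note on whether behavior needs adjusting to meet the goal."""
    if planned - actual > tolerance:
        return f"actual {actual:.2f} trails plan {planned:.2f}: adjust the process"
    return f"actual {actual:.2f} is on track against plan {planned:.2f}"

print(track(actual=0.78, planned=0.90))
```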
Summary • Requirements volatility is easy to measure early in the project and it can give you a useful prediction of reliability and stability • RADC and IEEE reliability models use different techniques for different phases of development • RADC uses facts about the development process to predict reliability
References • Bowen, C., et al., Methodology for Software Reliability Prediction, RADC-TR-87-171, Vol. I & II, Rome Air Development Center, 1987. • Lyu, Michael R., Handbook of Software Reliability Engineering, IEEE, 1996, Catalog # RS00030, ISBN 0-07-039400-8. • Musa, John, Software Reliability Engineering: More Reliable Software, Faster Development and Testing, McGraw-Hill, ISBN 0-07-913271-5; or http://members.aol.com/JohnDMusa • Xie, M., Software Reliability Modeling, World Scientific, London, 1991, ISBN 981-02-0640-2.
END OF MODULE 14