Chapter 19: Quality Models and Measurements • Types of Quality Assessment Models • Data Requirements and Measurement • Comparing Quality Assessment Models • Measurement and Model Selection
Introduction • Analytical models provide quantitative assessments of selected quality characteristics • Applied over time, they support prediction of future quality • The purpose of measurement and analysis is to • take corrective actions => improvement • provide timely feedback/assessment • identify problematic areas • support prediction, anticipating and planning for scheduling and resource allocation
Models for Quality Assessment • Direct indicators of quality • defect measurements - defect density for correctness • probability of failure-free operation for reliability • measured at the end of software development • Indirect indicators of quality • product internal attributes (e.g., KLOC, McCabe's cyclomatic complexity) • interaction between product and user • development process • general characteristics of the product (e.g., market segment such as telecom) • may be available early enough to make predictions
Generalized Models for Quality Assessment • Require little or no project-specific data • Three categories • Overall model – provides a single estimate of overall product quality • Segmented model – provides different quality estimates for different industrial segments • Dynamic model – provides quality trend or distribution over time or development process
Overall Models • Most general subtype of generalized quality models • Provide a rough estimate of product quality, e.g., defect density = total defects / product size • Lump all products together – an abstraction of commonly observed facts about quality that hold across application domains, e.g. • 80:20 rule, which states that 80% of defects are concentrated in 20% of product modules/components • linkages of software defects, risk, and process maturity to quality
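A minimal sketch of these two ideas, using hypothetical module names and counts: it computes overall defect density (total defects / product size) and checks how concentrated defects are in the top 20% of modules.

```python
# Hypothetical per-module defect counts and sizes (KLOC).
modules = {
    "parser":  {"defects": 120, "kloc": 4.0},
    "ui":      {"defects": 15,  "kloc": 6.5},
    "network": {"defects": 90,  "kloc": 3.0},
    "storage": {"defects": 10,  "kloc": 5.5},
    "report":  {"defects": 5,   "kloc": 6.0},
}

total_defects = sum(m["defects"] for m in modules.values())
total_kloc = sum(m["kloc"] for m in modules.values())

# Overall model: defect density = total defects / product size.
print(f"defect density: {total_defects / total_kloc:.1f} defects/KLOC")

# 80:20 check: share of defects in the top 20% of modules.
ranked = sorted(modules.values(), key=lambda m: m["defects"], reverse=True)
top = ranked[: max(1, len(ranked) // 5)]
share = sum(m["defects"] for m in top) / total_defects
print(f"top 20% of modules hold {share:.0%} of defects")
```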
Segmented Models • Abstraction of commonly observed facts about quality over product market segments, e.g. • reliability levels (measured by failure rate) • safety-critical SW – medical devices and nuclear reactors • commercial SW – telecommunications and business • auxiliary SW – games and low-cost PC SW
Dynamic Models • Provide information about quality over time or development phases, e.g. • defect distribution profile over dev. phases • Putnam model – effort and defect profiles over time • reliability growth during product testing • Can be combined with segmented models to give us segmented dynamic models
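As a concrete example of a dynamic model, the sketch below tabulates expected cumulative defects under an exponential reliability growth curve of the Goel-Okumoto form, mu(t) = a(1 - e^(-bt)); the parameter values are assumed for illustration, not taken from the text.

```python
import math

def goel_okumoto(t, a, b):
    """Expected cumulative defects by time t under the
    Goel-Okumoto reliability growth model: a * (1 - e^(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

# Assumed parameters: a = total expected defects, b = detection rate.
a, b = 250.0, 0.15
for week in range(0, 13, 4):
    print(f"week {week:2d}: {goel_okumoto(week, a, b):6.1f} defects expected")
```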
Product-Specific Models • Provide more precise quality assessments using product-specific data • Three categories • Semi-customized models – extrapolate product history to predict quality for the current project (Table 2) • Observation-based models – estimate quality based on observations from the current project • Measurement-driven predictive models – establish predictive relations between various early measurements and product quality
Semi-Customized Models • Use general characteristics and historical information about the product, process, or environment • Provide quality extrapolations (as sketched below) • Examples: • Defect removal models (DRMs) provide a defect distribution profile over development phases based on previous releases of the same product • Combining a DRM with the orthogonal defect classification (ODC) model – which profiles defects by the phases in which they were injected and discovered, and by defect categories => identify high-defect areas
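A minimal sketch of a DRM-style extrapolation, assuming a hypothetical phase profile from a previous release and a projected defect total for the current one:

```python
# Defect removal profile from a previous release: fraction of all
# defects removed in each development phase (hypothetical values).
previous_profile = {
    "requirements": 0.10,
    "design":       0.25,
    "coding":       0.35,
    "testing":      0.30,
}

# Semi-customized extrapolation: scale the historical profile by the
# defect total projected for the current release (assumed here).
projected_total = 400
for phase, fraction in previous_profile.items():
    print(f"{phase:12s}: expect ~{fraction * projected_total:.0f} defects")
```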
Observation-Based Models • Relate observations of software system behavior to information about related activities for more precise quality assessments, e.g. • SRGMs (software reliability growth models) – estimate model parameters based on observation data • Usually use data from the current project
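One way such parameter estimation might look in practice: fitting the Goel-Okumoto mean value function from the earlier sketch to cumulative failure observations with scipy. The SRGM form, the weekly data, and the starting guesses are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def mu(t, a, b):
    """Goel-Okumoto mean value function: expected cumulative failures."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical observations from the current project:
# cumulative failures recorded at the end of each test week.
weeks = np.arange(1, 11)
observed = np.array([30, 55, 74, 90, 102, 112, 119, 125, 130, 133])

# Estimate a (total expected failures) and b (detection rate).
(a_hat, b_hat), _ = curve_fit(mu, weeks, observed, p0=(150.0, 0.1))
print(f"a = {a_hat:.1f}, b = {b_hat:.3f}")
print(f"remaining defects estimate: {a_hat - observed[-1]:.1f}")
```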
Measurement-Driven Predictive Models • Establish predictive relations between quality and other measurements from historical data • Provide early predictions of quality • Identify problems early for timely actions • Use statistical analysis techniques / learning algorithms (see the sketch after this slide) • Examples: • relationships between defect fixes and design and code measurements • high-defect modules of legacy products associated with numerous changes and high data complexity • high-defect modules of new products associated with complex design and control structures
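A minimal sketch of such a predictive relation, using logistic regression over hypothetical module measurements (cyclomatic complexity and change counts) to flag likely high-defect modules before defect data for them exist:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical early measurements per module from a past release:
# [cyclomatic complexity, number of changes], with
# 1 = high-defect module, 0 = low-defect module.
X = [[25, 40], [8, 5], [30, 55], [6, 3], [18, 22], [4, 2], [22, 35], [7, 6]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

# Fit a predictive relation between early measurements and quality.
model = LogisticRegression().fit(X, y)

# Predict risk for new modules before any defect data are available.
new_modules = [[28, 45], [5, 4]]
print(model.predict(new_modules))        # predicted high-defect flags
print(model.predict_proba(new_modules))  # class probabilities
```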
Model Comparison and Interconnections • Comparisons based on • usefulness of modeling results, accuracy of quality estimates, and applicability of models to different environments • Model interconnections examined in two opposite directions • Customization required of generalized quality models to create product-specific models • Generalization of product-specific models when enough empirical evidence from different products or projects is accumulated
Comparisons • Usefulness must be weighed against cost (such as the cost of collecting data) • Generalized models are more widely applicable and less expensive to use (they do not require product-specific measurements) • Generalized models are more useful in the product planning stage and early development phases, when product-specific data are unavailable; when historical data exist, semi-customized models are better
More Comparisons • Observation-based and measurement-driven predictive models become better suited to managing QA activities and later development and maintenance activities as more measurement data are collected
More Comparisons • Generalized models and product-specific models have counterparts in one another • Generalized models can be customized into product-specific ones • Product-specific models can be generalized • Which direction applies depends on the kind of measurement data collected and the analysis results available
Data Requirements and Measurement • Different models have different data requirements (direct and/or indirect) • Generalized models • based on industrial averages and general profiles for all products or for a product segment • no data from the current project are needed directly • but measurements taken on the current project can be accumulated into the empirical base to calibrate models for future applications
Data Requirements and Measurement for Product-Specific Models • Measurement-driven models • need direct quality measurements and indirect quality measurements (process, product, and people) • need early measurements from historical / current releases • Semi-customized models • indirect environmental measurements to characterize the current project • extrapolate quality estimates from previous releases • use coarse-grained activity measures
Data Requirements and Measurement for Product-Specific Models • Observation-based models • direct quality measurements • environmental characteristics assumed
Models Supported by Kinds of Data • Direct and indirect quality measurements from industry form the empirical basis for generalized models • Direct quality measurements are used in all product-specific models • extrapolated from previous releases in semi-customized models • related to development activities in observation-based models • predicted from early measurements in measurement-driven models
Models Supported by Kinds of Data • Environmental measurements mainly used in semi-customized models • characterize the current product to make extrapolations • Product internal measurements used in measurement-driven predictive models • early assessment of product quality • identify problematic areas • Activity measurements used by various models • coarse-grained measures used in semi-customized models, e.g., defect data grouped by phase • fine-grained measures used in observation-based models • Summarized in Figure 19.3
Selecting Measurements and Models • Use a goal-oriented approach (GQM: goal-question-metric) • Set specific quality goals (e.g., high reliability) • Choose specific quality assessment models that can answer our concerns (e.g., SRGMs) • Choose appropriate measurements (e.g., failure and test execution time measurements) • Examples A - C in text.
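A small illustration of the GQM chain from goals through questions to models and measurements; the entries are illustrative and do not reproduce Examples A - C from the text.

```python
# Illustrative GQM mapping: quality goal -> question -> model -> measurements.
gqm = {
    "high reliability": {
        "question": "What failure rate can users expect at release?",
        "model": "software reliability growth models (SRGMs)",
        "measurements": ["failure data", "test execution time"],
    },
    "low defect density": {
        "question": "Which modules will contain the most defects?",
        "model": "measurement-driven predictive models",
        "measurements": ["design/code metrics", "defect fixes"],
    },
}

for goal, plan in gqm.items():
    print(f"goal: {goal}")
    print(f"  question: {plan['question']}")
    print(f"  model: {plan['model']}")
    print(f"  measurements: {', '.join(plan['measurements'])}")
```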