Software Quality Week 1996 • Experience-Driven Process Improvement Boosts Software Quality • Otto Vinter • Manager, Software Technology and Process Improvement • email: ovinter@bk.dk
Experience-Driven Process Improvement Boosts Software Quality • Brüel & Kjaer • Skodsborgvej 307, DK-2850 Naerum, Denmark • Tel: +45 4280 0500, Fax: +45 4280 1405 • High-Precision Electronic Instrumentation for • Sound • Vibration • Condition Monitoring • Gas Measurements
European System and Software Initiative (ESSI) • An Accompanying Measure to ESPRIT • The European Strategic Programme for Research and Development in Information Technologies • ESSI Objectives • Promote Improvements in the Software Development Process in Industry • Improve Current Practice by Applying the State-of-the-art in Software Engineering • Evaluate How the State-of-the-art Supports These Improvements • Disseminate Experience across Borders and Industrial Sectors • ESSI Lines of Action • Assessments • Process Improvement Experiments • Dissemination
The PET Process Improvement Experiment • The Prevention of Defects through Experience-Driven Test Efforts (PET) • PET Objectives • Extract knowledge on frequently occurring problems in the development process for embedded software • Change the development process by defining the optimum set of available methods and tools to prevent these problems from reappearing • Measure the impact of the changes in a real-life development project • Partner in the Consortium: DANFOSS • a leading manufacturer of mechatronic products • performing a similar experiment
Defect Analysis from Error Logs • Error Logs Analysed • Embedded software development projects • Project sizes approx. 7 man-years • 1100 bugs analysed from the error logs • Bugs range from serious defects to suggestions for improvement • Bug reporting starts in the integration phase • Bug reports cover a period extending to 18 months after first release
Defect Analysis from Error Logs • Bug Categorisation • Based on the bug classification scheme by Boris Beizer: • Boris Beizer: Software Testing Techniques, Second Edition, Van Nostrand Reinhold • a comprehensive set of bug categories • contains statistics from many projects • categorisation performed in teams • 1-2 developers and 1-2 process consultants • approx. 5 minutes per bug
The Beizer Bug Classification Scheme • 1. Requirements and Features • 2. Functionality as Implemented • 3. Structural Bugs • 4. Data • 5. Implementation (standards violation and documentation) • 6. Integration • 7. System and Software Architecture • 8. Test Definition or Execution Bugs • 9. Other Bugs, Unspecified • Each category is detailed to a depth of up to 4 levels
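A minimal sketch (not part of the PET material) of how bug reports tagged with these top-level Beizer categories could be tallied into a percentage distribution like the comparison shown below; the sample data and category bookkeeping are purely illustrative.

```c
#include <stdio.h>

/* Top-level Beizer categories; the sub-levels (up to 4 deep) are omitted here. */
enum beizer_category {
    REQUIREMENTS = 1, FUNCTIONALITY, STRUCTURAL, DATA, IMPLEMENTATION,
    INTEGRATION, ARCHITECTURE, TEST, UNSPECIFIED, NUM_CATEGORIES
};

static const char *category_name[NUM_CATEGORIES] = {
    "", "Requirements", "Functionality", "Structural", "Data",
    "Implementation", "Integration", "Architecture", "Test", "Unspecified"
};

int main(void)
{
    /* Hypothetical sample of categorised bug reports (one entry per bug). */
    enum beizer_category bugs[] = {
        REQUIREMENTS, FUNCTIONALITY, REQUIREMENTS, STRUCTURAL, DATA,
        TEST, FUNCTIONALITY, REQUIREMENTS, INTEGRATION, UNSPECIFIED
    };
    int n = (int)(sizeof bugs / sizeof bugs[0]);
    int count[NUM_CATEGORIES] = { 0 };

    for (int i = 0; i < n; i++)
        count[bugs[i]]++;

    /* Print the percentage distribution for comparison with Beizer's statistics. */
    for (int c = REQUIREMENTS; c < NUM_CATEGORIES; c++)
        printf("%d. %-15s %5.1f %%\n", c, category_name[c], 100.0 * count[c] / n);
    return 0;
}
```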
Defect Analysis from Error Logs

Category | Our Analysis | Beizer Statistics
1. Requirements | 23.5 % | 8.1 %
2. Functionality | 24.3 % | 16.2 %
3. Structural | 20.9 % | 25.2 %
4. Data | 9.6 % | 22.4 %
5. Implementation | 4.3 % | 9.9 % (5.9 %)
6. Integration | 5.2 % | 9.0 %
7. Architecture | 0.9 % | 1.7 %
8. Test | 6.9 % | 2.8 %
9. Unspecified | 4.3 % | 4.7 %
TOTAL | 100.0 % | 100.0 %
Defect Analysis from Error Logs • Other Questions to Capture Subjective Information on the Bugs • when the bug was found in the development life-cycle • the frequency of bugs found over time • in which part (module) of the product the bug occurred • who found the bug • what could have prevented the bug
Defect Analysis from Error Logs • Results of the Analysis • no special bug class dominates embedded software development • requirements problems, and requirements-related problems, are the prime bug cause (36%) • problems due to lack of systematic unit testing are the second-largest bug cause (22%) • management attention to software process issues • Actions to Improve Unit Testing • introduction of static and dynamic analysis • host/target tools • a basic set of metrics
The PET Experiment • Original Objective: • to implement changes to the testing process in the development of the next version of a product and measure the results • Revised Objective: • to assess a trial-release of a product • to increase test coverage to industry best practice (branch coverage > 85%) • to measure the effect after production release • and determine the effectiveness of static/dynamic analysis
Static / Dynamic Analysis Results • 108 Bugs Found before Trial Release • 73 bugs found by the regression test suite • 66% branch coverage achieved • 105 person-days used • 60 Bugs Found by Static / Dynamic Analysis • 33 bugs found by static analysis • 27 bugs found by dynamic analysis • 93% branch coverage achieved • 40 person-days used • 46% improvement in testing efficiency
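The 46% figure can be reproduced if testing efficiency is taken as bugs found per person-day; the slide does not state the exact formula, so the following sketch rests on that assumption.

```c
#include <stdio.h>

int main(void)
{
    /* Figures from the slide above. */
    double conventional_bugs = 108.0, conventional_days = 105.0;
    double analysis_bugs     = 60.0,  analysis_days     = 40.0;

    /* Assumption: "testing efficiency" = bugs found per person-day. */
    double conventional_eff = conventional_bugs / conventional_days; /* ~1.03 */
    double analysis_eff     = analysis_bugs / analysis_days;         /*  1.5  */

    printf("Improvement: %.0f %%\n",
           100.0 * (analysis_eff - conventional_eff) / conventional_eff); /* ~46 % */
    return 0;
}
```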
Results of Static Analysis • Distribution of Bug Types • Use of uninitialised variable: 15 % • Variable defined but not used in scope: 24 % • Variable redefined with no use in between: 36 % • Parameter mismatch: 6 % • Unreferenced procedure: 18 % • Declared but not used variable: 0 % • Other types of static bugs: 0 % • Complexity Metrics • 88 % correlation between procedures with McCabe > 10 and procedures with XLOC > 100 • Neither McCabe's metric nor code size correlated with bugs per line of code
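A hypothetical C fragment (not taken from the analysed product) showing the kinds of anomalies behind the distribution above; a typical static analyser would flag the commented lines.

```c
#include <stdio.h>

static void unreferenced_proc(void)  /* unreferenced procedure: never called */
{
}

static int scale(int value, int factor)
{
    return value * factor;
}

static int process_sample(int raw)
{
    int offset;                      /* never assigned a value                */
    int gain;
    int spare;                       /* declared but not used in scope        */

    gain = 2;                        /* variable defined ...                  */
    gain = 4;                        /* ... and redefined with no use between */

    /* Use of an uninitialised variable (offset).  A parameter mismatch would
       look similar: e.g. a call to scale() passing one argument where the
       defining module expects two. */
    return scale(raw + offset, gain);
}

int main(void)
{
    printf("%d\n", process_sample(10));
    return 0;
}
```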
Dynamic Analysis Results • Test System for Dynamic Analysis
Dynamic Analysis Results • Coverage = (# Tested Branches + # Inspected Branches) / (# Total Branches - # Dead Branches) >= 85% • Final Coverage 93% • Tested Branches 75% • Dead Branches 9.5% • Inspected Branches 9.5% • Instrumented Code Expansion approximately 40% • Massive Data Output During Execution (1 GB)
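Applying the coverage formula to the reported figures (each expressed as a percentage of the total branch count) reproduces the 93% final coverage; a minimal sketch:

```c
#include <stdio.h>

int main(void)
{
    /* Branch counts expressed as percentages of the total (from the slide). */
    double total = 100.0, tested = 75.0, inspected = 9.5, dead = 9.5;

    /* Coverage = (tested + inspected) / (total - dead); target is >= 85 %. */
    double coverage = (tested + inspected) / (total - dead);

    printf("Coverage: %.0f %%\n", 100.0 * coverage);   /* prints ~93 % */
    return 0;
}
```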
Comparison of Test Efficiency • Hours per bug • Static Analysis 1.6 • Current Development 7.2 • Dynamic Analysis 9.0 • Current Maintenance 14.0
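As a rough cross-check (not from the source), combining the earlier bug counts with these hours-per-bug figures approximately reproduces the 40 person-days spent on static/dynamic analysis, assuming about 7.4 effective working hours per person-day:

```c
#include <stdio.h>

int main(void)
{
    /* Figures from the preceding slides. */
    double static_bugs  = 33.0, static_hours_per_bug  = 1.6;
    double dynamic_bugs = 27.0, dynamic_hours_per_bug = 9.0;

    /* Assumption: ~7.4 effective working hours per person-day. */
    double hours_per_day = 7.4;

    double total_hours = static_bugs * static_hours_per_bug
                       + dynamic_bugs * dynamic_hours_per_bug;   /* ~296 h */

    printf("Person-days: %.0f\n", total_hours / hours_per_day);  /* ~40    */
    return 0;
}
```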
Measurements on Production Release • 75% Reduction in Production-Release Bugs Compared to the Trial Release • 70% of the Production-Release Bugs Were Requirements Bugs => Increased focus on improving the requirements process
Conclusions on Static/Dynamic Analysis • Performance Improvement • an efficient way to remove bugs • marginal delay to the trial release date • marginal increase in the testing resources required • immediate payback on tools, training & implementation • remarkably improved test coverage • increased quality • reduced maintenance costs • increased motivation • applicable to the whole software development industry, including embedded software
In Conclusion • Defect Analysis from Error Logs • is a simple and effective way to assess the software development process • The Analysis of Bugs • has had a significant impact on the way we now look at our software development process • has established a basic set of metrics for test activities • is a starting point for process improvement programmes in companies