This study aims to prevent software faults and improve software quality by analyzing empirical data and providing recommendations for fault prevention and elimination throughout the software life cycle. The study uses automated data extraction and classification techniques to identify common patterns and dependencies among fault types. Lessons learned and recommendations are compiled for use by the pilot project and across the Agency.
Preventing and Eliminating Software Faults through the Life Cycle
PI: Katerina Goseva-Popstojanova
Student: Margaret Hamill
Lane Dept. of Computer Science and Electrical Engineering
West Virginia University, Morgantown, WV
E-mail: Katerina.Goseva@mail.wvu.edu
Problem
NASA spends time and effort to track problem reports/change data for every project. These repositories are rich, underused sources of information about the way software systems fail and the software faults that cause these failures.
Our goal: Based on systematic and thorough analysis of the available empirical data, build quantitative and qualitative knowledge that contributes toward improvement of software quality by
• preventing the introduction of faults into the system
• more efficiently eliminating them through the life cycle
• compiling lessons learned & recommendations to be used by the pilot project and throughout the Agency
Approach
• Explore change tracking systems & automate data extraction (a minimal sketch of this step follows this list)
• Quantify multiple dimensions for each fault type
• Identify common patterns and unusual dependencies
• Classify fault/failure data from the pilot project: propose an appropriate classification scheme and refine it
• Compile a checklist to support avoidance & elimination of different fault types
• Identify the most frequent fault types & most frequent failure types
• Compile a Lessons Learned & Recommendations document
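Below is a minimal sketch of what the data extraction and classification step could look like. It is illustrative only: the CSV export name, the column names (id, description), and the keyword rules are assumptions for this sketch, not the classification scheme actually used in this work.

```python
# Minimal sketch of automated SCR extraction and fault-type classification.
# Assumptions: the change tracking system can export SCRs to CSV with
# hypothetical columns "id" and "description"; the keyword rules below are
# illustrative placeholders only.
import csv
from collections import Counter

KEYWORD_RULES = {
    "requirements fault": ["requirement", "spec"],
    "coding fault": ["code", "logic", "off-by-one"],
    "data problem": ["data table", "parameter value", "database"],
}

def classify(description: str) -> str:
    """Assign a fault type based on simple keyword matching."""
    text = description.lower()
    for fault_type, keywords in KEYWORD_RULES.items():
        if any(k in text for k in keywords):
            return fault_type
    return "unclassified"

def fault_type_distribution(csv_path: str) -> Counter:
    """Count SCRs per fault type from a CSV export of the change tracking system."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[classify(row["description"])] += 1
    return counts

if __name__ == "__main__":
    for fault_type, n in fault_type_distribution("scr_export.csv").most_common():
        print(f"{fault_type}: {n}")
```

In practice the classification scheme would be proposed and then refined iteratively with the project, as the approach above indicates.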
Pilot study: Basic facts
• A large NASA mission consisting of millions of lines of code
• 21 CSCIs, over 8,000 files
• Developed iteratively at two different locations
• Each CSCI has multiple releases with added functionality
• We analyzed over 2,800 Software Change Requests (SCRs) collected over a period of almost 10 years
Technical challenges
• Assuring data quality is an important step of any empirical research effort; inaccurate data may lead to misleading observations
• Both the IV&V team and the project team have been extremely valuable in helping us to understand the change tracking system, determine the meaning of different attributes, and verify the quality of the data
• The research approach and analysis techniques can be used by any project that tracks problem reports/change data
• However, due to the lack of a unified change tracking system, some project-specific work on exploring the data format and automating data extraction may be needed (a simple data-quality check is sketched below)
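A data-quality pass of the kind described above might look like the following sketch. The field names and allowed severity values are hypothetical placeholders, not the mission's actual change-tracking attributes.

```python
# Minimal sketch of a data-quality check over extracted SCR records.
# Field names and allowed values are assumptions for illustration only.
from typing import Iterable

REQUIRED_FIELDS = ("id", "fault_type", "severity", "detection_activity")
ALLOWED_SEVERITIES = {"safety-critical", "major", "minor"}

def quality_issues(records: Iterable[dict]) -> list[str]:
    """Return human-readable descriptions of incomplete or suspicious records."""
    issues = []
    for rec in records:
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:
            issues.append(f"SCR {rec.get('id', '?')}: missing {', '.join(missing)}")
        if rec.get("severity") and rec["severity"] not in ALLOWED_SEVERITIES:
            issues.append(f"SCR {rec['id']}: unexpected severity {rec['severity']!r}")
    return issues

if __name__ == "__main__":
    sample = [
        {"id": "SCR-001", "fault_type": "coding fault", "severity": "major",
         "detection_activity": "testing"},
        {"id": "SCR-002", "fault_type": "", "severity": "unknown",
         "detection_activity": "analysis"},
    ]
    for issue in quality_issues(sample):
        print(issue)
```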
Current capability
• Fault localization [SAS 07] (illustrated in the sketch after this list)
  • 58-62% of failures map to faults in more than one file
• Sources of failures (i.e., types of faults)
  • Dominating fault types: requirements faults (33%), coding faults (33%), data problems (14%)
• Activities when the problem was discovered (e.g., inspection, testing, analysis, on-orbit)
  • Almost 50% discovered by analysis, only 3% on-orbit
• Severity
  • Around 8% of SCRs are safety critical, less than 1% on-orbit
• Lessons learned & recommendations for product and process improvements
  • Compiled a document for internal use based on our results and the feedback from the IV&V team and project team
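The fault localization result above (58-62% of failures map to faults in more than one file) can be derived from the set of files changed to fix each SCR. The sketch below shows one way to compute that fraction; the scr_to_files mapping and file names are hypothetical, and this is not the analysis code from [SAS 07].

```python
# Minimal sketch: fraction of failures whose fix touched more than one file.
# The mapping (SCR id -> files modified to fix the problem) is a hypothetical
# stand-in for data extracted from the change tracking system.

def multi_file_fraction(scr_to_files: dict[str, set[str]]) -> float:
    """Fraction of SCRs whose fix spans more than one source file."""
    if not scr_to_files:
        return 0.0
    multi = sum(1 for files in scr_to_files.values() if len(files) > 1)
    return multi / len(scr_to_files)

if __name__ == "__main__":
    example = {
        "SCR-101": {"nav/guidance.c"},
        "SCR-102": {"comm/uplink.c", "comm/uplink.h"},
        "SCR-103": {"data/tables.c", "data/loader.c", "data/loader.h"},
    }
    print(f"{multi_file_fraction(example):.0%} of failures map to faults "
          f"in more than one file")
```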
Current capability: Identifying the most common fault types
• Most common fault types
  • Requirements faults: 33%
  • Coding faults: 33%
  • Data problems: 14%
• Early life cycle activities (requirements & design): 38.25% of faults
• Late life cycle activities (coding, interface & integration): 48.57% of faults
• This finding contradicts the common belief that the majority of faults are introduced during early life cycle activities [Boehm et al. 1975, Endres 1975, Basili et al. 1984]
Current capability: Distribution of fault types
[Figure: Distribution of fault types, development & testing vs. on-orbit. Only 3% of SCRs are on-orbit. Note the logarithmic scale of the Y axes.]
Current capability: Safety critical SCRs across fault types
• Only around 8% of all SCRs are safety critical.
• The highest percentage of safety critical SCRs comes from coding faults (3.60%), followed by requirements faults, design faults & data problems (3.35% total) (see the sketch below).
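A breakdown like the one above can be obtained by cross-tabulating the safety-critical flag against fault type and expressing the counts as percentages of all SCRs. The sketch below is illustrative only; the record fields and sample values are assumptions, not study data.

```python
# Minimal sketch: percentage of all SCRs that are safety critical, per fault type.
# Record fields and the sample records are hypothetical placeholders.
from collections import Counter

def safety_critical_by_fault_type(records: list[dict]) -> dict[str, float]:
    """Percentage of all SCRs that are safety critical, broken down by fault type."""
    total = len(records)
    counts = Counter(r["fault_type"] for r in records if r["safety_critical"])
    return {ft: 100.0 * n / total for ft, n in counts.items()}

if __name__ == "__main__":
    sample = [
        {"fault_type": "coding fault", "safety_critical": True},
        {"fault_type": "coding fault", "safety_critical": False},
        {"fault_type": "requirements fault", "safety_critical": True},
        {"fault_type": "data problem", "safety_critical": False},
    ]
    for fault_type, pct in safety_critical_by_fault_type(sample).items():
        print(f"{fault_type}: {pct:.2f}% of all SCRs")
```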
Current capability: Lessons Learned / Recommendations
• Based on our results and the feedback from the IV&V team and project team, we compiled a document for internal use which summarizes the Lessons Learned & Recommendations for product and process improvement
• Prevent the introduction of faults and improve the effectiveness of detection
  • Example: Increase effort spent on the design and implementation of the data repository used to share data between CSCIs
• Improve the quality of the data & change tracking process
  • Example: Ensure that the changes to the software artifacts (e.g., requirements, code, etc.) made to fix a problem are recorded and can be easily associated with a specific SCR
Planned capability [FY08-10]
• Classify faults and failures using several additional attributes
• Conduct more complex, multivariable analysis
• Continually update the Lessons Learned & Recommendations for Improvement document
• Explore the best ways to prevent and eliminate the most common faults and failures throughout the life cycle; compile the results in a checklist
• Increase awareness of our work so other projects within NASA can benefit from it
Relevance to NASA
• Our research is based on a large NASA mission as a pilot study
• The research work is done in active collaboration with the IV&V team and the project team
• To the best of our knowledge, this is the largest dataset considered so far in the published literature
• Quantitative and qualitative results are regularly delivered and discussed with the project
• Based on the internal & external validity of our results, we believe that the observed trends are intrinsic characteristics of large, complex software systems
• Other similar projects across the Agency may benefit
  • Lessons Learned & Recommendations can be used proactively by newer projects
Acknowledgements We thank the following NASA civil servants and contractors for their valuable support! • Jill Broadwater • Pete Cerna • Susan Creasy • Randolph Copeland • James Dalton • Bryan Fritch • Nick Guerra • John Hinkle • Lynda Kelsoe • Debbie Miele • Lisa Montgomery • Don Ohi • Chad Pokryzwa • David Pruett • Timothy Plew • Scott Radabaugh