
The Effort Distribution Pattern Analysis of Three Types of Software Quality Assurance Activities and Process Implication

The Effort Distribution Pattern Analysis of Three Types of Software Quality Assurance Activities and Process Implication: an Empirical Study. Qi Li, University of Southern California-CSSE; Ye Yang, Fengdi Shu, Institute of Software, Chinese Academy of Sciences. COCOMO Forum, MIT, Nov. 4th, 2009.


Presentation Transcript


  1. The Effort Distribution Pattern Analysis of Three Types of Software Quality Assurance Activities and Process Implication: an Empirical Study Qi Li University of Southern California-CSSE Ye Yang, Fengdi Shu Institute of Software, Chinese Academy of Sciences COCOMO Forum, MIT, Nov. 4th, 2009

  2. Background • Three main Quality Assurance Activities: • Review • Process Audit • Testing • They are complementary and cannot replace one another • Early reviewing saves time in later fixing • Reviewing alone cannot tell much about the system's reliability and performance • Testing serves as hands-on experience of the actual, operational system • Process Audit: although it does not directly influence the software product, a right process leads to a right product

  3. Related Work • COQUALMO: • The efficiencies of review, testing, and automated analysis are compared by a two-round Delphi • People review is the most efficient at removing requirement and design defects • Testing is the most efficient at removing code defects • Capers Jones ("Applied Software Measurement"): • lists the Defect Removal Efficiency of 16 combinations of 4 defect removal methods: design inspections, code inspections, quality assurance (corresponds to process audit in our study), and testing • no single defect removal method is adequate • removal efficiency from best to worst would be design inspections, code inspections, testing, and process audit

  4. Related Work (Cont.) • Both of them are based on Delphi estimation: • These results generally come from expert Delphi estimation and lack quantitative verification • Both of them are based on the assumption: • that all these defect removal strategies have been done sufficiently, but in practice they might be done insufficiently for many reasons • From an economic perspective, the ROI might be negative if they have been done sufficiently • They give no clue about effort allocation guidelines for QA activities • They don't address how much reviewing, process auditing, and testing is enough from a value-based perspective

  5. Motivation • How do we know which type of QA activity is insufficient while others might be overdone? • A "Diagnostic" Model • How to balance the effort allocation among them? • To improve the process and resource allocation efficiency • Maximize the ROI of the entire quality assurance effort • This is ongoing work, and we welcome any comments, discussion, and cooperation in the future

  6. Case Study Background • Organization: • An R&D organization at ISCAS in China • Appraised and rated at CMMI maturity level 4 in 2005 • A research group focused on software process improvement • Project: • SoftPM: a tool used to manage software projects; it has been deployed to many software organizations in China • Iterative development method and CMMI process management • Every version shares the same quality goal: • Test case coverage rate is 100% • All planned test cases executed and passed • No non-trivial defects detected during at least one day of continuous testing • Satisfy the quality goal of 0.2 defects/KLOC when released
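The released-quality goal above reduces to a defect-density check. A minimal sketch, assuming a hypothetical helper (the function name and figures are ours, for illustration only, not from the project):

    def meets_release_goal(post_release_defects: int, size_kloc: float,
                           threshold: float = 0.2) -> bool:
        """Quality goal: at most `threshold` defects per KLOC at release."""
        return (post_release_defects / size_kloc) <= threshold

    print(meets_release_goal(post_release_defects=9, size_kloc=50.0))   # True  (0.18 defects/KLOC)
    print(meets_release_goal(post_release_defects=15, size_kloc=50.0))  # False (0.30 defects/KLOC)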

  7. Empirical Data Analysis Procedure • Data Collection: • Extract raw data from related plans, records, and reports • Consolidate the original information • Resolve inconsistencies • Data Analysis: Goal/Question/Metric (GQM) Method • Goal: Improve the ROI of QA activities • Question 1: How to identify which QA activities might be insufficient while others might be overdone? (Q1) • Metric 1: Regression Diagnostic Model (M1) • Question 2: How to improve reviewing? (Q2) • Metric 2: Filter Effectiveness (M2) • Question 3: How to improve process audit? (Q3) • Metric 3: NCs Distribution (M3) • Question 4: How to improve testing? (Q4) • Metric 4: Value-based Software Testing: Feature Prioritization (M4)

  8. Effort Distribution of Three QA Activities • Effort and its percentage for each QA activity: • E_Re: Review effort • E_PA: Process Audit effort • E_Test: Testing effort • E_Total: Total effort for the project • Correlation between effort percentages of the three QA activities

  9. Effort Distribution of Three QA Activities • Regression Analysis: • M1: E_Test% = 0.73 - 2.69*E_Re% - 15.73*E_PA%
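As a rough illustration of how a model of M1's form could be obtained, the sketch below fits E_Test% = b0 + b1*E_Re% + b2*E_PA% by ordinary least squares. The per-version numbers are hypothetical placeholders, not the project's actual data, so the fitted coefficients will not match the slide exactly.

    import numpy as np

    # Hypothetical per-version effort percentages (E_Re%, E_PA%, E_Test%);
    # the project's real data is not reproduced here.
    e_re   = np.array([0.05, 0.07, 0.04, 0.06])
    e_pa   = np.array([0.010, 0.015, 0.008, 0.012])
    e_test = np.array([0.46, 0.31, 0.56, 0.38])

    # Design matrix with an intercept column: E_Test% = b0 + b1*E_Re% + b2*E_PA%
    X = np.column_stack([np.ones_like(e_re), e_re, e_pa])
    (b0, b1, b2), *_ = np.linalg.lstsq(X, e_test, rcond=None)
    print(f"E_Test% = {b0:.2f} + ({b1:.2f})*E_Re% + ({b2:.2f})*E_PA%")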

  10. Effort Distribution of Three QA Activities M1: E_Test% = 0.73 - 2.69*E_Re% - 15.73*E_PA% • How to explain the regression model? • Coefficients: -2.69, -15.73 • Increasing review effort by 1% => saves 2.69% of testing effort • Increasing process audit effort by 1% => saves 15.73% of testing effort • Constant: 0.73 • If E_Re% and E_PA% are both equal to 0, meaning no reviewing and no process audit activities • Testing would take up as much as 73% of the total effort

  11. Effort Distribution of Three QA Activities M1: E_Test% = 0.73 - 2.69*E_Re% - 15.73*E_PA% • What does the regression model tell us? • Review and process audit might not have been done sufficiently • ROI of putting more effort into review: (2.69% - 1%) / 1% = 1.69 • ROI of putting more effort into process audit: (15.73% - 1%) / 1% = 14.73 • Process Implication • Putting more effort into review, and especially into process audit, would be more cost-effective • Diagnostic Model (M1) • This regression model can serve as a diagnostic model to identify which QA activities have been done adequately while others might still be insufficient (Q1)
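The ROI figures above follow a simple (benefit - cost) / cost rule, where the benefit of spending an extra 1% of effort on an activity is the testing effort saved according to the model's coefficient. A minimal sketch of that arithmetic (the helper name qa_roi is ours, not from the slides):

    def qa_roi(saving_coefficient: float, invested: float = 0.01) -> float:
        """ROI = (benefit - cost) / cost, where investing an extra `invested`
        share of effort saves |coefficient| * invested of testing effort."""
        benefit = abs(saving_coefficient) * invested
        return (benefit - invested) / invested

    print(round(qa_roi(-2.69), 2))   # review:        1.69
    print(round(qa_roi(-15.73), 2))  # process audit: 14.73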

  12. Q2: How to Improve Reviewing? • M2: Filter Effectiveness = defects found in this phase / (defects found in this phase + defects introduced in this phase and found by testing) • Filter Effectiveness of Each Phase by Reviewing • Process Implication: • Filter Effectiveness decreases • Especially in the coding phase, as low as 0.01 in V3.0 • This implies that the development team relies heavily on the testing group • More review effort should be put into code walkthroughs and unit testing
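A small sketch of the M2 computation; the per-phase defect counts below are invented purely for illustration and are not the project's figures:

    def filter_effectiveness(found_by_review: int, escaped_to_testing: int) -> float:
        """M2: defects found by review in a phase, divided by all defects
        introduced in that phase (found by review + later found by testing)."""
        total = found_by_review + escaped_to_testing
        return found_by_review / total if total else 0.0

    # Hypothetical counts for one version, by phase: (found by review, escaped to testing)
    phases = {"requirement": (18, 4), "design": (12, 9), "coding": (1, 95)}
    for phase, (reviewed, escaped) in phases.items():
        print(f"{phase}: {filter_effectiveness(reviewed, escaped):.2f}")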

  13. Q3: How to Improve Process Audit? • M3: NCs (Non-Compliances) Distribution • Process Implication: • PMC (Project Monitoring & Control) and PP (Project Planning) account for most of the NCs • The main problem for PMC is that developers often neglect to submit their periodic reports on time • The most common problem for PP is that planned and actual effort always show a huge difference • Put more auditing effort into PMC and PP
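A tiny sketch of how an NC distribution like M3 could be tallied from audit records; the process-area tags and counts below are invented for illustration:

    from collections import Counter

    # Hypothetical audit findings, each tagged with its CMMI process area.
    nc_records = ["PMC", "PP", "PMC", "CM", "PMC", "PP", "REQM", "PMC", "PP"]

    distribution = Counter(nc_records)
    total = sum(distribution.values())
    for area, count in distribution.most_common():
        print(f"{area}: {count} NCs ({count / total:.0%})")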

  14. Testing Complements Review • Functional defects account for the largest share => insufficient effort on early review, especially insufficient code walkthroughs • Defects related to user interface and performance are hard to detect by reviewing artifacts but can be detected more easily by testing • The testing activity serves to complement the review activity • Q4: How to improve testing? • M4: Value-based Software Testing: Feature Prioritization (M4) • Qi Li et al. "Bridge the Gap between Software Test Process and Business Value: A Case Study". ICSP 2009: 212-223

  15. Other Findings: Cost of Finding and Removing Defects in Each Phase • Hrs/Def_Req (Des): hours spent per defect found and removed in the requirement (design) phase by reviewing • Hrs/Def_Test: hours spent per defect found and removed by testing • Process Implication: • Putting more effort into early review and defect finding is more cost-effective • This further empirically validates Dr. Boehm's summary of surveys at TRW, IBM, and GTE
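The per-phase cost metric reduces to hours divided by defect count. A minimal sketch with invented numbers (not the project's data), just to make the comparison concrete:

    def cost_per_defect(hours_spent: float, defects: int) -> float:
        """Hrs/Def: hours spent finding and removing defects, per defect."""
        return hours_spent / defects if defects else float("inf")

    # Hypothetical figures illustrating the usual trend: defects caught by
    # early review are cheaper to remove than defects that escape to testing.
    print(cost_per_defect(40, 20))   # requirement review: 2.0 hrs/defect
    print(cost_per_defect(60, 24))   # design review:      2.5 hrs/defect
    print(cost_per_defect(400, 50))  # testing:            8.0 hrs/defect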

  16. Discussion of Validity and Future Work • The linear regression diagnostic model might not truly reflect the utility function of QA activities • Apply the indifference curve concept from microeconomics to build a model for QA effort allocation • Need more data to build and validate the model • Utility Function of QA activities

  17. Conclusion • This analysis procedure, based on the G/Q/M methodology, helps quality managers (under a specified organizational context): • identify which type of QA activity is insufficient while others might be overdone • identify and improve the weakest part of the QA activities • balance the effort allocation and planning for future projects • The data analysis process is simple and practical • It provides process improvement implications and suggestions, and finally improves the effectiveness of the whole process under the specified organizational context

  18. Thank you Questions or Comments?

  19. Data Source (Cont.) Defects Found in Review Activities • Def_Re_Req: defects found in the requirement phase by reviewing • Def_Re_Des: defects found in the design phase by reviewing • Def_Re_Cod: defects found in the coding phase by reviewing • Def_Re_Total: total defects found by reviewing

  20. Data Source (Cont.) Defects Found in Testing Activities • Def_Intro_Req: defects introduced in the requirement phase and found by testing • Def_Intro_Des: defects introduced in the design phase and found by testing • Def_Intro_Cod: defects introduced in the coding phase and found by testing • Def_Test_Total: total defects found by testing
