Erik Arisholm, Simula et al.: Data capture/measurement. EVISOFT researcher gathering, Kongsvoll, 18-19 April 2007. - What data can we capture automatically, - How do we do it, and - What can we use it for? Using fault-proneness prediction models to focus testing in COS.
Using fault-proneness prediction models to focus testing in COS
Erik Arisholm, Lionel Briand, Valery Buzungu, Magnus Fuglerud, Andreas Gjersøe, Eivind Berg Johannessen
Using fault-proneness prediction models to focus testing in COS • Our hypothesis: Differentiating the test coverage goals among classes according to each class's fault proneness can greatly increase testing productivity • Costs: The time required to create test cases, add them to the test suite, run them, check their results and correct any defects • Benefits: Finding more defects early, so fewer defects slip through to later phases where they might be more costly to detect and fix
Process (elements of the slide diagram): • MKS: Collect release change and fault correction data • Update fault prediction models (corporate learning) • Package in tool (Eclipse/tree maps) • Deployment & training • Feedback: fault-prone components • Focused V&V • Perform changes + record changes • Release
Fault-proneness factors included in the prediction models • the structural characteristics of classes (e.g., their coupling) and changes in such structural characteristics since the previous release • the amount of change (requirements or fault corrections) undertaken on the class to obtain the current release • experience of the individual performing the changes • other, unknown factors that are captured by the change and fault history of classes in previous releases
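The factors above amount to a per-class feature vector. A minimal sketch of how one might flatten them for a model; all field names here are illustrative assumptions, not the actual EVISOFT/MKS schema:

```python
# Sketch: assemble a per-class feature vector from the factor groups above.
# Field names are hypothetical, not the real data-collection schema.

def class_features(cls):
    """Flatten one class record into a numeric feature vector."""
    return [
        cls["coupling"],               # structural characteristic
        cls["coupling_delta"],         # change in structure since previous release
        cls["n_requirement_changes"],  # amount of change for the current release
        cls["n_fault_corrections"],
        cls["developer_experience"],   # e.g., past CRs of the developers involved
        cls["faults_prev_releases"],   # fault history, proxy for unknown factors
    ]

example = {"coupling": 12, "coupling_delta": 3, "n_requirement_changes": 2,
           "n_fault_corrections": 1, "developer_experience": 40,
           "faults_prev_releases": 2}
print(class_features(example))  # -> [12, 3, 2, 1, 40, 2]
```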
Status • Automated scripts for data collection from MKS and JHawk • Collected data for COS12-COS21 • Evaluated the classification accuracy and cost-effectiveness of alternative prediction models • February-April 2007: Evaluated the practical costs and benefits of using the fault predictions to focus testing in COS 22
Goals (phase 1) • Build a class level fault-proneness prediction model for COS and assess its • Classification accuracy • Potential cost-effectiveness when applying the model to focus verification on future releases.
Fault proneness in an evolving system • The probability that a class will undergo one or more fault corrections in the next release of the system
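Under this definition the training label is binary. A tiny labeling sketch, where the mapping from class name to fault-correction count is an assumed data structure:

```python
# Sketch: a class is labeled fault-prone (1) if it undergoes one or more
# fault corrections in the next release; the dict below is hypothetical data.

def fault_prone_label(class_name, next_release_fault_corrections):
    """Return 1 if the class has >= 1 fault correction in the next release, else 0."""
    return 1 if next_release_fault_corrections.get(class_name, 0) >= 1 else 0

corrections_next = {"OrderHandler": 2, "Invoice": 0}
print(fault_prone_label("OrderHandler", corrections_next))  # 1
print(fault_prone_label("Invoice", corrections_next))       # 0
print(fault_prone_label("Unknown", corrections_next))       # 0
```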
Class fault-proneness prediction model
Inputs:
• For each type of CR involving this class: number of CRs, lines of code added and deleted in this class, number of CPs involving this class, total number of files changed in CRs, total number of tests failed in CRs, total number of developers involved in CRs, total number of past CRs of the developers
• Class complexity: class size, coupling, cohesion, …
• Class history: for each type of CR involving this class for the past three releases, number of CRs (n-1, n-2, n-3)
Model: Neural Network (NN), C4.5, Ripper, SVM, logistic regression, etc.
Output: a predicted fault probability between 0.0 and 1.0; a threshold separates "fault in class" from "no fault in class"
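One of the candidate model families, logistic regression with a classification threshold, can be sketched in a few lines. The weights below are made up for illustration; a real model would be fit on the COS change and fault data:

```python
import math

def fault_probability(features, weights, bias):
    """Logistic model: probability in [0.0, 1.0] that the class is faulty."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def classify(p, threshold=0.5):
    """Apply the 'fault in class' threshold to a predicted probability."""
    return "fault in class" if p >= threshold else "no fault in class"

# Hypothetical weights over three features (e.g., coupling, CRs, tests failed).
weights, bias = [0.08, 0.5, 0.3], -2.0
p = fault_probability([12, 2, 1], weights, bias)
print(round(p, 3), classify(p))
```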
Model building and evaluation
• COS12-COS19 (66.6%) = training data set (build a model)
• COS20 (33.3%) = test/evaluation data set (evaluate classification accuracy and cost-effectiveness)
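The release-based split can be sketched as a simple filter over per-class observations; the record layout and synthetic rows below are assumptions for illustration:

```python
# Sketch: split observations by release, as in the evaluation setup
# (COS12-COS19 for training, COS20 held out). Rows here are synthetic.
observations = [
    {"release": r, "class": f"C{i}"}
    for r in range(12, 21)  # COS12 .. COS20
    for i in range(3)       # three example classes per release
]

train = [o for o in observations if 12 <= o["release"] <= 19]
test = [o for o in observations if o["release"] == 20]

print(len(train), len(test))  # -> 24 3
```

Splitting by whole releases, rather than shuffling rows, matches how the model would be used in practice: trained on past releases, applied to the next one.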
Evaluated data mining techniques • Logistic regression • Neural Networks • Decision Tree (JMP.Partition, C4.5) • Inductive Rule Learning (Ripper) • Support Vector Machines (SVM) • Meta-learners (Boosting, Bagging, Decorate)
Goals (phase 2) • i) The following unit-testing strategy will be applied in COS 22.0: • Just before the normal system test phase starts, and after all functionality in the release has been implemented, the developers will write/improve unit tests for an additional two working days • Selection of classes: the top X% most fault-prone classes, for example sorted by density (fault probability/size) • ii) Evaluate the • Costs: time to create test cases, add them to the test suite, run them, check their results and fix any faults found • Benefits: less costly subsequent testing, because more faults can be found in early testing phases (also in future releases, since the test suites are reused); fewer faults slipping through to the production system (reducing the need for "bugfix" releases)
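Selecting the top X% most fault-prone classes by density can be sketched as a sort on fault probability per line of code; the class names and numbers below are hypothetical:

```python
# Sketch: rank classes by fault density (fault probability / size) and
# keep the top fraction for focused unit testing. Data is hypothetical.

def top_by_density(classes, fraction):
    """Return the top `fraction` of classes, ranked by fault probability per LOC."""
    ranked = sorted(classes, key=lambda c: c["p_fault"] / c["loc"], reverse=True)
    k = max(1, round(len(ranked) * fraction))
    return ranked[:k]

classes = [
    {"name": "OrderHandler", "p_fault": 0.8, "loc": 400},  # density 0.0020
    {"name": "Invoice",      "p_fault": 0.6, "loc": 150},  # density 0.0040
    {"name": "AuditLog",     "p_fault": 0.3, "loc": 900},  # density ~0.0003
    {"name": "RateTable",    "p_fault": 0.5, "loc": 100},  # density 0.0050
]
print([c["name"] for c in top_by_density(classes, 0.5)])  # -> ['RateTable', 'Invoice']
```

Ranking by density rather than raw probability favors small, risky classes, where the two extra testing days buy the most coverage per hour.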
Unit testing guidelines • For fault-prone classes during unit testing: • Use whatever testing practice is already in place to generate an initial test suite • Use a coverage analysis tool (Clover) to identify the statements, branches and loops that remain uncovered after executing the test suite • These uncovered statements fall into three categories: • Unreachable (e.g., dead code): no further action is required, except perhaps removing it (and testing the resulting, smaller code) • Changed functionality: the uncovered code corresponds to new or changed functionality and should be entirely covered by the test suite • Unchanged functionality: the uncovered code should not be affected by the current release changes. However, one should be very careful that this really is the case, as it is not always easy to determine the impact of changes. If there is any doubt that a change could have an impact, it is better to be conservative and ensure that all statements are covered.
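The triage of uncovered code above can be sketched as a small lookup; the category tags and region records are assumptions about how one might annotate coverage results, not Clover's actual output format:

```python
# Sketch: map each uncovered region to the follow-up action from the
# guidelines. Category tags and the region records are hypothetical.

def action_for(uncovered):
    """Return the guideline action for one uncovered code region."""
    actions = {
        "unreachable": "no action (consider removing the dead code and retesting)",
        "changed": "write tests until fully covered",
        "unchanged": "verify the release changes cannot affect it; if in doubt, cover it",
    }
    return actions[uncovered["category"]]

regions = [
    {"lines": "Foo.java:10-14", "category": "changed"},
    {"lines": "Foo.java:40-41", "category": "unreachable"},
]
for r in regions:
    print(r["lines"], "->", action_for(r))
```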