Will Fault Localization Work For These Failures? An Automated Approach to Predict Effectiveness of Fault Localization Tools
Tien-Duy B. Le and David Lo
School of Information Systems, Singapore Management University
29th IEEE International Conference on Software Maintenance
Fault Localization Tool: A Primer
• Developer: "My program failed." The failing program is handed to the fault localization tool.
• Tool (after running the program): "I have calculated the most suspicious locations of the bug," returned as a ranked list (1, 2, 3, 4, …).
• Developer: "OK! I will check your suggestion," and starts debugging from the top of the list.
Will Fault Localization Tools Really Work?
• In the ideal case:
  • Faulty statements are within the first few suspicious statements, e.g., the top 10, 20, or 30.
  • The developer walks down the ranked list, quickly finds the bug, and the tool is effective.
Will Fault Localization Tools Really Work?
• In the worst case:
  • Faulty statements cannot be found early in the ranked list of statements.
  • Inspecting the list is time consuming: the developer keeps debugging "forever", and the tool is not effective.
Will Fault Localization Tools Really Work?
• We build an oracle that predicts whether the output of a fault localization tool on a given failure (a fault localization instance) can be trusted or not.
• If the output is not trusted:
  • Developers do not have to spend time on it.
  • Developers can revert to manual debugging.
Overall Framework: Training Stage
• Program spectra are fed to the fault localization tool, which produces suspiciousness scores.
• (1) Feature extraction: spectra and suspiciousness scores are turned into feature vectors.
• (2) Model learning: feature vectors plus effectiveness labels are used to train a prediction model.
Overall Framework
• Major components:
  • Feature extraction: 50 features in 6 categories.
  • Model learning: we extend Support Vector Machines (SVM) to handle imbalanced training data.
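To make the training stage concrete, here is a minimal, runnable sketch. The feature matrix is synthetic and scikit-learn's SVC merely stands in for the learner; the actual 50 features and the SVMExt extension are described on the following slides.

```python
# Minimal sketch of the training stage (synthetic data, stand-in learner).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# One row per fault localization instance: 50 features extracted from the
# program spectra and the suspiciousness scores (mocked here with random numbers).
X = rng.normal(size=(200, 50))
# Effectiveness labels: 1 = effective, 0 = ineffective (also mocked).
y = (rng.random(200) < 0.4).astype(int)

model = SVC(kernel="linear")  # SVMExt additionally rebalances the training data (next slides)
model.fit(X, y)
```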
Model Learning
• We extend an off-the-shelf Support Vector Machine.
• The training data are imbalanced: #ineffective instances > #effective instances.
• The result is the extended Support Vector Machine (SVMExt).
[Figure: effective and ineffective instances separated by the maximum-margin hyperplane.]
SVMExt
• For each effective instance, we calculate its similarity to the ineffective instances.
• Each instance is represented by a feature vector.
• Similarity is measured with cosine similarity (see below).
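The formula itself did not survive extraction; for reference, the standard cosine similarity between two feature vectors u and v is:

$$\mathrm{sim}(u, v) = \frac{u \cdot v}{\lVert u \rVert\,\lVert v \rVert} = \frac{\sum_{i} u_i v_i}{\sqrt{\sum_{i} u_i^{2}}\,\sqrt{\sum_{i} v_i^{2}}}$$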
SVMExt
• Sort the effective instances by their highest similarity to any ineffective instance, in descending order.
• Duplicate effective instances from the top of this list until the training data are balanced (a sketch of this step follows).
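A minimal sketch of the rebalancing step, assuming each instance is a plain NumPy feature vector; it illustrates the idea on this slide rather than the authors' exact implementation. After rebalancing, a standard SVM can be trained on the balanced data.

```python
# Oversample effective instances (the minority class) guided by their cosine
# similarity to ineffective instances. Illustrative sketch only.
import numpy as np

def rebalance(effective: np.ndarray, ineffective: np.ndarray):
    """Duplicate the effective instances most similar to ineffective ones
    until both classes contain the same number of instances."""
    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Highest similarity of each effective instance to any ineffective instance.
    top_sim = np.array([max(cosine(e, i) for i in ineffective) for e in effective])
    order = np.argsort(-top_sim)  # effective instances, most similar first

    balanced = list(effective)
    k = 0
    while len(balanced) < len(ineffective):  # duplicate from the top of the sorted list
        balanced.append(effective[order[k % len(effective)]])
        k += 1
    return np.array(balanced), ineffective

# Toy usage: two effective and three ineffective instances become three of each.
eff = np.array([[1.0, 0.0], [0.0, 1.0]])
ineff = np.array([[1.0, 0.1], [0.9, 0.2], [0.5, 0.5]])
bal_eff, bal_ineff = rebalance(eff, ineff)
print(len(bal_eff), len(bal_ineff))  # 3 3
```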
Overall Framework: Deployment Stage
• The program spectra of a new failure are fed to the fault localization tool, which produces suspiciousness scores.
• (1) Feature extraction is applied exactly as in training.
• (3) Effectiveness prediction: the learned model predicts whether the fault localization instance will be effective.
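Continuing the training-stage sketch above (names and data are still illustrative), deployment reduces to extracting the same 50 features for a new fault localization instance and querying the learned model:

```python
x_new = rng.normal(size=(1, 50))  # features of a new fault localization instance (mocked)
print(model.predict(x_new))       # 1 = predicted effective, 0 = predicted ineffective
```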
Experiments
• We use 10-fold cross validation.
• We compute precision, recall, and F-measure.
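For reference, the standard definitions of these metrics, with effective instances treated as the positive class (the convention that matches the RQ1 numbers later in the deck):

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad F\text{-measure} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$

where TP, FP, and FN count effective instances predicted effective, ineffective instances predicted effective, and effective instances predicted ineffective, respectively.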
Effectiveness Labeling
• A fault localization instance is deemed effective if the root cause is among the top-10 most suspicious program elements.
• If a root cause spans more than one program element, it is enough for one of them to be in the top 10.
• Otherwise, the instance is ineffective.
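A small illustrative encoding of this rule; the representation of an instance (its ranked program elements plus the set of root-cause elements) is assumed, not taken from the paper:

```python
def is_effective(ranked_elements, root_cause_elements, top_k=10):
    """Effective iff at least one root-cause element is in the top-k of the ranked list."""
    top = set(ranked_elements[:top_k])
    return any(elem in top for elem in root_cause_elements)

# Example: the root cause "s17" is ranked 3rd, so the instance is labeled effective.
print(is_effective(["s42", "s8", "s17", "s3"], {"s17"}))  # True
```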
Dataset
• 10 different programs:
  • NanoXML, XML-Security, and Space
  • 7 programs from the Siemens test suite
• In total, 200 faulty versions.
• For Tarantula, among the 200 instances:
  • 85 are effective
  • 115 are ineffective
Research Question 1
• How effective is our approach in predicting the effectiveness of a state-of-the-art spectrum-based fault localization tool?
• Experimental setting:
  • Tarantula
  • Extended SVM (SVMExt)
Research Question 1
• Precision of 54.36%: 47 of the 115 ineffective fault localization instances are correctly identified.
• Recall of 95.29%: 81 of the 85 effective fault localization instances are correctly identified.
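Treating effective instances as the positive class, the two percentages are consistent with the counts on this slide: the 115 − 47 = 68 misclassified ineffective instances are all predicted effective, so

$$\mathrm{Recall} = \frac{81}{85} \approx 95.29\%, \qquad \mathrm{Precision} = \frac{81}{81 + (115 - 47)} = \frac{81}{149} \approx 54.36\%$$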
Research Question 2
• How effective is our extended Support Vector Machine (SVMExt) compared with an off-the-shelf Support Vector Machine (SVM)?
• Experimental setting:
  • Tarantula
  • Extended SVM (SVMExt) versus off-the-shelf SVM
Research Question 2
• Result: SVMExt outperforms the off-the-shelf SVM.
Research Question 3
• What are the most important features?
• The Fisher score is used to measure how dominant and discriminative a feature is.
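The slide does not spell out the formula; one standard formulation of the Fisher score of a feature j over a two-class dataset (a plausible reading of what is used here) is:

$$F(j) = \frac{\left(\bar{x}_j^{(+)} - \bar{x}_j\right)^2 + \left(\bar{x}_j^{(-)} - \bar{x}_j\right)^2}{\frac{1}{n_{+}-1}\sum_{k=1}^{n_{+}}\left(x_{k,j}^{(+)} - \bar{x}_j^{(+)}\right)^2 + \frac{1}{n_{-}-1}\sum_{k=1}^{n_{-}}\left(x_{k,j}^{(-)} - \bar{x}_j^{(-)}\right)^2}$$

where $\bar{x}_j$ is the mean of feature j over all instances, $\bar{x}_j^{(+)}$ and $\bar{x}_j^{(-)}$ are its means over the effective and ineffective instances, and $n_{+}$, $n_{-}$ are the class sizes; a larger score indicates a more discriminative feature.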
Top-10 Most Discriminative Features
• In rank order: RD7, RD8, RD6, PE1, PE2, SS1, RD5, RD1, PE4, R1.
Most Important Features
• Relative difference features: RD7 (rank 1), RD8 (rank 2), RD6 (rank 3), RD5 (rank 7), and RD1 (rank 8).
Most Important Features
• Program element features: PE1 (rank 4), PE2 (rank 5), and PE4 (rank 9).
Most Important Features
• Simple statistics: SS1 (rank 6), the number of distinct suspiciousness scores among {R1, …, R10}.
• Raw scores: R1 (rank 10), the highest suspiciousness score.
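As a tiny illustration of these last two features (the score values are made up):

```python
# Top-10 suspiciousness scores R1..R10 of one fault localization instance.
top10_scores = [0.95, 0.95, 0.80, 0.80, 0.80, 0.61, 0.50, 0.50, 0.33, 0.10]

R1 = top10_scores[0]           # raw-score feature: the highest suspiciousness score
SS1 = len(set(top10_scores))   # simple statistic: number of distinct scores among R1..R10

print(R1, SS1)  # 0.95 6
```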
Research Question 4
• Can our approach predict the effectiveness of different types of spectrum-based fault localization tools?
• Experimental setting:
  • Tarantula, Ochiai, and Information Gain
  • Extended SVM (SVMExt)
Research Question 4
• The F-measure for Ochiai and for Information Gain is greater than 75%.
• Our approach predicts the effectiveness of Ochiai and Information Gain even better than it does for Tarantula.
Research Question 5
• How sensitive is our approach to the amount of training data?
• Experimental setting:
  • Vary the amount of training data from 10% to 90%.
  • Training instances are selected by random sampling.
Conclusion
• We build an oracle to predict the effectiveness of fault localization tools.
• We propose 50 features capturing interesting dimensions of program traces and suspiciousness scores.
• We propose an extended Support Vector Machine (SVMExt).
• Experiments:
  • Good F-measure: 69.23% for Tarantula.
  • SVMExt outperforms the off-the-shelf SVM.
  • Relative difference features are the best features.
Future Work
• Improve the F-measure further.
• Extend the approach to work for other fault localization techniques.
• Extract more features from source code and textual descriptions, e.g., bug reports.
Thank you! Questions? Comments? Advice? {btdle.2012, davidlo}@smu.edu.sg