This evaluation examines the efficiency of DART's directed search compared with purely random search, using an AC-controller program and an implementation of the Needham-Schroeder protocol. The results show that DART can detect defects within seconds or minutes, whereas purely random search can run for hours without finding them. The study also assesses DART's effectiveness on a larger program, the open-source oSIP library, where it exposed security vulnerabilities and program crashes. The closing slides place the work in the context of prior research on test-vector generation, symbolic execution, and dynamic test generation.
Evaluation of DART DART: Directed Automated Random Testing Godefroid, Klarlund, Sen
Experimental Goals • Efficiency of DART • directed search approach vs purely random search • AC-controller program • Needham-Schroeder Protocol • Effectiveness with a large program • Open-source oSIP library, 30K LOC of C code
Efficiency Experiment • AC-Controller Program: • DART: • Explores all execution paths up to depth=1 in 6 iterations and less than 1 second • Depth=2: finds an assertion violation in 7 iterations, <1 sec • Random: • Does not find the assertion violation even after hours of search • Probability that a random input triggers the assertion = 2^-64 • Gets stuck in input-filtering code
Another Efficiency Point • Needham-Schroeder security protocol implementation • 406 lines of C code • DART: took < 26 minutes on a 2 GHz machine to detect the known man-in-the-middle attack • VeriSoft (a model checker): hours to detect the same attack
Effectiveness with a Large App • oSIP (open-source SIP library): 30K LOC of C, 600 externally visible functions • DART: • Found a way to crash 65% of the oSIP functions within 1,000 attempts per function • Most crashes were caused by dereferencing a NULL pointer passed as a function argument
Putting this Work into Context • Colby, Godefroid, Jagadeesan 1998: automatically "close" a program to make it self-executable and systematically explore all its behaviors • The closed program is a simplified version of the original • Considerable prior work on test-vector generation via symbolic execution • relies on imprecise static analysis • Dynamic test generation • earlier approaches only generate inputs for specific paths • and do not handle function calls or library functions