Code Based Test Case Generation
Nadia Alshahwan, Gordon Fraser, Yue Jia, Kiran Lakhotia, David Schuler, Paolo Tonella
Center for Information Technology - IRST
Premise
Different test case generators experience different problems, so we have to distinguish among different classes of generators…
• search-based generators (SBG); the branch-distance idea behind them is sketched after this list
• dynamic symbolic generators (DSG)
• explicit state model checkers (ESMC)
• hybrid/integrated generators (HYB)
• random generators (RND)
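To make the SBG entry concrete, here is a minimal, hedged sketch of the branch-distance fitness idea that search-based generators typically optimize. The method under test and its condition (x == 42) are hypothetical, not taken from the slides.

    // Minimal sketch of branch-distance fitness for search-based
    // generation (SBG). Target branch in a hypothetical method:
    //   if (x == 42) { ... }   <- we want to cover the true branch
    public class BranchDistanceSketch {

        // Standard distance for an equality predicate: |x - 42|,
        // which is zero exactly when the branch is taken.
        static double branchDistance(int x) {
            return Math.abs(x - 42);
        }

        public static void main(String[] args) {
            java.util.Random rnd = new java.util.Random(0);
            int candidate = rnd.nextInt(1000);  // random starting input
            // Trivial hill climbing on the distance; a real SBG would
            // evolve whole test cases with a genetic algorithm.
            while (branchDistance(candidate) > 0) {
                int neighbour = candidate + (rnd.nextBoolean() ? 1 : -1);
                if (branchDistance(neighbour) <= branchDistance(candidate)) {
                    candidate = neighbour;
                }
            }
            System.out.println("Input covering the branch: " + candidate);
        }
    }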
Premise
…and generators’ goals:
• structural coverage
• killing mutants (illustrated by the sketch after this list)
• violating assertions (including generating exceptions or crashes)
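As a hedged illustration of the mutant-killing goal, the following toy example (all names hypothetical) shows why the choice of test input matters: the same mutant survives one test and is killed by another.

    // Toy mutation example: the mutant replaces + with -.
    // doubleIt and its mutant are hypothetical, not from the slides.
    public class MutantExample {
        static int doubleIt(int x)       { return x + x; }  // original
        static int doubleItMutant(int x) { return x - x; }  // mutant: + replaced by -

        public static void main(String[] args) {
            // Input 0: original and mutant both return 0 -> mutant survives.
            System.out.println(doubleIt(0) == doubleItMutant(0));  // true
            // Input 3: 6 vs 0 -> an assertion on the output kills the mutant.
            System.out.println(doubleIt(3) == doubleItMutant(3));  // false
        }
    }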
Brainstorming the open problems…
• P1: string inputs that satisfy some language and are parsed
• P2: unbounded inputs: lists, arrays, (also) strings
• P3: complex data structures
• P4: loops
• P5: method sequences necessary to prepare for testing (see the sketch after this list)
• P6: mock database, file-system, environment synthesis
• P7: mock called functions / services synthesis
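Problem P5 is easiest to see on a small stateful class: the target branch is only reachable after a particular call sequence, which the generator must discover. The Account class below is a hypothetical illustration.

    // Sketch of P5: withdraw()'s success branch requires open() and
    // deposit() to be called first. Account is hypothetical.
    public class Account {
        private boolean isOpen = false;
        private int balance = 0;

        public void open()         { isOpen = true; }
        public void deposit(int v) { if (isOpen && v > 0) balance += v; }

        public int withdraw(int v) {
            if (isOpen && v > 0 && v <= balance) {  // target branch
                balance -= v;
                return v;
            }
            return 0;
        }

        public static void main(String[] args) {
            Account a = new Account();
            // A generator must find this sequence to cover the target branch:
            a.open();
            a.deposit(100);
            System.out.println(a.withdraw(40));  // prints 40
        }
    }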
Brainstorming the open problems…
• P8: propagation of the infection to the output, for killing mutants (see the sketch after this list)
• P9: code that uses type-dependent constructs (e.g., instanceof)
• P10: code that uses reflection
• P11: semantically meaningful inputs and preconditions on inputs
• P12: understandability of the test suite
• P13: understandability of assertion violations produced by automatically generated test cases
• P14: generation of tests for goals that go beyond coverage
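P8 is the classic reach-infect-propagate obstacle in mutation testing: a test can execute the mutated statement and corrupt an intermediate value without the corruption ever reaching the observable output. A hedged, hypothetical example:

    // Sketch of P8: the mutation (x -> x + 1) infects an intermediate
    // value, but a later comparison masks it for most inputs.
    // Hypothetical code; the 'mutated' flag just simulates the mutant.
    public class PropagationExample {
        static int classify(int x, boolean mutated) {
            int score = mutated ? x + 1 : x;  // infection point
            return score > 10 ? 1 : 0;        // masks the infection unless x == 10
        }

        public static void main(String[] args) {
            // x = 3: outputs agree, so the infection does not propagate.
            System.out.println(classify(3, false) == classify(3, true));   // true
            // x = 10: outputs differ (0 vs 1), so this input kills the mutant.
            System.out.println(classify(10, false) == classify(10, true)); // false
        }
    }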
Clustering the problems…
• C1: complex data structures:
   • P1: string inputs
   • P2: unbounded inputs
   • P3: complex data structures
   • P11b: preconditions on inputs
• C2: complex control flow and code constructs:
   • P4: loops
   • P5: method sequences
   • P9: type-dependent constructs
   • P10: reflection
Clustering the problems…
• C3: automated synthesis of mocks:
   • P6: mock database, file-system
   • P7: mock called functions / services synthesis
• C4: code generation goals:
   • P8: propagation of the fault infection
   • P14: go beyond coverage
• C5: understandability of generated tests:
   • P11a: semantically meaningful inputs
   • P12: understandability of the test suite
   • P13: understandability of assertion violations
Promising directions
• C1: complex data structures
• C3: automated synthesis of mocks
• C5: understandability of generated tests
Papers
• AUTOMOCK: automated synthesis of a mock environment for test case generation (sketched below).
• A novel approach to generate complex data structures in automated test data generation.
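To give the AUTOMOCK idea some shape, here is a minimal, hedged sketch of what a synthesized mock environment could look like: the code under test depends on an environment interface, and the generator fabricates canned responses that drive execution toward a target branch. The FileSystem interface and ConfigReader class are hypothetical illustrations, not the actual AUTOMOCK design.

    // Hedged sketch of mock-environment synthesis. All types here are
    // hypothetical illustrations, not the AUTOMOCK design.
    interface FileSystem {
        String readFile(String path);
    }

    class ConfigReader {
        private final FileSystem fs;
        ConfigReader(FileSystem fs) { this.fs = fs; }

        boolean isDebugEnabled() {
            String content = fs.readFile("/etc/app.conf");
            return content != null && content.contains("debug=true"); // target branch
        }
    }

    public class MockSketch {
        public static void main(String[] args) {
            // A synthesized mock: a canned response chosen by the generator
            // to cover the target branch without touching the real file system.
            FileSystem mock = path -> "debug=true";
            System.out.println(new ConfigReader(mock).isDebugEnabled()); // true
        }
    }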
Collaboration
Joint research to develop AUTOMOCK.
Grant proposal
A method to improve the understandability of automatically generated test cases.