An evaluation of tools for static checking of C++ code
E. Arderiu Ribera, G. Cosmo, S. M. Fisher, S. Paoli, M. Stavrianakou
CHEP2000, Padova, 10.02.2000
Origin, purpose and scope
• SPIDER-CCU project (CERN IT-IPT, LHC experiments, IT projects) to define a common C++ coding standard and a tool to automatically check code against it
• SPIDER C++ Coding Standard
  • 108 rules for naming, coding and style
• Tool evaluation
  • scope limited to rule-checking functionality
Approach and tool selection
• Involve potential users of the tool in
  • the definition of evaluation criteria
  • the planning of the evaluation
  • the actual technical evaluations
• Take into account time and resource constraints
• Preselect tools based on technical merit
Evaluated tools
• CodeCheck 8.01 B1 (Abraxas)
• QA C++ 3.1 (Programming Research Ltd.)
• CodeWizard 3.0 (Parasoft)
• Logiscope RuleChecker (Concerto/AuditC++) 3.5 (CS Verilog S.A.)
• TestBed 5.8.4 (LDRA Ltd.)
Evaluation environment
• Evaluate on real and representative HEP C++ code
  • GEANT4 toolkit
  • ATLAS “Event” package
  • ATLAS “classlib” utility library
• chosen because of
  • complexity
  • extensive use of STL
  • variety of style and expertise
  • familiarity to members of the evaluation team
Evaluation criteria
• Technical
  • coverage of the standard
  • addition of customised checks
  • other relevant configured checks
  • support of the ANSI C++ standard
  • support of template libraries and the STL
  • robustness
  • reliability
  • usability
  • customisability
  • performance
Evaluation criteria (cont’d)
• Operational
  • installation, deployment and upgrade of a centrally supported tool
• Managerial
  • licensing, maintenance costs, vendor information
• Other
  • quality and quantity of documentation (electronic, paper, WWW)
  • quality of available support
Evaluation results: CodeCheck
• limitations in parsing real code making extensive use of STL (no enhancements foreseen)
• cumbersome in terms of customisability and implementation of new rules
• excluded from further evaluation
Evaluation results: TestBed
• limitations in parsing complex code
• limited number of built-in rules, no possibility of adding new rules
• excluded from further evaluation
Evaluation results: Logiscope RuleChecker
• simple, easy to use, fast
• limited number of built-in rules
• limited possibility of adding new rules
• flexibility in report generation and quality
  • limited by proprietary language (CQL)
• excluded from further evaluation
Evaluation results: CodeWizard
• at least 71 checks implemented, including most of the items from S. Meyers’ “Effective C++” and “More Effective C++”
• configurable to cover 71% of the “SPIDER” standard
• customisable in terms of rule selection
• customisable in terms of code inclusion/exclusion
• ability to parse ANSI C++ with STL
• possibility of using RuleWizard for the addition of customised checks
  • not yet usable owing to poor documentation
Evaluation results: CodeWizard (cont’d)
• reports in graphical and ASCII format
  • not customisable
• information for headers and libraries necessary
  • straightforward by using the makefile
  • repetition of parsing and reporting
• performance equivalent to the compiler
• fully evaluated
Evaluation results: QA C++
• at least 500 checks implemented, including ISO C++
• configurable to cover 65% of the “SPIDER” standard
• customisable in terms of rule selection
• customisable in terms of code inclusion/exclusion
• full STL support foreseen for the next release
  • partial analysis possible via STL stubs provided by the company
• easy to learn and use, robust
Evaluation results: QA C++ (cont’d)
• information for headers and libraries necessary
  • possibility of single parsing and caching of headers
  • makefile integration non-trivial
• powerful GUI and command line, largely interchangeable
• high-quality, customisable reports
• performance a factor of two slower than the compiler
• fully evaluated
• BUT a completely new version (full ANSI C++ compliance, new parser) was not available at the time of the evaluation
Conclusions
• Evaluation process
  • suited to the goals, pragmatic, efficient
  • user involvement, careful definition of evaluation criteria and detailed planning essential
• Evaluation results
  • out of the five tools considered, two (CodeWizard and QA C++) were preselected on technical merit and fully evaluated
  • the final choice will depend on the weight given to the various features, relative cost, the needs of the institutes concerned and the development of promising new tools (e.g. the Together/Enterprise CASE tool and a tool by ITC-IRST and the ALICE experiment)