Report of results of technical session 2: The ETICS build process and metrics collection
Current experience in Grid Projects
• Developers sometimes don't appreciate the importance of testing from the beginning of a project
• Test plans, unit tests and similar practices are often missing
• Some developers use their own tools and methodologies for testing (unit testing)
• Many free tools are available, but most of them are buggy
• We need to provide guidelines, good examples and best practices (a minimal unit-test sketch follows below)
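As a concrete example of the kind of guideline material mentioned in the last bullet, here is a minimal JUnit 3 style unit test of the sort a project test plan might standardize on. The Calculator class is a hypothetical stand-in for real project code, inlined so the example is self-contained:

```java
import junit.framework.TestCase;

// Minimal JUnit 3 test case: each public test* method is one unit test.
public class CalculatorTest extends TestCase {

    // Hypothetical class under test, inlined to keep the example
    // self-contained.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    public void testAddPositiveNumbers() {
        assertEquals(4, new Calculator().add(2, 2));
    }

    public void testAddHandlesNegatives() {
        assertEquals(-3, new Calculator().add(-1, -2));
    }
}
```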
Current ETICS Building Process
The build pipeline runs the stages init → checkstyle → compile → test, supported by ETICS templates, examples and guidelines and by a per-project test plan.
• checkstyle: Sun Java conventions (Checkstyle, Jalopy) plus user-defined rules; the same approach applies to other languages; enforce the rules and report when they are broken (change the code?)
• compile: JavaCov adds probes and compiles both the code and the test code, then the unit tests are run (JUnit, GJTester); instrumented code is slower, so instrumentation can be enabled/disabled (a sketch of a JUnit suite for this step follows below)
• test: runs the test cases (static analysis, dependency analysis); integration testing (EBIT) and system integration testing (mock objects)
• Projects should be given the possibility of expressing their own acceptance criteria
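For illustration, a minimal sketch of the kind of JUnit 3 suite the unit-test step could invoke. AllTests is a hypothetical name, not an actual ETICS artifact; CalculatorTest is the test case sketched above, and real projects would add one line per test class:

```java
import junit.framework.Test;
import junit.framework.TestSuite;

// Aggregates the project's unit tests so the build can run them
// in one go.
public class AllTests {

    public static Test suite() {
        TestSuite suite = new TestSuite("All unit tests");
        suite.addTestSuite(CalculatorTest.class);
        return suite;
    }

    public static void main(String[] args) {
        // The text runner prints results to the console; a build tool
        // would typically invoke this from its test target.
        junit.textui.TestRunner.run(suite());
    }
}
```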
Collecting test results
• We need a common schema to express the results
• Result converters provided by ETICS for a few common tools (a converter sketch follows below)
• Developers will provide other converters
• The results will be collected and stored in a repository for later processing
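A rough sketch of what such a result converter could look like. The TestResult fields and the "name;PASS|FAIL;milliseconds" input format are invented here for illustration; they are not the actual ETICS schema, which is still to be designed:

```java
import java.util.ArrayList;
import java.util.List;

// Maps one tool's raw output lines into records of a common schema.
public class SampleConverter {

    // Hypothetical common-schema record.
    static class TestResult {
        final String name;
        final boolean passed;
        final long millis;

        TestResult(String name, boolean passed, long millis) {
            this.name = name;
            this.passed = passed;
            this.millis = millis;
        }
    }

    // Assumed raw format: "testName;PASS|FAIL;milliseconds".
    static List<TestResult> convert(String[] rawLines) {
        List<TestResult> results = new ArrayList<TestResult>();
        for (int i = 0; i < rawLines.length; i++) {
            String[] f = rawLines[i].split(";");
            results.add(new TestResult(f[0], "PASS".equals(f[1]),
                                       Long.parseLong(f[2])));
        }
        return results;
    }

    public static void main(String[] args) {
        String[] raw = { "testAdd;PASS;12", "testDivide;FAIL;7" };
        System.out.println(convert(raw).size() + " results converted");
    }
}
```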
Metrics (1)
• Lines of code, defects per lines of code (interactions with bug-tracking systems?); a toy line counter is sketched after this list
• Code fragility/robustness (number of up-level/down-level dependencies)
• Number of external dependencies
• Complexity (per package, per class)
• Check CMMI requirements
• Historical reporting on data, trend analysis
• Possible range of values → recommended values depending on typical scenarios
• Compare projects based on metrics, benchmarking, rank projects
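To make the first metric concrete, here is a toy line-of-code counter for a single Java source file. Real counters such as SLOCcount also handle block comments, string literals and many languages; this sketch only skips blank lines and pure '//' comment lines:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Usage: java LocCounter SomeFile.java
public class LocCounter {

    public static int countLines(String path) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(path));
        int loc = 0;
        String line;
        while ((line = in.readLine()) != null) {
            String t = line.trim();
            // Count lines that are neither blank nor pure comments.
            if (t.length() > 0 && !t.startsWith("//")) {
                loc++;
            }
        }
        in.close();
        return loc;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(args[0] + ": " + countLines(args[0]) + " LOC");
    }
}
```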
Metrics (2)
• Set goals and monitor metrics over time (see the sketch after this list)
• Need to define a schema to express metrics
• This is more feasible with static metrics; dynamic metrics have too many liabilities and/or dependencies
• Code coverage is in itself a metric. It refers to the quality of the testing more than the quality of the code, but as a consequence it is also possible to infer that the code itself is good
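A minimal sketch of what a metric record with goals attached might look like. The field names and the recommended range are invented for illustration, since the metric schema is still to be defined:

```java
// A metric value paired with a recommended range, so that goals can be
// checked and monitored over time.
public class Metric {
    final String name;
    final double value;
    final double recommendedMin;
    final double recommendedMax;

    Metric(String name, double value, double min, double max) {
        this.name = name;
        this.value = value;
        this.recommendedMin = min;
        this.recommendedMax = max;
    }

    boolean withinRecommendedRange() {
        return value >= recommendedMin && value <= recommendedMax;
    }

    public static void main(String[] args) {
        // Example: coverage measured at 62%, recommended range 70-100%.
        Metric coverage = new Metric("code-coverage", 62.0, 70.0, 100.0);
        System.out.println(coverage.name + " within goal: "
                + coverage.withinRecommendedRange());
    }
}
```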
Tools
• For Java there are lots of free tools that can be incorporated
• For other languages there are not so many tools
• Commercial tools (for C/C++, ...):
• GrammaTech CodeSurfer (not a free tool)
• CTA++ (unit testing) and CTC++ (coverage analysis) from Testwell Oy
• gcov (free coverage tool from GCC)
• SLOCcount and other line counters
Timelines
• Start with JUnit and JavaCov
• Provide software and documentation (WP4) – 2 weeks
• Design schemas for the results, possibly implement converters if the output format is not suitable (TBD)
• Design a generic wrapper with plug-ins for the different tools (TBD – end of July?); a sketch of the plug-in idea follows below
• Prototype implementation – September?
• Collection of metrics and reporting/analysis is left for next year (it will not be part of the first release)
• We should verify how complex it is to call some of the existing tools
• The most complex issue is to provide useful targets/interpretations of the metrics
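A rough sketch of the plug-in idea from the fourth bullet: one small interface per external tool, so the framework can invoke any tool and harvest its results uniformly. The interface and method names are hypothetical, not actual ETICS interfaces:

```java
import java.util.List;

// One implementation of this interface per wrapped tool
// (e.g. JUnit, Checkstyle, a coverage analyzer).
public interface ToolPlugin {

    // Identifier of the wrapped tool, e.g. "junit" or "checkstyle".
    String getName();

    // Run the tool against a working directory and return the path of
    // the raw report it produced.
    String run(String workDir) throws Exception;

    // Convert the raw report into records of the common result schema
    // (cf. the converter sketch earlier); raw List keeps this file
    // self-contained.
    List convertResults(String reportPath) throws Exception;
}
```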