Software Measurement João Saraiva HASLab / INESC TEC & Universidade do Minho, Portugal 2012
Measurement "To measure is to know." "If you cannot measure it, you cannot improve it." Lord Kelvin "You can't control what you can't measure." Tom DeMarco "Not everything that counts can be counted, and not everything that can be counted counts." Albert Einstein
Why Measure Software?! • Understand issues of software development • Make decisions on basis of facts rather than opinions • Predict conditions of future developments
What to measure in software Effort measures • Team size • Cost • Development time Quality measures • Number of failures • Number of faults • Mean Time Between Failures
Cost Model Purpose: estimate in advance the effort attributes (development time, team size, cost) of a project Problems involved: • Find the appropriate parameters defining the project (making sure they are measurable in advance) • Measure these parameters • Deduce effort attributes through appropriate mathematical formula
The Constructive Cost Model: COCOMO COCOMO is an algorithmic software cost estimation model developed by Barry W. Boehm. The model uses a basic regression formula with parameters that are derived from historical project data and current project characteristics.
COCOMO Algorithm COCOMO computes software development effort as a function of program size and a set of 15 "cost driver" attributes. Each attribute receives a rating on a six-point scale that ranges from "very low" to "extra high", and each rating corresponds to an effort multiplier.
COCOMO algorithm The product of all effort multipliers results in an effort adjustment factor (EAF). The COCOMO formula is: E = ai × (KLoC)^bi × EAF where E is the effort in person-months and KLoC is the estimated number of thousands of lines of code for the project.
COCOMO algorithm The coefficient ai and the exponent bi depend on the project's development mode:
Organic ("small" teams with "good" experience): ai = 3.2, bi = 1.05
Semi-detached ("medium" teams with mixed experience): ai = 3.0, bi = 1.12
Embedded (project with "tight" constraints): ai = 2.8, bi = 1.20
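The formula and table above can be sketched in a few lines of Python. This is an illustrative implementation of the intermediate COCOMO effort equation, not a full estimation tool; the mode names are labels for the three cases described on the slide.

```python
# Sketch of the intermediate COCOMO effort formula:
#   E = a_i * KLoC^b_i * EAF   (E in person-months)
# Coefficients per development mode, as in the table above.
MODES = {
    "organic": (3.2, 1.05),        # "small" teams, "good" experience
    "semi-detached": (3.0, 1.12),  # "medium" teams, mixed experience
    "embedded": (2.8, 1.20),       # "tight" constraints
}

def cocomo_effort(kloc, mode, eaf=1.0):
    """Effort in person-months for an estimated size in KLoC."""
    a, b = MODES[mode]
    return a * kloc ** b * eaf

# Example: a 32 KLoC organic-mode project with a neutral EAF.
print(round(cocomo_effort(32, "organic"), 1))
```

Note how the EAF (the product of the 15 effort multipliers) simply scales the size-based estimate up or down.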
Project parameters Elements that can be measured in advance, to be used in the cost model: • Lines of code (LOC, KLOC) • McCabe's Cyclomatic Complexity • Function points • Application points
Lines of Code (LOC) Pros as a cost estimate parameter: • Appeals to programmers • Fairly easy to measure on the final product • Correlates well with other effort measures Cons: • Ambiguous (several instructions per line, whether to count comments, reused code, etc.) • Does not distinguish between programming languages of various abstraction levels • Low-level, implementation-oriented • Difficult to estimate in advance
McCabe's Cyclomatic Complexity This metric indicates the number of 'linear' segments in a function/procedure/method (i.e. sections of code with no branches) and can therefore be used to determine the number of tests required to obtain complete coverage. It can also be used to indicate the psychological complexity of a method. A function with no branches has a Cyclomatic Complexity of 1, since there is a single linear path through it. This number is incremented whenever a branch is encountered. We usually count as branching statements: 'for', 'while', 'do', 'if', 'case' (optional) and the ternary operator (optional).
Object Oriented Software Measures • Weighted Methods Per Class (WMC) This metric is the sum of the complexities of the methods defined in a class. It therefore represents the complexity of a class as a whole, and this measure can be used to indicate the development and maintenance effort for the class. We may measure the complexity of methods via McCabe's Cyclomatic Complexity metric.
Object Oriented Software Measures • Depth of Inheritance Tree of a Class (DIT) DIT is the maximum length of a path from a class to a root class in the inheritance structure of a system. DIT measures how many super-classes can affect a class. • Coupling Between Objects (CBO) CBO is the number of other classes that a class is coupled to.
Object Oriented Software Measures • Number of Children (NOC) NOC is the number of immediate subclasses (children) subordinated to a class (parent) in the class hierarchy. • Response for a Class (RFC) RFC is the number of methods that can be invoked as a result of a message sent to an object of the class. And counting...
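Two of these hierarchy metrics, DIT and NOC, can be computed directly from Python's class machinery. This is a toy sketch on an illustrative hierarchy (the class names are made up); it counts `object` as the root, so depths are one higher than in a model that ignores the implicit root.

```python
# Illustrative class hierarchy for DIT and NOC.
class Vehicle: pass
class Car(Vehicle): pass
class Truck(Vehicle): pass
class SportsCar(Car): pass

def dit(cls):
    """Depth of Inheritance Tree: longest path up to the root (object)."""
    if cls is object:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

def noc(cls):
    """Number of Children: immediate subclasses of cls."""
    return len(cls.__subclasses__())

print(dit(SportsCar))  # SportsCar -> Car -> Vehicle -> object = 3
print(noc(Vehicle))    # Car and Truck = 2
```

A deep tree (high DIT) means many superclasses can affect a class; a wide one (high NOC) means many subclasses depend on it.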
Function Points A software application is in essence a defined set of elementary business functions. A function point is not a screen or a report, but an elementary business process. Function Point Analysis is a structured technique for classifying the components of a system. It breaks systems into smaller components, so they can be better understood and analyzed.
Function Points: Mechanics Following a set of prescribed rules, break the application into parts: – External Inputs – External Outputs – External Inquiries – Internal files and external files As part of the process, determine the interactions between components.
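Once the components are classified, an unadjusted function point count is a weighted sum. The sketch below uses the standard IFPUG "average complexity" weights; a real count first rates each component low/average/high and applies the corresponding weight.

```python
# IFPUG "average complexity" weights per component type
# (a simplification: real counts use low/average/high weights).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_files": 7,
}

def unadjusted_fp(counts):
    """Sum the component counts times their complexity weights."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Example: counts for a small application (illustrative numbers).
print(unadjusted_fp({"external_inputs": 3,
                     "external_outputs": 2,
                     "internal_files": 1}))  # 3*4 + 2*5 + 1*10 = 32
```

Because the count depends only on inputs, outputs, inquiries and files, it can be derived from a design document before any code exists, which is exactly what makes it useful as a cost-model parameter.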
Function Points Pros as a cost estimate parameter: • Relates to functionality, not just implementation • Experience of many years (ISO standard) • Can be estimated from design • Correlates well with other effort measures Cons: • Oriented towards business data processing • Fixed weights
Application Points: high-level effort generators (screens, reports, high-level modules) Pros as a cost estimate parameter: • Relates to high-level functionality • Can be estimated very early on Cons: • Remote from the actual program
Complexity Models Objective: Estimate the complexity of a software system - Lines of code - Function points - Halstead's volume measure (V = N log2 η, where N is the program length and η the program vocabulary (operators + operands)) - OO measures - McCabe's cyclomatic complexity
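Halstead's volume can be illustrated directly from its definition: N is the total number of operator and operand occurrences, η the number of distinct ones. The token lists below are illustrative; a real tool would extract them from a parser.

```python
import math

def halstead_volume(operators, operands):
    """Halstead volume V = N * log2(eta), from token occurrence lists."""
    n1, n2 = len(set(operators)), len(set(operands))  # distinct tokens
    N1, N2 = len(operators), len(operands)            # total occurrences
    N, eta = N1 + N2, n1 + n2
    return N * math.log2(eta)

# Tokens for the expression:  x = x + 1
ops = ["=", "+"]            # operators: '=' and '+'
opnds = ["x", "x", "1"]     # operands: 'x' twice, '1' once
print(halstead_volume(ops, opnds))  # N = 5, eta = 4 -> 5 * 2 = 10.0
```

Volume grows with both program length and vocabulary, so it rewards short programs written with few distinct symbols.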
SIG Software Quality Model Assessing the quality of a new program or software application consists of: - Computing the generic and specific metrics on the program - Assigning a star rating to the program according to the aggregated results and the pre-computed star ranking