ECE 453 – CS 447 – SE 465 Software Testing & Quality Assurance
Lecture 35-36
Instructor: Paulo Alencar
Overview
• Object-Oriented Metrics
• Quantitative Quality Model
• General Classification
• Object-Oriented Metrics

Sources: object-oriented metrics (online resources) – Verbruggen, R., Stoecklin, S., etc.
Quantitative Quality Model
Quality according to the ISO 9126 standard
• Divide-and-conquer approach via a “hierarchical quality model”
• Leaves are simple metrics, measuring basic attributes

[Diagram: ISO 9126 factors (Functionality, Reliability, Efficiency, Usability, Maintainability, Portability) are refined into characteristics (error tolerance, accuracy, consistency, simplicity, modularity, ...), which are measured by leaf metrics, e.g.:
• defect density = #defects / size
• correction impact = #components changed]
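As a small illustration of the leaf metrics above, here is a hypothetical sketch of the two example metrics from the diagram; the function names and sample numbers are invented, not part of ISO 9126:

```python
# Hypothetical sketch of two leaf metrics from the hierarchical quality
# model. Function names and sample values are illustrative only.

def defect_density(num_defects, size_kloc):
    """Reliability leaf metric: defect density = #defects / size."""
    return num_defects / size_kloc

def correction_impact(components_changed):
    """Maintainability leaf metric: #components changed per correction."""
    return components_changed

print(defect_density(12, 4.0))   # 12 defects in 4 KLOC -> 3.0 defects/KLOC
print(correction_impact(5))      # a correction touching 5 components -> 5
```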
Defined Quality Model
Define the quality model with the development team
• Team chooses the characteristics, design principles, metrics ... and the thresholds

[Diagram: Factor → Characteristic → Design Principle → Metric, e.g. Maintainability → Modularity →
• design class as an abstract data type: number of private attributes ]2, 10[
• encapsulate all attributes: number of public attributes ]0, 0[
• number of public methods ]5, 30[
• avoid complex interfaces: average number of arguments [0, 4[ ]
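The threshold check the slide implies can be sketched as follows; the metric names mirror the slide, but the table structure is an assumption, and the open/closed interval brackets are simplified to closed intervals for brevity:

```python
# Hypothetical sketch: flag a class whose measured value falls outside the
# team-chosen threshold interval. Boundary openness (]a, b[ vs [a, b[) is
# simplified to closed intervals here.

THRESHOLDS = {
    "private_attributes": (2, 10),   # slide: ]2, 10[
    "public_attributes":  (0, 0),    # slide: ]0, 0[ (i.e., none allowed)
    "public_methods":     (5, 30),   # slide: ]5, 30[
    "avg_arguments":      (0, 4),    # slide: [0, 4[
}

def within(metric, value):
    lo, hi = THRESHOLDS[metric]
    return lo <= value <= hi

print(within("public_methods", 12))    # True: inside ]5, 30[
print(within("public_attributes", 3))  # False: attributes not encapsulated
```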
Measure, Metric, Indicator
• Measure: provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
• Metric: relates the individual measures in some way.
• Indicator: a combination of metrics that provides insight into the software process, the project, or the product itself.
What should be measured?

[Diagram: measurement draws on the process (yielding process metrics and project metrics) and on the product (yielding product metrics). What do we use as a basis? Size? Function?]

• Process: the software development process (requirements, design, code, test, implementation)
• Product: code, requirements specification, production software, design documentation, risk assessment

• Process metrics give insight into the dynamics of a given software process, enabling project management to evaluate the efficiency of that process.
• Project metrics provide software project management with means to measure the risk, progress, and quality of the project.
• Product metrics provide software developers with means to measure defects, errors, etc.
Goals for Using OO Metrics
• To better understand product quality
• To assess process effectiveness
• To improve the quality of the work performed at the project level
Distinguishing Characteristics of OO Metrics
• Localization – OO metrics need to apply to the class as a whole and should reflect the manner in which classes collaborate with one another
• Encapsulation – OO metrics chosen need to reflect the fact that class responsibilities, attributes, and operations are bound as a single unit
• Information hiding – OO metrics should provide an indication of the degree to which information hiding has been achieved
Distinguishing Characteristics of OO Metrics
• Inheritance – OO metrics should reflect the degree to which reuse of existing classes has been achieved
• Abstraction – OO metrics represent abstractions in terms of measures of a class (e.g., number of instances per class per application)
Object-Oriented Design Model Metrics
• Size (length, functionality)
• Complexity (how classes interrelate to one another)
• Coupling (physical connections between design elements)
• Sufficiency (how well design components reflect all properties of the problem domain)
• Completeness (coverage of all parts of the problem domain)
Object-Oriented Design Model Metrics
• Cohesion (manner in which all operations work together)
• Primitiveness (degree to which attributes and operations are atomic)
• Similarity (degree to which two or more classes are alike)
• Volatility (likelihood that a design component will change)
Class-Oriented Metrics
• Chidamber and Kemerer (CK) Metrics Suite
  • weighted methods per class (WMC)
  • depth of inheritance tree (DIT)
  • number of children (NOC)
  • coupling between object classes (CBO)
  • response for a class (RFC)
  • lack of cohesion in methods (LCOM)
Weighted Methods per Class (WMC)
• WMC = c1 + c2 + ... + cn, where ci is the complexity of each method Mi of the class
• Often, only public methods are considered
• Complexity may be the McCabe (cyclomatic) complexity of the method
• Smaller values are better
• Perhaps the average complexity per method is a better metric?
• The number of methods and the complexity of the methods involved is a direct predictor of how much time and effort is required to develop and maintain the class.
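The WMC sum can be sketched directly; this assumes the per-method McCabe complexities have already been computed by some other tool, and the sample values are invented:

```python
# Minimal sketch of WMC = c1 + c2 + ... + cn. The per-method complexities
# (e.g., McCabe values) are assumed to be computed elsewhere.

def wmc(method_complexities):
    """Sum of per-method complexities for one class."""
    return sum(method_complexities)

def avg_complexity(method_complexities):
    """The alternative the slide suggests: average complexity per method."""
    return wmc(method_complexities) / len(method_complexities)

# A class with four methods of complexity 1, 2, 2, 5:
print(wmc([1, 2, 2, 5]))             # 10
print(avg_complexity([1, 2, 2, 5]))  # 2.5
```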
Depth of Inheritance Tree (DIT)
• For the system under examination, consider the hierarchy of classes
• DIT is the length of the maximum path from the node to the root of the tree
• Relates to the scope of the properties: how many ancestor classes can potentially affect a class
• Smaller values are better
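DIT can be computed by walking a parent map up to the root; the hierarchy below is invented for illustration, and single inheritance is assumed:

```python
# Sketch: DIT from an explicit parent map (single inheritance assumed).
# Class names are invented for illustration.

PARENT = {"Shape": None, "Polygon": "Shape", "Triangle": "Polygon"}

def dit(cls):
    """Length of the path from cls up to the root of the hierarchy."""
    depth = 0
    while PARENT[cls] is not None:
        cls = PARENT[cls]
        depth += 1
    return depth

print(dit("Triangle"))  # 2: Triangle -> Polygon -> Shape
print(dit("Shape"))     # 0: Shape is the root
```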
Number of Children (NOC)
• For any class in the inheritance tree, NOC is the number of immediate children of the class (the number of direct subclasses)
• How would you interpret this number?
• A moderate value indicates scope for reuse; high values may indicate an inappropriate abstraction in the design
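Using the same kind of parent map, NOC is a count of classes naming the given class as their direct parent; the hierarchy is again invented:

```python
# Sketch: NOC from a parent map. A class's NOC is the number of classes
# whose direct parent it is. The hierarchy is invented for illustration.

PARENT = {"Shape": None, "Polygon": "Shape", "Circle": "Shape",
          "Triangle": "Polygon"}

def noc(cls):
    return sum(1 for parent in PARENT.values() if parent == cls)

print(noc("Shape"))     # 2: Polygon and Circle are direct subclasses
print(noc("Triangle"))  # 0: a leaf class
```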
Coupling Between Object Classes (CBO)
• For a class C, the CBO metric is the number of other classes to which the class is coupled
• A class X is coupled to class C if X operates on (affects) C, or C operates on X
• Excessive coupling indicates weakness of class encapsulation and may inhibit reuse
• High coupling also indicates that more faults may be introduced due to inter-class activities
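Since coupling runs in both directions per the slide (X operating on C, or C operating on X), a sketch needs to union the outgoing and incoming "uses" relations; the relation below is invented:

```python
# Sketch: CBO from a "uses" relation. Coupling counts both the classes a
# class uses and the classes that use it. The relation is invented.

USES = {"Order":    {"Customer", "Product"},
        "Invoice":  {"Order"},
        "Customer": set(),
        "Product":  set()}

def cbo(cls):
    outgoing = USES[cls]                                      # classes cls operates on
    incoming = {c for c, used in USES.items() if cls in used} # classes operating on cls
    return len((outgoing | incoming) - {cls})

print(cbo("Order"))  # 3: Customer, Product (used) + Invoice (user)
```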
Response for a Class (RFC)
• RFC is the size of the response set of the class: the class's own methods plus the methods called in response to a message that invokes a method Mi
• Counted over the fully nested set of calls
• Smaller numbers are better
• Larger numbers indicate increased complexity and debugging difficulty: if a large number of methods can be invoked in response to a message, the testing and debugging of the class becomes more complicated
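A one-level version of the response set can be sketched as below; the call graph is invented, and closing the set transitively (the slide's "fully nested set of calls") would require repeating the union step until it stops growing:

```python
# Sketch: RFC as the size of the response set -- the class's own methods
# plus every method they call directly (one level only; the fully nested
# variant would close this set transitively). Call graph is invented.

CALLS = {"A.save": {"DB.write", "A.validate"},
         "A.validate": set()}

def rfc(own_methods):
    response = set(own_methods)
    for m in own_methods:
        response |= CALLS.get(m, set())
    return len(response)

print(rfc(["A.save", "A.validate"]))  # 3: A.save, A.validate, DB.write
```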
Lack of Cohesion in Methods (LCOM)
• Based on the number of methods in a class that reference the same instance variables
• A measure of the “tightness” of the code
• If a method references many instance variables that other methods do not share, the class is more complex and less cohesive
• The larger the number of similar methods (methods sharing instance variables) in a class, the more cohesive the class is
• Cohesiveness of methods within a class is desirable, since it promotes encapsulation
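One common formulation of the CK LCOM metric compares method pairs that share no instance variable (P) against pairs that do (Q), with LCOM = max(P − Q, 0); the sketch below uses that formulation, with an invented class:

```python
# Sketch of the pairwise LCOM formulation: P = method pairs sharing no
# instance variable, Q = pairs sharing at least one; LCOM = max(P - Q, 0).
# The example class is invented.
from itertools import combinations

def lcom(method_vars):
    """method_vars maps each method name to the instance variables it uses."""
    p = q = 0
    for vars_a, vars_b in combinations(method_vars.values(), 2):
        if set(vars_a) & set(vars_b):
            q += 1   # cohesive pair: shares an instance variable
        else:
            p += 1   # non-cohesive pair: nothing in common
    return max(p - q, 0)

# Two methods sharing 'x', one unrelated method using only 'y':
print(lcom({"m1": ["x"], "m2": ["x"], "m3": ["y"]}))  # 1  (P=2, Q=1)
```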
Class-Oriented Metrics
• Lorenz and Kidd
  • class size (CS)
  • number of operations overridden by a subclass (NOO)
  • number of operations added by a subclass (NOA)
  • specialization index (SI)
• Harrison, Counsell, and Nithi (MOOD) Metrics Suite
  • method inheritance factor (MIF)
  • coupling factor (CF)
  • polymorphism factor (PF)
Design Metrics and Experience
From Mark Lorenz (case study):
1. The average method size should be less than 8 LOC for Smalltalk and 24 LOC for C++. Bigger averages indicate OO design problems (i.e., function-oriented coding).
2. The average number of methods per class should be less than 20. Bigger averages indicate too much responsibility in too few classes.
Design Metrics and Experience
3. The average number of instance variables per class should be less than 6. Similar reasoning as the previous point: more instance variables indicate that one class is doing more than it should.
4. The class hierarchy nesting level should be less than 6. Start counting at the level of any framework classes that you use, or at the root class if you don't.
5. The number of subsystem-to-subsystem relationships should be less than the average number of class-to-class relationships within a subsystem.
Design Metrics and Experience
6. The number of class-to-class relationships within a subsystem should be relatively high.
7. The instance variable usage by the methods within a class can be used to look for possible design problems.
8. The average number of comment lines per method should be greater than 1. Smaller averages indicate too little documentation of the (small) methods.
9. The number of problem reports per class should be low.
Design Metrics and Experience
10. The number of times a class is reused across the original application and in other applications might indicate a need to redesign it.
11. The number of classes and methods thrown away should occur at a steady rate throughout most of the development process.
Operation-Oriented Metrics
• Average operation size (OSavg)
• Operation complexity (OC)
• Average number of parameters per operation (NPavg)
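Two of the three averages above can be sketched over a simple per-class summary; the data structure and the sample class are assumptions for illustration:

```python
# Sketch of OSavg and NPavg over a class summarized as
# {method_name: (lines_of_code, [parameter_names])}. The data is invented.

METHODS = {"open":  (4,  ["path", "mode"]),
           "read":  (10, ["size"]),
           "close": (2,  [])}

def os_avg(methods):
    """Average operation size in LOC."""
    return sum(loc for loc, _ in methods.values()) / len(methods)

def np_avg(methods):
    """Average number of parameters per operation."""
    return sum(len(params) for _, params in methods.values()) / len(methods)

print(os_avg(METHODS))  # (4 + 10 + 2) / 3 = 5.33...
print(np_avg(METHODS))  # (2 + 1 + 0) / 3 = 1.0
```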
OO Design Metrics per Type
• Encapsulation
  • lack of cohesion in methods (LCOM)
  • percent public and protected (PAP)
  • public access to data members (PAD)
• Inheritance
  • number of root classes (NOR)
  • fan in (FIN)
  • number of children (NOC)
  • depth of inheritance tree (DIT)
OO Design Metrics per Type
• Class complexity
  • weighted methods per class (WMC)
  • coupling between object classes (CBO)
  • response for a class (RFC)
OO Product Metrics
• Number of scenario scripts (NSS)
• Number of key classes (NKC)
• Number of subsystems (NSUB)