Learn to measure and assess the internal attributes, size, structure, and quality of software products. Explore methodologies for software size measurement and analysis for improved productivity and efficiency in software development projects.
Learning outcomes
1. Able to measure internal product attributes.
2. Able to measure the size of a product.
3. Able to measure the structure of a product.
4. Able to measure software quality.
5. Able to classify and determine size, structure and software quality.
6. Benefits.
Aspects of Software Size
Simple measures of size are often rejected because they do not adequately reflect:
• Effort: they fail to take account of redundancy and complexity.
• Productivity: they fail to take account of true functionality and effort.
• Cost: they fail to take account of complexity and reuse.
Aspects of Software Size
Software size can be described with four attributes:
• Length – the physical size of the product.
• Functionality – the amount of function supplied by the product.
• Complexity – interpreted in different ways (see below).
• Reuse – the extent to which the product was copied or modified from a previous version or an existing product.
Aspects of Software Size
Complexity can be interpreted as:
• Problem complexity – measures the complexity of the underlying problem.
• Algorithmic complexity – the complexity of the algorithm implemented to solve the problem; it measures the efficiency of the software.
• Structural complexity – measures the structure of the software used to implement the algorithm.
• Cognitive complexity – measures the effort required to understand the software.
2. LENGTH: We want to measure the size of code, design & specification.
2.1 CODE – the product of a programming language
2.1.1 Traditional code measures:
A) LOC: LOC = CLOC + NCLOC (commented plus non-commented lines of code)
B) Textual code – counting decisions must be made about:
• blank lines
• comments
• data declarations
• lines that contain several separate instructions
C) Non-textual code: includes the GUI of a system
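As a minimal sketch (not a standard counting tool), the LOC = CLOC + NCLOC decomposition can be illustrated in Python, assuming `#`-style comments and a convention that excludes blank lines:

```python
def count_loc(source: str):
    """Count comment lines (CLOC) and non-comment lines (NCLOC)."""
    cloc = ncloc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue                  # blank lines are not counted
        if stripped.startswith("#"):
            cloc += 1                 # comment-only line
        else:
            ncloc += 1                # code line (may carry a trailing comment)
    return cloc, ncloc, cloc + ncloc  # (CLOC, NCLOC, LOC)

program = """\
# read two numbers
a = 1

b = 2
print(a + b)  # show the sum
"""
cloc, ncloc, loc = count_loc(program)  # 1 comment, 3 code, 4 LOC
```

The counting conventions (blank lines excluded, trailing comments counted as code) are one choice among several; as the bullets above note, each organization must decide and document them.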
2. LENGTH
Questions an organization can answer by measuring length:
• What is our smallest/largest/average project?
• What is our productivity?
• What are the trends in project length over time?
• What is the largest/smallest/average module length?
• Does module length influence the number of faults?
2.1.2 Dealing with non-textual or external code:
• Language dependency causes problems when comparing counts.
• LOC assumes software code consists purely of text.
• Visual programming & windowing environments are dramatically changing our notion of what a software program is.
2. LENGTH
2.2 Specification & design
• A specification or design consists of text, graphs, symbols & mathematical diagrams.
• When measuring code size, we identify atomic objects to count: LOC, executable statements, instructions, objects & methods.
• For specifications and designs, the atomic objects to count are the number of pages (text & diagrams together) as well as the different types of diagrams & symbols.
• We can view length as a composite measure: text length & diagram length.
• Ways to handle well-known methods:
• The atomic objects for data-flow diagrams (DFD) are processes (bubble nodes), external entities (boxes), data stores & data flows (arcs).
• The atomic entities for an algebraic specification are sorts, functions, operations & axioms.
• The atomic entities for a Z schema are the various lines appearing in the specification.
2. LENGTH
2.3 Predicting length
• We want to predict length as early as possible.
• Length may be predicted using the median expansion ratio from design length to code length on similar past projects.
• Use the size of the design to predict the size of the resulting code.
• The design-to-code expansion ratio is: size of code ÷ size of design.
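The prediction step above can be sketched as follows; all project figures here are hypothetical:

```python
# Predict code length from design length using the median
# design-to-code expansion ratio observed on similar past projects.
past_projects = [
    (120, 480),   # (design size, delivered code size) -- made-up figures
    (200, 900),
    (150, 525),
]
ratios = sorted(code / design for design, code in past_projects)
median_ratio = ratios[len(ratios) // 2]          # median of 3.5, 4.0, 4.5

new_design_size = 180                            # size of the new design
predicted_code_size = new_design_size * median_ratio
```

The median is preferred over the mean here because a single unusual project would otherwise skew the ratio.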
3. REUSE
• Reuse improves our productivity and quality.
• Code may be reused with modification or reused without modification:
• Reused verbatim – code in the unit is reused without any changes.
• Slightly modified – fewer than 25% of the LOC in the unit were modified.
• Extensively modified – 25% or more of the LOC were modified.
• New – newly developed code.
Hewlett-Packard considers 3 levels of code:
• New code
• Reused code – used without any modification
• Leveraged code – modification of existing code
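The four-level classification above can be sketched as a small function; the 25% threshold comes from the text, while the function name and signature are illustrative:

```python
def reuse_level(pct_modified: float, is_new: bool = False) -> str:
    """Classify a code unit by the percentage of its LOC that were modified."""
    if is_new:
        return "new"                      # newly developed code
    if pct_modified == 0:
        return "reused verbatim"          # no changes at all
    if pct_modified < 25:
        return "slightly modified"        # fewer than 25% of LOC changed
    return "extensively modified"         # 25% or more of LOC changed
```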
2.4 FUNCTIONALITY
2.4.1 Albrecht's function point approach:
• Effort estimation based on function points (FP).
• FP measures the amount of functionality described by the specification.
• UFC (Unadjusted Function Count): from some representation of the software, we count the number of items of the following types:
• External inputs – provided by the user to describe distinct application-oriented data (e.g. file names, menu selections); does not include inquiries.
• External outputs – provided to the user, generating application-oriented data, e.g. reports.
• External inquiries – inputs requiring a response.
• External files – machine-readable interfaces to other systems.
• Internal files – logical master files in the system.
2.4 FUNCTIONALITY
UFC: Unadjusted Function Count – the weighted sum of the item counts above.
TCF: Technical Complexity Factor – TCF = 0.65 + 0.01 × ΣFi, where F1 … F14 are 14 technical factors, each rated 0–5.
FP = UFC × TCF
2.4 FUNCTIONALITY
LIMITATIONS OF ALBRECHT'S FP:
1. Problem with subjectivity in the technology factor:
• Since the TCF may range from 0.65 to 1.35, the UFC can be changed by ±35%.
2. Problem with double counting:
• It is possible to count internal complexity twice: in weighting the inputs for the UFC, and again in the TCF.
3. Problem with counter-intuitive values:
• When each Fi is "average" and rated 3, we would expect the TCF to be 1; instead the formula yields 1.07.
4. Problem with accuracy:
• The TCF does not significantly improve resource estimates and does not seem useful in increasing the accuracy of prediction.
2.4 FUNCTIONALITY
5. Problem with early life-cycle use:
• FP calculation requires a full software specification; a user-requirements document is not sufficient.
6. Problem with changing requirements:
• The number & complexity of inputs, outputs, inquiries & other FP-related items will be underestimated in a specification because they are not well understood early in a project.
7. Problem with differentiating specified items:
• The calculation of FP from a specification cannot be completely automated.
2.4 FUNCTIONALITY
8. Problem with technology dependence:
• The counting rules for FP may require adjustment to the particular development method being used.
9. Problem with application domain:
• The use of FP in real-time and scientific applications is controversial.
10. Problem with subjective weighting:
• The choice of weights for calculating the UFC was determined subjectively from IBM experience; these values may not be appropriate in other development environments.
11. Problem with measurement theory:
• FP adds and multiplies measures drawn from different (largely ordinal) scales, which is not a meaningful operation in measurement-theory terms.
2.4 FUNCTIONALITY
2.4.2 COCOMO 2.0:
• COCOMO is a model for predicting effort from a formula whose independent variable is size.
• Object points are used as the size measure in COCOMO 2.0. The object-point calculation involves counting the number of screens, reports & third-generation-language (3GL) components.
• Each object is classified as simple, medium or difficult, and a complexity weight is assigned for each object type.
• To allow for reuse: New object points = (object points) × (100 − r) / 100, where r is the percentage of reuse.
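The object-point computation can be sketched as follows; the complexity weights follow the published COCOMO 2.0 table (screens 1/2/3, reports 2/5/8, 3GL components 10), while the object inventory and the reuse percentage r are made up:

```python
# COCOMO 2.0 object-point complexity weights by object type.
COMPLEXITY_WEIGHTS = {
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    "3gl":    {"simple": 10, "medium": 10, "difficult": 10},  # single weight
}

# Hypothetical inventory of objects in a design.
objects = [
    ("screen", "simple"), ("screen", "difficult"),
    ("report", "medium"),
    ("3gl", "difficult"),
]
object_points = sum(COMPLEXITY_WEIGHTS[kind][level] for kind, level in objects)

r = 20  # assumed percentage of reuse
new_object_points = object_points * (100 - r) / 100
```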
2.4 FUNCTIONALITY
2.4.3 DeMarco's approach
• DeMarco proposed a functionality measure based on structured analysis & design notation.
• The approach involves the bang metric (specification weight metric).
• The bang metric involves 2 measures:
• Function-strong systems – based on the number of functional primitives (lowest-level bubbles) in the DFD. The basic functional-primitive count is weighted according to the type of functional primitive & the number of data tokens it uses.
• Data-strong systems – based on the number of entities in the ER model. The basic entity count is weighted according to the number of relationships involving each entity.
2.5 COMPLEXITY
• 2 aspects of computational complexity:
1. Time complexity – the resource is computer time.
2. Space complexity – the resource is computer memory.
• Problem complexity – measures the complexity of the underlying problem.
• Algorithmic complexity – measures the efficiency of the software.
• Structural complexity – measures the structure of the software used to implement the algorithm.
• Cognitive complexity – measures the effort required to understand the software.
2.5 COMPLEXITY
2.5.1 Measuring algorithmic efficiency
• Determine the time & memory an algorithm requires.
• Example: for the binary search algorithm, the maximum number of comparisons is an internal attribute of the algorithm.
• Measuring efficiency:
• Time efficiency can also be measured externally, by running the program (compiled by a particular compiler, on a particular machine, with particular inputs) and measuring the actual processing time.
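For instance, the number of comparisons made by binary search can be counted directly; unlike wall-clock time, this count is the same on any machine or compiler. This instrumented version is a sketch:

```python
def binary_search(items, target):
    """Return (index, comparisons) for a sorted list; index -1 if absent."""
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1              # one three-way comparison per probe
        if items[mid] == target:
            return mid, comparisons
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

data = list(range(1024))
_, worst = binary_search(data, 1023)  # worst case for this list: 11 probes,
                                      # i.e. floor(log2(1024)) + 1
```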
2.5 COMPLEXITY
Big-O notation
• Big-O notation is the language we use to describe how long an algorithm takes to run; it is how we compare the efficiency of different approaches to a problem.
• It expresses the running time (or space requirement) of an algorithm as a function of the input size.
• Mathematically, Big-O notation describes the limiting behavior of a function as its argument tends towards a particular value or infinity.
• It is therefore used to classify algorithms according to how their running time or space requirements grow as the input size grows.
2.5 COMPLEXITY
Measuring problem complexity:
• A software product can be modeled as an algorithm, so Big-O analysis also provides a means of measuring the complexity of the underlying problem.
• We say a particular problem requires f(n) computations for an input of size n.
2.6.1 Types of Structural Measures
• Size tells us about the effort needed to create the product.
• The structure of the product plays a part not only in the required development effort, but also in how the product is maintained.
• We view structure as having at least 3 parts:
• Control-flow structure
• Data-flow structure
• Data structure
2.7 CONTROL FLOW STRUCTURE
• Control-flow structure: the sequence in which instructions are executed; it reflects the iterative & looping nature of the program.
• Data-flow structure: indicates the behavior of the data as it interacts with the program; it keeps track of a data item as it is created or handled by the program.
• Data structure: the organization of the data itself, independent of the program. Sometimes a program is complex due to a complex data structure rather than complex control or data flow.
2.7 CONTROL FLOW STRUCTURE
• Control flow is usually modeled with directed graphs:
• each node corresponds to a program statement;
• each arc (directed edge) indicates the flow of control from one statement to another.
• These directed graphs are called control-flow graphs, or flowgraphs.
2.7 CONTROL FLOW STRUCTURE
2.7.1 Flowgraph model of structure:
• In-degree – the number of arcs arriving at a node.
• Out-degree – the number of arcs that leave a node.
• Simple path – a path in which there are no repeated edges.
• Start and stop nodes – distinguished by encircling them.
• Procedure node – out-degree = 1.
• Predicate node – out-degree > 1.
2.7 CONTROL FLOW STRUCTURE
2.7.1.1 Sequencing and nesting
a) Sequencing operation on flowgraphs:
• Builds a new flowgraph from existing flowgraphs.
• The sequence of flowgraphs F1 & F2 is the flowgraph formed by merging the stop node of F1 with the start node of F2.
• Written F1; F2, Seq(F1, F2), or P2(F1, F2).
b) Nesting operation on flowgraphs:
• Suppose D1 has a procedure node x. The nesting of D3 onto D1 at x is the flowgraph formed by replacing the arc from x with the whole of D3.
• The resulting graph is written D1(D3 on x).
2.7 CONTROL FLOW STRUCTURE
Sequencing & nesting operations on flowgraphs: (figure)
2.7 CONTROL FLOW STRUCTURE
The generalized notion of structuredness
• A program is structured if it can be composed using a small set of allowable constructs: sequence, selection & iteration.
• We want to be able to assess a program and decide whether or not it is structured.
2.7 CONTROL FLOW STRUCTURE
2.7.2.2 Prime decomposition:
• Primes are the flowgraphs that cannot be decomposed non-trivially by sequencing or nesting.
• Every flowgraph has a unique decomposition into a hierarchy of primes.
2.7 CONTROL FLOW STRUCTURE
2.7.2 Hierarchical measures
2.7.2.1 McCabe's cyclomatic complexity measure:
• McCabe proposed that program complexity be measured by the cyclomatic number of the program's flowgraph:
v(F) = e − n + 2, where F has e arcs and n nodes.
• The cyclomatic number measures the number of linearly independent paths through F.
• Equivalently, v(F) = 1 + d, where d is the number of predicate nodes in F.
Thus, if v is a measure of "complexity", it follows that:
• The complexity of a prime depends only on the number of predicates in it.
• The complexity of a sequence is the sum of the complexities of its components, minus the number of components, plus one.
• The complexity of nesting components on a prime F is the complexity of F, plus the sum of the complexities of the components, minus the number of components.
2.7 CONTROL FLOW STRUCTURE
2.7.2.1 McCabe's cyclomatic complexity measure – example:
For a flowgraph with e = 11 arcs, n = 10 nodes and d = 2 predicate nodes:
v(F) = e − n + 2 = 11 − 10 + 2 = 3, or equivalently v(F) = 1 + d = 1 + 2 = 3.
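The two formulas can be cross-checked mechanically from an adjacency-list representation of a flowgraph; the graph below is a hypothetical one (an if/else followed by a loop):

```python
# Flowgraph as an adjacency list: node -> list of successor nodes.
flowgraph = {
    "start": ["p1"],
    "p1": ["a", "b"],      # predicate node: if/else decision
    "a": ["p2"],
    "b": ["p2"],
    "p2": ["c", "stop"],   # predicate node: loop test
    "c": ["p2"],           # loop body returns to the test
    "stop": [],
}
n = len(flowgraph)                                        # nodes
e = sum(len(succs) for succs in flowgraph.values())       # arcs
d = sum(1 for succs in flowgraph.values() if len(succs) > 1)  # predicates

v_edges = e - n + 2        # v(F) = e - n + 2
v_predicates = 1 + d       # v(F) = 1 + d
```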
2.7 CONTROL FLOW STRUCTURE
2.7.2.2 McCabe's essential complexity measure:
• McCabe also proposed a measure to capture the overall level of structuredness in a program.
• For a program with flowgraph F, the essential complexity is
ev(F) = v(F) − m,
where m is the number of proper one-entry, one-exit subflowgraphs of F.
• Example: with m = 1 and v(F) = 5, ev(F) = 5 − 1 = 4.
2.7 CONTROL FLOW STRUCTURE
2.7.3 Test coverage measures:
• The structure of a module is related to the difficulty found in testing it.
• Given a program P produced for a specification S, we test P with input i and check whether the output P(i) satisfies the specification.
• We define test cases as pairs (i, S(i)).
Example: P is a program for the following specification S of exam scores:
• For a score under 45, the program outputs "fails".
• For a score between 45 & 80, the program outputs "pass".
• For a score above 80, the program outputs "pass with distinction".
• Any input other than a numerical value produces "error".
Test cases: (40, "fails"), (60, "pass"), (90, "pass with distinction"), ("fifty", "error").
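A sketch of a program P for this specification, together with the (i, S(i)) test cases; the handling of the boundary scores 45 and 80 is an assumption, since the specification leaves "between" ambiguous:

```python
def grade(score):
    """Program P for the exam-score specification S."""
    try:
        s = float(score)
    except (TypeError, ValueError):
        return "error"                   # non-numeric input
    if s < 45:
        return "fails"
    if s <= 80:                          # assumed: 45 and 80 count as "pass"
        return "pass"
    return "pass with distinction"

# Test cases as (i, S(i)) pairs, from the slide above.
cases = [(40, "fails"), (60, "pass"), (90, "pass with distinction"),
         ("fifty", "error")]
results = [grade(i) == expected for i, expected in cases]
```

Each pair records both the input and the expected output, so a test run is simply checking that P agrees with S on every case.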
2.7.3 Test coverage measures (cont.):
• Test strategies:
• Black-box (closed-box) testing: test cases are derived from the specification/requirements, without reference to the code or its structure.
• White-box (open-box) testing: test cases are based on knowledge of the internal program structure.
• Limitations of white-box testing:
• An infeasible path is a program path that cannot be executed for any input.
• Covering paths does not guarantee adequate software testing.
• Knowing the set of paths that satisfies a strategy does not tell us how to create the test cases that match those paths.
• Minimum number of test cases:
• It is important to know the minimum number of test cases needed to satisfy a strategy. It helps in planning the testing effort, in generating data for each test case, and in understanding the time testing will take.
2.8 MODULARITY & INFORMATION FLOW
2.8.1 Model of modularity & information flow:
• A module is a contiguous sequence of program statements, bounded by boundary elements, having an aggregate identifier.
• More loosely, a module is any object at a given level of abstraction.
• For inter-modular attributes, we build models that capture the necessary information about the relationships between modules.
• We need not know the fine details of the design; instead of variables, we need to know which modules call which other modules (the module call graph).
For intra-modular attributes, we consider models that capture the relevant details about information flow inside a module.
2.8.2 Global modularity:
• Global modularity is difficult to define because there are different views of what modularity means.
• For example, we can use average module length as a measure of global modularity, by examining the mean length of all modules in the system. Module length is on a ratio scale.
• Other candidate measures: M1 = modules/procedures, M2 = modules/variables.
• Both Hausen's and Boehm's observations suggest that we focus first on specific aspects of modularity and then construct more general models from them.
2.8.3 Morphology:
• Morphology refers to the overall "shape" of the system structure when it is expressed pictorially: each node represents a module, and if one module calls another we connect the two with an edge.
• It is used to view the design components & structure of a system, and to analyze which attributes make a good design.
• Morphological characteristics include:
• Size – measured as the number of nodes, the number of edges, or a combination of these.
• Depth – measured as the length of the longest path from the root node to a leaf node.
• Width – measured as the maximum number of nodes at any one level.
• Edge-to-node ratio – a connectivity density measure.
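These morphological characteristics can be computed from a module call graph; the design below is hypothetical:

```python
# Module call graph (tree-shaped here): module -> modules it calls.
calls = {
    "main": ["input", "compute", "output"],
    "input": [],
    "compute": ["helper1", "helper2"],
    "helper1": [],
    "helper2": [],
    "output": [],
}

size_nodes = len(calls)
size_edges = sum(len(callees) for callees in calls.values())

def depth(module):
    """Length of the longest path (in edges) from module to a leaf."""
    if not calls[module]:
        return 0
    return 1 + max(depth(c) for c in calls[module])

# Width: maximum number of modules on any one level, by level-order sweep.
level, width = ["main"], 1
while level:
    width = max(width, len(level))
    level = [c for m in level for c in calls[m]]

edge_to_node_ratio = size_edges / size_nodes
```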
2.8.4 Tree impurity:
• Tree impurity tells us how far a given graph G deviates from being a tree.
• The tree impurity of G increases as the difference between G and its spanning subtree G′ increases.
Tree impurity – graph terminology:
• A graph is connected if there is a path between each pair of nodes in the graph.
• A complete graph Kn is a graph in which every two nodes are connected directly by a single edge (graphs G4, G5 & G6 in the figure).
• A tree is a connected graph with no cycles (graph G1 in the figure).
• For every connected graph G, we can find at least one subgraph that is a tree built on the same nodes as G; such a tree is called a spanning subtree.
• A spanning subtree G′ of a graph G is built on the same nodes as G, but with a minimal subset of the edges, so that any two nodes of G′ are connected by a path.
The formal equation for the tree impurity measure of a connected graph G with n nodes and e edges is:
m(G) = 2(e − n + 1) / ((n − 1)(n − 2))
A spanning tree (e = n − 1) gives m = 0; a complete graph Kn gives m = 1.
For the example graphs: m(G2) = 1/10, m(G3) = 1/5 & m(G4) = 1.
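The tree impurity measure consistent with these example values, m(G) = 2(e − n + 1) / ((n − 1)(n − 2)), can be sketched as a one-line function; the example graphs are assumed to have 6 nodes with 5, 6 and 7 edges respectively:

```python
def tree_impurity(n: int, e: int) -> float:
    """Tree impurity of a connected graph with n nodes and e edges.

    e - n + 1 counts the edges beyond a spanning tree; the denominator
    normalizes by the extra edges a complete graph Kn would have,
    so a tree scores 0 and Kn scores 1.
    """
    return 2 * (e - n + 1) / ((n - 1) * (n - 2))

# A spanning tree on 6 nodes: 5 edges, impurity 0.
# One extra edge: impurity 1/10; two extra: 1/5; K4 (4 nodes, 6 edges): 1.
```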