DISCOM Quality Attribute Impact: A Case Study in OO-Design
Lewis Sykalski
E-mail: lsykalski@smu.edu
Background
• DISCOM is a tool to record, play back, and print out Distributed Interactive Simulation (DIS) PDUs -- UDP datagrams broadcast on the network (a minimal receive sketch follows below)
• DISCOM was rewritten from a structured approach to an object-oriented approach a year ago
• DISCOM therefore presents a unique opportunity to analyze quality attributes under both a structured approach and an object-oriented approach
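To make the recording side concrete, here is a minimal sketch of a UDP listener that captures DIS datagrams. This is not DISCOM's actual code; the port number, buffer size, and capture count are assumptions chosen for illustration.

```cpp
// Minimal sketch (assumed details): receive raw DIS PDUs from a UDP port
// and report them, as a recorder like DISCOM would. Port 3000 is a common
// DIS default but is an assumption here, not DISCOM's documented setting.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);           // UDP socket
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);            // listen on any interface
    addr.sin_port = htons(3000);                         // assumed DIS port
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0) {
        perror("bind"); return 1;
    }

    unsigned char buf[8192];                             // one datagram may carry several PDUs
    for (int i = 0; i < 100; ++i) {                      // capture 100 datagrams, then stop
        ssize_t n = recvfrom(sock, buf, sizeof buf, 0, nullptr, nullptr);
        if (n < 0) { perror("recvfrom"); break; }
        // Byte 2 of a DIS PDU header is the PDU type (e.g., 1 = Entity State).
        std::printf("datagram of %zd bytes, first PDU type = %u\n", n, buf[2]);
    }
    close(sock);
    return 0;
}
```

In the real tool, each received datagram would be timestamped and written to a log file for later playback rather than just printed.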
Quality Attributes to Analyze
• Performance/Efficiency
• Maintainability
• Reliability
• Testability
• Reusability
Performance/Efficiency
Definition:
• The response time, utilization, and throughput behavior of the system, as well as how economically it fulfills its purpose (without waste of resources)
Hypothesis:
• The structured approach is marginally faster
Measurement Approach:
• Profile both versions against the same log file with the VTune Performance Analyzer
• Analyze the cost of dynamic allocation
• Observe memory usage
VTune Results
1-minute playback file consisting of ~10,000 packets / 59,000 PDUs
Table 1. Performance Metrics
With more effort…
The cost of dynamic allocation, $C_{alloc}$, is given by:
$$C_{alloc} = N \cdot depth \cdot (s_c + s_d)$$
The cost of context switching, $C_{switch}$, is given by:
$$C_{switch} = N \cdot depth \cdot (t_w + t_u)$$
where $N$ is the number of calls, $depth$ is the number of contained objects that must be created when the class is created, $s_c$ and $s_d$ are the service times to create and to destroy an object respectively, and $t_w$ and $t_u$ are the winding and unwinding times to push to and pop from the call stack respectively.
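A worked instance with hypothetical timings (the call count is borrowed from the 59,000-PDU log file above; the depth and service times are assumptions for illustration):

$$C_{alloc} = 59{,}000 \times 3 \times (0.5\,\mu\text{s} + 0.5\,\mu\text{s}) = 177{,}000\,\mu\text{s} \approx 0.18\,\text{s}$$

Even modest per-object service times thus accumulate measurably over a one-minute playback when every PDU triggers the creation of contained objects.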
Efficiency Metrics
1-minute playback file consisting of ~10,000 packets / 59,000 PDUs; Extended = the 1-minute playback looped 10x
* May be indicative of a memory leak
Table 2. Efficiency Metrics
Performance/Efficiency Conclusion
• Performance:
• Close to a 2X slow-down in the OO code
• Efficiency:
• OO requires more memory
• Probably even more so in a pure-OO language (like Java), where every object must be heap-allocated and there is no concept of raw pointers
• More prone to performance hazards (e.g., excessive dynamic allocation of objects)
Maintainability/Modifiability
Definition:
• The extent to which software facilitates updates to satisfy new requirements
Hypothesis:
• The object-oriented approach is more easily maintainable
Measurement Approach:
• Track resolution time for Discrepancy Reports
• Track change time for Change Requests
• Track the prevalence of changes from affected-file listings
• Measure cyclomatic complexity
Cyclomatic Complexity
• Cyclomatic Complexity (CC) directly measures the number of linearly independent paths through a program's source code
• Equivalent to counting control-flow-graph paths (ifs, loops, etc.); an illustration follows below
• Weighted Methods per Class (WMC) is an OO metric that rolls up the CCs of all methods in a class
• A tool called CodeAnalyzerPro was used to measure CC, WMC, and SLOC
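As a concrete illustration of how CC is counted on a single function (a hypothetical helper, not DISCOM code): each if, loop, and short-circuit operator adds one decision point, and CC = decisions + 1.

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical helper. Decision points: if (1), for (1), if (1), && (1)
// -> 4 decisions, so CC = 4 + 1 = 5.
int countInterestingPdus(const unsigned char* buf, std::size_t len) {
    if (buf == nullptr) return 0;                      // decision 1
    int count = 0;
    // decision 2: loop over 12-byte DIS headers, laid back to back here
    for (std::size_t i = 0; i + 12 <= len; i += 12) {
        unsigned type = buf[i + 2];                    // PDU type byte
        if (type != 0 && type < 72) ++count;           // decisions 3 and 4
    }
    return count;
}

int main() {
    unsigned char stream[24] = {0};
    stream[2]  = 1;                                    // first header: Entity State
    stream[14] = 99;                                   // second header: out of range
    std::printf("CC of countInterestingPdus = 5; matches = %d\n",
                countInterestingPdus(stream, sizeof stream));
    return 0;
}
```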
Modifiability Metrics
Table 3. Modifiability Metrics

        Paradigm     Max CC   Max WMC   SLOC
UI      Procedural       92   N/A        8643
        OO               14    93        5235
DIS     Procedural      147   N/A       42423
        OO               17   185       23563
Total   Procedural      147   N/A       51066
        OO               17   185       28798
Modifiability Metrics (Cont.)
Table 4. Discrepancy Metrics
Table 5. Change Metrics
Modifiability Conclusion
• The OO paradigm appears to be more understandable, as indicated by its lower per-method Cyclomatic Complexity and lower SLOC count
• Change Request / Discrepancy Report tracking data seems to support this conclusion
Reliability
Definition:
• The extent to which the software can be expected to perform its intended functions satisfactorily
Hypothesis:
• Reliability growth is steeper with the OO design methodology
Measurement Approach:
• Track Discrepancy Reports
• Model reliability from event-simulation log files using CASRE to generate reliability profiles
Reliability Growth Modeling Setbacks
• Reliability is generally very high, with many hidden problems that go unrecorded
• Furthermore, I could not find a meaningful unclassified data set for the procedural paradigm; while I could have gone ahead with evaluating OO reliability growth, there would have been nothing to compare it to
Reliability Metrics
Table 6. Reliability Metrics
Table 7. Reliability Growth Metrics
Reliability Conclusion
• OO exhibited higher reliability growth after integration deployment
• This isn't conclusive proof to support a clear determination on the Reliability attribute
• Exception handling (try/catch) may help with discrepancy resolution as well as with limiting severity; a sketch follows below
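A minimal sketch of the try/catch point above (hypothetical names, not DISCOM's code): catching a malformed-PDU error lets the tool log and skip the bad packet instead of crashing, which lowers the severity of the resulting discrepancy report.

```cpp
#include <iostream>
#include <stdexcept>
#include <vector>

// Hypothetical parse routine: throws instead of corrupting state.
void parsePdu(const std::vector<unsigned char>& datagram) {
    if (datagram.size() < 12)                    // DIS header is 12 bytes
        throw std::runtime_error("truncated PDU header");
    // ... decode fields ...
}

void playbackLoop(const std::vector<std::vector<unsigned char>>& log) {
    for (const auto& datagram : log) {
        try {
            parsePdu(datagram);
        } catch (const std::exception& e) {
            // One bad packet becomes a logged, low-severity discrepancy
            // rather than a crash of the whole playback session.
            std::cerr << "skipping malformed PDU: " << e.what() << '\n';
        }
    }
}

int main() {
    std::vector<std::vector<unsigned char>> log = {
        std::vector<unsigned char>(16, 0),       // plausible-length packet
        std::vector<unsigned char>(4, 0)         // truncated packet -> caught
    };
    playbackLoop(log);
    return 0;
}
```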
Testability
Definition:
• The extent to which software facilitates the establishment of acceptance criteria and supports evaluation of its performance
Hypothesis:
• From a class-level unit-testing standpoint, the OO design should prevail; at the system level, or for debugging issues, the structured design should
Measurement Approach:
• Measure cyclomatic complexity
• Perform unit testing and document the ease of doing so qualitatively
Cyclomatic Complexity Revisited

        Paradigm     Max CC   Max WMC   SLOC
UI      Procedural       92   N/A        8643
        OO               14    93        5235
DIS     Procedural      147   N/A       42423
        OO               17   185       23563
Total   Procedural      147   N/A       51066
        OO               17   185       28798
Testability Conclusion
• The object-oriented paradigm is better suited for testability:
• More logical coherence
• Fewer paths and more (smaller) functions, making unit-test hook insertion easier (see the sketch below)
• Lower Cyclomatic Complexity within functions
• Debugging ease is similar across paradigms
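A minimal class-level unit-test sketch, using a hypothetical PDU wrapper class and plain asserts rather than the project's actual test harness; small, single-purpose classes like this are what make OO unit testing straightforward.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical PDU header wrapper (not DISCOM's class).
class PduHeader {
public:
    explicit PduHeader(const std::vector<std::uint8_t>& bytes) : bytes_(bytes) {}
    bool valid() const { return bytes_.size() >= 12; }   // DIS header is 12 bytes
    std::uint8_t type() const { return bytes_.at(2); }   // PDU type byte
private:
    std::vector<std::uint8_t> bytes_;
};

int main() {
    std::vector<std::uint8_t> raw(12, 0);
    raw[2] = 1;                                          // 1 = Entity State in DIS
    PduHeader hdr(raw);
    assert(hdr.valid());
    assert(hdr.type() == 1);

    PduHeader truncated(std::vector<std::uint8_t>(4, 0));
    assert(!truncated.valid());
    return 0;                                            // all assertions passed
}
```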
Reusability
Definition:
• The likelihood that a segment of source code can be used again to add new functionality with little or no modification
Hypothesis:
• The OO approach is more reusable
Measurement Approach:
• Observe opportunities available and opportunities taken for reuse, at both the object level and the system level
Reusability Approach
• The first step was to survey both the reuse opportunities available and the reuse opportunities taken in the labs.
• Next, reusability was gauged by grafting a simplistic Create Entity PDU into both an application with an existing DIS interface and one without, using the code from both the procedural application and the object-oriented application.
• Measurement was qualitative, primarily from a time perspective, as SLOC modified is not necessarily indicative of the ease of reuse.
Reusability Metrics
Table 8. Reusability Metrics
Reusability Conclusion
Existing Applications: For applications with an existing interface, reuse is moderately smoother if one begins with code from the procedural paradigm. This is due primarily, in my opinion, to ease of understandability: with object-oriented code, one has to understand two designs, both the source and the target, in order to adapt the code to fit the need.
New Applications: For applications without an existing interface, reuse is smoother if one begins with the object-oriented paradigm, due to the enhanced modularity and portability of the approach.
References
• Smith, Connie U., and Lloyd G. Williams. "Software Performance AntiPatterns."
• Elish, Mahmoud O., and David Rine. "Indicators of Structural Stability of Object-Oriented Designs: A Case Study."
• Goth, Greg. "Has Object-Oriented Programming Delivered?"
• Prechelt, Lutz, Walter F. Tichy, et al. "A Controlled Experiment in Maintenance Comparing Design Patterns to Simpler Solutions."