Alexander Serebrenik, Serguei Roubtsov, and Mark van den Brand Dn-based Design Quality Comparison of Industrial Java Applications
What makes a good software architecture? Among other things: it should be easy to build additions and make changes. Why?
• Maintenance typically accounts for 75% or more of the total software workload
• Software evolves: an insufficiently flexible architecture causes early system decay, where significant changes become too costly
Goals of good architectural design
• Flexible design: more abstract classes and fewer dependencies between packages
• Make software easier to change when we want to; minimize the changes we have to make
Abstractness/stability balance
• Stable packages: do not depend upon outside classes; have many dependents; should be extensible via inheritance (abstract)
• Unstable packages: depend upon many outside classes; have no dependents; should not be extensible via inheritance (concrete)
Stability is related to the amount of work required to make a change [Martin, 2000].
What does balance mean?
• A good real-life package must be unstable enough to be easily modified
• It must also be generic enough to adapt to evolving requirements with no, or only minimal, modification
Hence: contradictory criteria.
How to measure instability?
• Ca – afferent coupling: measures incoming dependencies
• Ce – efferent coupling: measures outgoing dependencies
Instability = Ce / (Ce + Ca)
Dn – distance from the main sequence
Abstractness = #AbstrClasses / #Classes
Instability = Ce / (Ce + Ca)
Dn = |Abstractness + Instability – 1| [R. Martin, 1994]
[Figure: the Abstractness–Instability plane, both axes from 0 to 1; the main sequence is the line Abstractness + Instability = 1, with the "zone of pain" near (0, 0) and the "zone of uselessness" near (1, 1).]
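To make these formulas concrete, here is a minimal Java sketch; PackageMetrics and its fields are illustrative placeholders, not JDepend's actual API:

```java
// Illustrative sketch of Martin's package metrics; all names are hypothetical.
public class PackageMetrics {
    int ca;              // afferent coupling: incoming dependencies
    int ce;              // efferent coupling: outgoing dependencies
    int classes;         // total number of classes in the package
    int abstractClasses; // abstract classes and interfaces

    double instability() {        // Instability = Ce / (Ce + Ca)
        return (ca + ce == 0) ? 0.0 : (double) ce / (ce + ca);
    }

    double abstractness() {       // Abstractness = #AbstrClasses / #Classes
        return (classes == 0) ? 0.0 : (double) abstractClasses / classes;
    }

    double dn() {                 // Dn = |Abstractness + Instability - 1|
        return Math.abs(abstractness() + instability() - 1.0);
    }
}
```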
Instability: what does "depend" mean? Different definitions yield different counts for the same example: Ce = 4 under [Martin, 1994], 3 under [JDepend], and 1 under [Martin, 2000]. Still: these are per-package numbers — what about the entire architecture?
[Figure: an example dependency diagram counted under the three conventions.]
Two flavors of architecture assessment
• Averages: industrial practice; benchmarking for Java OSS
• Distributions: expectation for threshold-exceeding values
Benchmarks?
• 21 Java OSS
• Different domains: EJB frameworks, entertainment, web-app development tools, machine learning, code analysis, …
• Different ages (2001–2008)
• Size: ≥ 30 original packages
• Development status: focus on Stable/Mature, but alpha, beta and inactive projects are also included
Average Dn
[Chart: average Dn per system on a 0.00–1.00 scale. The benchmark systems fall within the [μ − 2σ; μ + 2σ] band, roughly 0.15–0.25; Dresden OCL Toolkit, at 0.32, exceeds μ + 4σ.]
But… the average is not the whole story!
How are the Dn values distributed? An exponential distribution?
Exponential distribution?
• The exponential density f(x) = λ·e^(–λx) has support [0; ∞), whereas Dn lives on [0; 1]. Hence, we normalize:
g(x) = λ·e^(–λx) / (1 – e^(–λ)), for x ∈ [0; 1]
• and use maximum-likelihood fitting to find λ (see the sketch below)
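A hedged sketch of the fitting step: for this truncated exponential, the maximum-likelihood equation reduces to solving mean(λ) = sample mean, where mean(λ) = 1/λ − 1/(e^λ − 1) decreases from 0.5 towards 0, so a simple bisection suffices. All names below are invented for illustration, and the sketch assumes the sample mean of the Dn values is below 0.5:

```java
import java.util.Arrays;

// Sketch: maximum-likelihood fit of lambda for the [0;1]-truncated
// exponential g(x) = lambda * exp(-lambda * x) / (1 - exp(-lambda)).
public final class TruncatedExpFit {

    // Mean of the truncated exponential; decreases from 0.5 (lambda -> 0)
    // towards 0 (lambda -> infinity).
    static double modelMean(double lambda) {
        return 1.0 / lambda - 1.0 / Math.expm1(lambda); // expm1(x) = e^x - 1
    }

    // MLE of lambda for Dn samples in [0;1]; assumes sample mean < 0.5.
    static double fitLambda(double[] dn) {
        double mean = Arrays.stream(dn).average().getAsDouble();
        double lo = 1e-9, hi = 1e3; // bracket: modelMean is decreasing
        for (int i = 0; i < 100; i++) {
            double mid = 0.5 * (lo + hi);
            if (modelMean(mid) > mean) lo = mid; else hi = mid;
        }
        return 0.5 * (lo + hi);
    }

    public static void main(String[] args) {
        double[] sample = {0.05, 0.10, 0.15, 0.20, 0.30, 0.45}; // made-up Dn values
        System.out.printf("lambda = %.3f%n", fitLambda(sample));
    }
}
```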
Benchmarking: why is λ interesting?
• Higher λ: sharper peaks, thinner tails, smaller averages
• Conversely, λ decreases as Dn increases
Estimate excessively high values!
• How many packages exceed a threshold z?
P(Dn ≥ z) = (e^(–λz) – e^(–λ)) / (1 – e^(–λ))
[Chart: the density g(x) with the tail area beyond z shaded.]
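Given a fitted λ, this tail probability has a closed form, obtained by integrating g(x) from z to 1. A one-method companion to the fitting sketch above, names again illustrative:

```java
// Sketch: P(Dn >= z) = (exp(-lambda*z) - exp(-lambda)) / (1 - exp(-lambda)),
// i.e. the integral of g(x) from z to 1.
static double tailProbability(double lambda, double z) {
    return (Math.exp(-lambda * z) - Math.exp(-lambda)) / -Math.expm1(-lambda);
}
// Example use: tailProbability(fitLambda(sample), 0.6)
```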
Dn ≥ 0.6
• Dresden OCL Toolkit: 23.7% of its packages have Dn ≥ 0.6
[Chart: P(Dn ≥ 0.6) compared across the benchmark systems and Dresden OCL Toolkit.]
Dresden OCL Toolkit: Why? • Started in 1998. • BUT: • We are looking at the Eclipse version! • Version 1.0 – June 2008 • Version 1.1 – December 2008 • Has yet to mature…
Can we compare proprietary systems using Dn? Case study:
• Systems A and B support loan and lease approval business processes
• Both systems employ a three-tier enterprise architecture:
• System A uses the IBM WebSphere application server
• System B uses a custom-made business logic layer implemented on the Tomcat web server
• System A: 249 non-third-party packages; System B: 284 non-third-party packages
Average Dn
[Chart: average Dn on a 0.00–1.00 scale. System A, at 0.186, falls within the benchmark range; System B, at 0.337, exceeds μ + 4σ.]
What about distributions?
[Chart: percentage of packages beyond a threshold, plotted against the Dn threshold value, for an average OSS, System A, and System B.]
Independent assessments
• The dependencies between packages must not form cycles [Martin, 2000]: packages in a cycle (A → B → C → A) must effectively be released and maintained together
• JDepend reports the number of cyclic dependencies: System A – 1, System B – 23
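JDepend computes these counts itself; purely for illustration, here is a self-contained Java sketch of detecting a cycle in a hypothetical package dependency map via depth-first search:

```java
import java.util.*;

// Illustrative only (not JDepend's implementation): does the package
// dependency graph contain a cycle?
public final class CycleCheck {

    static boolean hasCycle(Map<String, List<String>> deps) {
        Set<String> visited = new HashSet<>(), onPath = new HashSet<>();
        for (String p : deps.keySet())
            if (dfs(p, deps, visited, onPath)) return true;
        return false;
    }

    private static boolean dfs(String p, Map<String, List<String>> deps,
                               Set<String> visited, Set<String> onPath) {
        if (onPath.contains(p)) return true; // back edge: cycle found
        if (!visited.add(p)) return false;   // already fully explored
        onPath.add(p);
        for (String q : deps.getOrDefault(p, List.of()))
            if (dfs(q, deps, visited, onPath)) return true;
        onPath.remove(p);
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
            "A", List.of("B"), "B", List.of("C"), "C", List.of("A"));
        System.out.println(hasCycle(deps)); // true: A -> B -> C -> A
    }
}
```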
Layering
[Figure: the reconstructed package layering of System A and System B, with upward ("upcoming") dependencies marked.]
Chidamber and Kemerer OO metrics (the lower the better): System A (white bars) has a higher percentage of low-WMC packages than System B (blue bars); the same holds for LCOM.
[Chart: WMC and LCOM distributions for Systems A and B.]
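For intuition about WMC: with unit weights it reduces to the number of methods in a class (CK tools typically compute it from source or bytecode, often weighting each method by its cyclomatic complexity). A toy Java illustration via reflection:

```java
// Toy approximation of WMC with unit weights: count declared methods.
public final class WmcDemo {
    static int wmc(Class<?> c) {
        return c.getDeclaredMethods().length;
    }

    public static void main(String[] args) {
        System.out.println("WMC(java.lang.String) = " + wmc(String.class));
    }
}
```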
Conclusions
• Java OSS benchmarks for average Dn
• g(x) – a statistical model of the Dn distribution
• Expectation for threshold-exceeding values; applicable to other metrics as well!
• Practical feasibility of Dn-based assessment of industrial applications