Identification of Distributed Features in SOA Anis Yousefi, PhD Candidate Department of Computing and Software McMaster University July 30, 2010
Feature Identification [Diagram: a User exercises a feature being used; feature identification provides the Software Engineer with the list of classes implementing it]
Feature Identification in the Literature • Identifying the source code constructs that are activated when exercising a feature • Feature: a functionality offered by a system • Techniques • Textual analysis • Static analysis • Dynamic analysis • Hybrid approaches
Trace-based Dynamic Analysis • Instrumentation (Eclipse TPTP) • Scenario execution • Trace analysis • Pattern mining
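The instrumentation step can be sketched as a probe that records Enter/Leave events with timestamps around each call, in the spirit of TPTP-style method probes. This is a minimal Python sketch; the `probe` decorator, the logical clock, and the example methods are illustrative assumptions, not the actual tooling:

```python
import functools
import itertools

_clock = itertools.count()   # logical timestamps, a stand-in for wall-clock time
trace = []                   # collected Enter/Leave events

def probe(fn):
    """Record Enter/Leave events around each call, TPTP-style."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        trace.append(("Enter", fn.__name__, next(_clock)))
        try:
            return fn(*args, **kwargs)
        finally:
            trace.append(("Leave", fn.__name__, next(_clock)))
    return wrapper

@probe
def m2():
    pass

@probe
def m1():
    m2()

m1()
# trace now holds properly nested Enter/Leave events for m1 and m2
```

Running the instrumented scenario populates `trace` with the same kind of enter/leave/timestamp records that the later merging step consumes.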
Challenges • Trace collection • Scattered implementation of features • Concurrency of events • Feature location • Non-deterministic behavior of features
Summary • What? • Identify the code associated with distributed features in SOA • Identify dynamic feature behavior • How? • Trace-based dynamic analysis of services in SOA • Pattern mining to identify feature behavior • Challenges? • Scattered implementation of features, concurrency of events, non-deterministic behavior of features
Steps of The Proposed Approach • Run feature-oriented scenarios in SOA • Collect and merge execution traces of services • Mine traces to extract patterns • Analyze the patterns to identify feature-specific code and behavior
The Proposed Framework [Diagram: the four steps of the proposed approach]
Step 1: Running Scenarios
Step 2: Merging Distributed Traces • Trace structure: • Enter m1, Timestamp: 0 • Enter m2, Timestamp: 1 • Leave m2, Timestamp: 3 • Enter m3, Timestamp: 4 • Enter m4, Timestamp: 5 • Leave m4, Timestamp: 6 • Leave m3, Timestamp: 7 • …
Step 2: Merging (Contd.) • Problems • Distributed data • Interleaved data • Solution • Building a “block execution tree” • Resolving uncertainties • Before-after analysis • Textual analysis • Frequency analysis • Merging the traces
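The merge-and-build step above can be sketched as follows, assuming service clocks are already synchronized (the uncertainty-resolution analyses — before-after, textual, frequency — are out of scope for this sketch). The names `merge_traces` and `build_tree` are hypothetical:

```python
def merge_traces(traces):
    """Merge per-service event lists into one timeline, ordered by timestamp.

    Assumes timestamps are globally comparable; resolving clock skew is
    what the before-after / textual / frequency analyses are for.
    """
    return sorted((e for t in traces for e in t), key=lambda e: e[2])

def build_tree(events):
    """Fold a flat Enter/Leave event stream into a block execution tree."""
    root = ("root", [])
    stack = [root]
    for kind, method, _ts in events:
        if kind == "Enter":
            node = (method, [])
            stack[-1][1].append(node)   # attach to the current caller
            stack.append(node)
        else:  # "Leave"
            stack.pop()
    return root

# Two services whose interleaved events reproduce the trace on the slide
svc_a = [("Enter", "m1", 0), ("Enter", "m3", 4), ("Leave", "m3", 7), ("Leave", "m1", 8)]
svc_b = [("Enter", "m2", 1), ("Leave", "m2", 3), ("Enter", "m4", 5), ("Leave", "m4", 6)]
tree = build_tree(merge_traces([svc_a, svc_b]))
# tree: root -> m1 -> [m2, m3 -> [m4]]
```

Sorting by timestamp untangles the interleaved data; the stack turns the flat stream back into nested calls.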
Step 3: Mining Frequent Patterns • Traces represent “call graphs” • Mining frequent sub-graphs [Diagram: call graphs of Trace i and Trace j over methods m0–m7, X, Y, Z, W; sub-graphs common to both traces (e.g., m1 → m5 → m6, m7) are extracted as patterns, each annotated with the set of supporting traces, such as {i, j}]
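A drastically simplified stand-in for the frequent sub-graph mining above: treat each trace's call tree as a set of caller→callee edges and keep the edges supported by at least `min_support` traces. This is illustrative only — real frequent sub-graph mining works on larger connected sub-structures, not single edges:

```python
from collections import defaultdict

def call_edges(tree, edges=None):
    """Collect caller->callee edges from a call tree of the form (name, children)."""
    if edges is None:
        edges = set()
    name, children = tree
    for child in children:
        edges.add((name, child[0]))
        call_edges(child, edges)
    return edges

def frequent_edges(trees, min_support=2):
    """Map each edge to the set of trace indices containing it,
    keeping only edges supported by >= min_support traces."""
    support = defaultdict(set)
    for i, tree in enumerate(trees):
        for edge in call_edges(tree):
            support[edge].add(i)
    return {e: s for e, s in support.items() if len(s) >= min_support}

# Two toy traces sharing the m0 -> m1 -> m5 structure
trace_i = ("m0", [("m1", [("m5", [])]), ("m2", [])])
trace_j = ("m0", [("m1", [("m5", []), ("m6", [])])])
patterns = frequent_edges([trace_i, trace_j])
# frequent edges: (m0, m1) and (m1, m5), each supported by traces {0, 1}
```

As on the slide, each surviving pattern carries the set of traces it occurs in, which the next step uses to separate feature-specific patterns from the rest.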
Step 4: Analyzing Patterns • Distinguishing “feature-specific patterns” from “omnipresent patterns” and “noise patterns”
Metrics: Feature Distribution • f: feature • p: pattern • FD(f,p): distribution of feature f over the SOA with regard to pattern p • Sp: services contributing to pattern p • Ms: methods defined in service s • Mp: methods contributing to pattern p • FD(f): distribution of feature f (over all patterns) • Sf: services contributing to the execution of feature f • Mf: methods contributing to the execution of feature f • Pf: patterns for feature f [Worked example: three services Sn, Sk, Sm contribute 100%, 25%, and 50% of their methods respectively, so FD(f) = 1 - 58.3% = 41.7%]
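The worked example can be reproduced by a sketch that assumes FD is one minus the mean per-service method coverage — a reconstruction consistent with the slide's numbers (100%, 25%, 50% averaging to 58.3%), not necessarily the thesis's exact formula:

```python
def feature_distribution(used_methods, service_methods):
    """FD = 1 - mean over contributing services of
    |methods of s that are used| / |methods defined in s|.

    Assumed reconstruction: matches the slide's worked example,
    where low FD means the feature is spread thinly across services.
    """
    coverages = []
    for svc, methods in service_methods.items():
        used = used_methods & methods
        if used:                       # only services that contribute count
            coverages.append(len(used) / len(methods))
    return 1 - sum(coverages) / len(coverages)

# Slide example: Sn uses 2/2 methods (100%), Sk 1/4 (25%), Sm 1/2 (50%)
services = {
    "Sn": {"a", "b"},
    "Sk": {"c", "d", "e", "f"},
    "Sm": {"g", "h"},
}
used = {"a", "b", "c", "g"}
fd = feature_distribution(used, services)
# FD = 1 - (1.00 + 0.25 + 0.50) / 3 = 1 - 0.583 ≈ 0.417
```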
Metrics: Call Frequency • f: feature • p: pattern • CF(f,p): call frequency of feature f with regard to pattern p • Sp: services contributing to pattern p • OPs: interface operations defined in service s • CF(f): call frequency of feature f (over all patterns) • Pf: patterns for feature f [Worked example: interface-operation calls of 2 and 10 across the services sum to CF(f) = 12]
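Assuming CF(f) simply totals the interface-operation calls across a feature's patterns — consistent with the slide's 2 + 10 = 12, though the per-pattern counting may differ in the thesis — a sketch:

```python
def call_frequency(pattern_op_calls):
    """CF(f): total interface-operation calls over all patterns of f.

    Assumed reconstruction: sums per-pattern call counts, matching
    the slide's example of 2 + 10 = 12.
    """
    return sum(pattern_op_calls.values())

# Two patterns of feature f invoking 2 and 10 interface operations
cf = call_frequency({"p1": 2, "p2": 10})
# CF(f) = 12
```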
Metrics: Accuracy • Acc(f): accuracy of the service with regard to feature f • Pf: patterns for feature f • Cf: cases defined on the scenarios • Acc(f) = 1: the service works as expected • Acc(f) > 1: the service considers additional cases • Acc(f) < 1: the service treats some cases equally
Future Work • Abstract feature behavior • Normal vs. alternative behavior • Augment published service description • Improve/Define metrics
Thank You! Anis Yousefi yousea2@mcmaster.ca