This initiative focuses on the application of UML and performance-based risk assessment to identify and rank performance-critical components in software systems, reducing risk and ensuring better performance. The methodology involves automated techniques for V&V, fault-injection-based analysis, and reliability-based risk assessment.
FY 2003 Initiative: IV&V of UML. Less risk, sooner.
WVU UI: Performance-Based Risk Assessment
Hany Ammar, Katerina Goseva-Popstojanova, V. Cortellessa, Ajith Guedem, Kalaivani Appukutty, Walid AbdelMoez, Ahmad Hassan and Rania Elnaggar
LANE Department of Computer Science and Electrical Engineering, West Virginia University
Ali Mili, College of Computing Science, New Jersey Institute of Technology
Outline
• Objectives
• What can we do
• Why UML
• UML & NASA
• Project Overview
• Performance-Based Risk
• Accomplishments
• Future Work
• Publications
Objectives
• Automated techniques for V&V of dynamic specifications
• Performance and timing analysis
• Fault-injection-based analysis
• Reliability-based and performance-based risk assessment
Technologies:
• UML
• Architectures
• Risk assessment methodology
Benefits:
• Find and rank critical use cases, scenarios, components, and connectors
[Slide images: "What keeps satellites working 24/7?" and the ARIANE 5 explosion]
What can we do?
• Estimate performance-based risk on a scenario level
• Identify and rank performance-critical components
• How? Details follow.
Why UML?
• Unified Modeling Language, from Rational Software
• The three amigos: Booch, Rumbaugh, Jacobson
• An international standard in system specification
UML & NASA
• Increasing use at NASA
• A (very) informal survey: a Google search for "rational rose nasa" returns 10,000 hits, with 3 definite projects in just the first ten results
• We use a case study based on the UML specifications of the Earth Observing System
The Case Study
The methodology is illustrated on the Flight Operations Segment (FOS) of NASA's Earth Observing System (EOS)
• NASA's Earth Observing System (EOS) is the first observing system to offer integrated measurements of the Earth's processes
• The Flight Operations Segment (FOS) of EOS is responsible for the planning, scheduling, commanding, and monitoring of the spacecraft and the instruments on board
• We have evaluated the performance-based risk of the Commanding service
Project Overview
FY01
• Developed an automated simulation environment for UML dynamic specifications; suggested an observer component to detect errors
• Conducted performance and timing analysis of the NASA case study
FY02
• Developed a fault-injection methodology; defined a fault model for components at the specification level
• Developed a methodology for architectural-level risk analysis: determine the critical use case list and the critical component/connector list
FY03
• Develop a methodology for performance-based / reliability-based risk assessment
• Validate the risk analysis methodology
Performance-Based Risk
• Performance is a non-functional software attribute that plays a crucial role in application domains ranging from safety-critical systems to e-commerce web sites
• We introduce the concept of performance-based risk: the risk resulting from software failures that originate from behaviors that do not meet performance requirements
• A performance failure is the inability of the system to meet its performance objective(s)
• Performance-based risk is defined as: Probability of performance failure × Severity of the failure
What do we need and what do we get?
• Input
– UML diagrams: use case diagram, sequence diagram, and deployment diagram
– Performance objectives (requirements)
• Output
– Performance-based risk factor of the scenarios modeled as sequence diagrams
– Identification of performance-critical components in the scenario
Performance-Based Risk Methodology
For each use case, for each scenario:
STEP 1 – Assign a demand vector to each "action" in the sequence diagram; build a software execution model
STEP 2 – Add hardware platform characteristics on the deployment diagram; conduct stand-alone analysis
STEP 3 – Devise the workload parameters; build a system execution model; conduct contention-based analysis and estimate the probability of failure as a violation of a performance objective
STEP 4 – Conduct severity analysis and estimate the severity of performance failure for the scenario
STEP 5 – Estimate the performance risk of the scenario; identify high-risk components
STEP 1 – Assign a Demand Vector to Every "Action" in the SD
Build a software execution model from the demand vectors and the SD
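To make the demand-vector idea concrete, here is a minimal sketch in Python; the action names, resource names, and numbers are hypothetical illustrations, not values from the EOS case study.

```python
# Hypothetical demand vectors: for each "action" in the sequence
# diagram, the amount of work (KB processed or transferred) demanded
# from each platform resource. All names and numbers are illustrative.
demand_vectors = {
    "transmit_preplanned_command": {"eoc_cpu": 120.0, "ground_net": 64.0},
    "retransmit_on_failure":       {"icc_cpu":  80.0, "ecom_net":  32.0},
}

def total_demand(steps):
    """Collapse the software execution model to one per-resource
    demand vector by summing over the scenario's processing steps."""
    totals = {}
    for step in steps:
        for resource, amount in demand_vectors[step].items():
            totals[resource] = totals.get(resource, 0.0) + amount
    return totals
```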
The Preplanned Emergency Scenario
• The preplanned emergency scenario comprises two sequence diagrams:
– Preparation of the command groups to be uplinked (SD1)
– Handling of a transmission failure during uplink (SD2)
• For the purpose of illustration, we assume that SD1 is executed once and SD2 (the retransmission) twice before there is a mission failure
The Software Execution Graph (Step 1)
[Figure: software execution graph built from SD1 (Transmit Preplanned Command) and SD2 (Retransmit On Failure), with processing nodes on the EOC, ICC, IDB, and network resources]
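As a sketch, the graph's control structure can be captured by expanding the scenario into its step sequence under the stated assumption (SD1 once, SD2 twice); the step names are hypothetical placeholders for the graph's nodes.

```python
# Scenario structure from the slides: SD1 runs once, then SD2 (the
# retransmission) runs twice before a mission failure is declared.
SD1 = ["transmit_preplanned_command"]
SD2 = ["retransmit_on_failure"]

scenario_steps = SD1 + 2 * SD2
# ['transmit_preplanned_command', 'retransmit_on_failure',
#  'retransmit_on_failure']
```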
STEP 2 – Add Hardware Platform Characteristics on the Deployment Diagram
[Deployment diagram annotated with the following device rates:]
• Spacecraft: ECOM network (25000 μs/KB); Communication Subsystem CPU (0.02 μs/KB)
• Ground network (80 μs/KB)
• EOC CPU (0.0025 μs/KB)
• ICC CPU (0.0025 μs/KB)
• IDB database (60 μs/KB)
Conduct Stand-alone Analysis (Step 2)
• Stand-alone analysis evaluates the completion time of the whole SD as if it were executed on a dedicated hardware platform with a single-user workload (a sketch follows)
• The service time consumed by the steps (as shown in the software execution graph) is 9.949 seconds
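A minimal sketch of the stand-alone computation: multiply each device's demand by its rate from the deployment diagram and sum. The per-device demands below are hypothetical, so the result will not reproduce the 9.949 s figure.

```python
# Device service rates from the deployment diagram (microseconds per KB).
RATES_US_PER_KB = {
    "ecom_net":   25000.0,   # spacecraft ECOM network
    "comm_cpu":   0.02,      # communication subsystem CPU
    "ground_net": 80.0,      # ground network
    "eoc_cpu":    0.0025,
    "icc_cpu":    0.0025,
    "idb_db":     60.0,      # IDB database
}

def standalone_time_seconds(demand_kb):
    """Completion time on a dedicated platform with a single-user
    workload: sum of (demand x rate) over all devices, in seconds."""
    total_us = sum(kb * RATES_US_PER_KB[dev] for dev, kb in demand_kb.items())
    return total_us / 1e6

# Hypothetical per-device demands (KB) for the whole scenario.
print(standalone_time_seconds({"ecom_net": 350.0, "ground_net": 2000.0}))
```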
Asymptotic Bounds and Failure Probability Estimate (Step 3)
[Figure: asymptotic bound analysis plotting response time against the number of users N, with upper bound UB = N·D, lower bound LB following N·D_max (and D for small N), and the response time objective dividing the plot into three zones Z1, Z2, Z3]
• Failure probability (Z1) = 0 (the objective lies above the upper bound)
• Failure probability (Z2) = (UB − Objective) / (UB − LB) = 0.7958 (the objective lies between the bounds)
• Failure probability (Z3) = 1 (the objective lies below the lower bound)
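A sketch of the zone-based estimate. The three zones follow directly from the slide; the linear interpolation used inside zone Z2 is an assumption, since the slide shows only the resulting value (0.7958).

```python
def failure_probability(objective, lower_bound, upper_bound):
    """Estimate P(response time exceeds the objective) from the
    asymptotic bounds. The Z1/Z3 cases follow the slide; the linear
    interpolation in Z2 is an assumption for illustration."""
    if objective >= upper_bound:   # zone Z1: objective beyond N*D
        return 0.0
    if objective <= lower_bound:   # zone Z3: objective below the lower bound
        return 1.0
    # zone Z2: objective falls between the bounds
    return (upper_bound - objective) / (upper_bound - lower_bound)
```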
STEP 4 – Conduct Severity Analysis
• For severity analysis we use Functional Failure Analysis (FFA) based on the UML use case and scenario diagrams
• The inputs to FFA are:
– A list of events of a use case (under a specific scenario)
– A list of guide words
• The output is the severity level (catastrophic, critical, marginal, or minor), presented in tabulated form
The FFA Table for the Emergency Scenario in EOS-FOS (Step 4)
Since we are dealing with performance-based risk, we apply only the guide word "LATE"
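For the risk computation in STEP 5, each severity class must be mapped to a numeric index. The 0.95 value for a catastrophic failure matches the number used in STEP 5; the remaining indices below are assumptions for illustration.

```python
# Severity indices for the FFA classes. Only the catastrophic value
# (0.95) is taken from the slides; the others are assumed.
SEVERITY_INDEX = {
    "catastrophic": 0.95,
    "critical":     0.75,
    "marginal":     0.50,
    "minor":        0.25,
}
```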
STEP 5 – Estimate the Performance Risk
• The performance risk of a scenario is defined as the product of:
– the probability that the system will fail to meet the required performance objective for a given workload (e.g., desired response time), estimated in STEP 3, and
– the severity associated with this performance failure of the system in this scenario, estimated in STEP 4
• Performance-based risk = Probability of performance failure × Severity of the failure = 0.7958 × 0.95 = 0.756
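The computation itself is a one-liner; the two inputs come straight from STEPs 3 and 4.

```python
p_failure = 0.7958   # from STEP 3 (objective falls in zone Z2)
severity  = 0.95     # from STEP 4 (catastrophic, guide word "LATE")

risk = p_failure * severity
print(f"Performance-based risk of the scenario: {risk:.3f}")  # 0.756
```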
Identify High-Risk Components (Step 5)
• Estimate the overall residence time of each component in a given sequence diagram:
– Sum the times of all processing steps that belong to that component in the given scenario
– Normalize by the response time of the sequence diagram
• Components that contribute significantly to the scenario's response time are the high-risk components (see the sketch below)
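A minimal sketch of the component ranking; the step times and the step-to-component mapping below are hypothetical.

```python
def normalized_residence_times(step_times, component_of):
    """Share of the scenario's response time spent in each component:
    sum the processing-step times per component and divide by the
    total. Components with a large share are the high-risk ones."""
    total = sum(step_times.values())
    shares = {}
    for step, seconds in step_times.items():
        comp = component_of[step]
        shares[comp] = shares.get(comp, 0.0) + seconds / total
    return shares

# Hypothetical step times (seconds) and owning components.
times  = {"uplink": 8.75, "verify": 0.16, "log": 0.04}
owners = {"uplink": "ECOM", "verify": "GroundNet", "log": "EOC"}
print(normalized_residence_times(times, owners))
```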
Identify High-Risk Components (Step 5)
• The ground (GN) and space (ECOM) networks are the most critical components
• The service times of the other components are significantly smaller than those of the GN and ECOM network components and hence are not visible on the graph
Identify High-Risk Components (Step 5)
• This 3-D graph shows the components on the x-axis, scenarios on the y-axis, and normalized service times on the z-axis
• The graph is based on a different case study and is presented here for illustration
Accomplishments
• Developed analytical techniques and a methodology for reliability-based risk analysis
– A lightweight approach based on static analysis of dynamic specifications was developed and automated
– A tool was presented at the ICSE Tools session
– Applied the methodology and tool to the NASA HCS-ISS case study
• Developed analytical techniques and a methodology for performance-based risk analysis
– Applied the methodology to the NASA EOS case study
Publications
• H. H. Ammar, T. Nikzadeh, and J. B. Dugan, "Risk Assessment of Software Systems Specifications," IEEE Transactions on Reliability, to appear September 2001
• H. H. Ammar, S. M. Yacoub, and A. Ibrahim, "A Fault Model for Fault Injection Analysis of Dynamic UML Specifications," International Symposium on Software Reliability Engineering, IEEE Computer Society, November 2001
• R. M. Elnaggar, V. Cortellessa, and H. Ammar, "A UML-based Architectural Model for Timing and Performance Analyses of GSM Radio Subsystem," 5th World Multi-Conference on Systemics, Cybernetics and Informatics, July 2001; received Best Paper Award
• A. Hassan, W. M. Abdelmoez, R. M. Elnaggar, and H. H. Ammar, "An Approach to Measure the Quality of Software Designs from UML Specifications," 5th World Multi-Conference on Systemics, Cybernetics and Informatics and the 7th International Conference on Information Systems, Analysis and Synthesis (ISAS), July 2001
• H. H. Ammar, V. Cortellessa, and A. Ibrahim, "Modeling Resources in a UML-based Simulative Environment," ACS/IEEE International Conference on Computer Systems and Applications (AICCSA 2001), Beirut, Lebanon, June 26-29, 2001
• A. Ibrahim, S. M. Yacoub, and H. H. Ammar, "Architectural-Level Risk Analysis for UML Dynamic Specifications," Proceedings of the 9th International Conference on Software Quality Management (SQM 2001), Loughborough University, England, April 18-20, 2001, pp. 179-190
Publications (continued)
• T. Wang, A. Hassan, A. Guedem, W. Abdelmoez, K. Goseva-Popstojanova, and H. Ammar, "Architectural Level Risk Assessment Tool Based on UML Specifications," 25th International Conference on Software Engineering, Portland, Oregon, May 3-10, 2003
• A. Hassan, K. Goseva-Popstojanova, and H. Ammar, "Methodology for Architecture Level Hazard Analysis," ACS/IEEE International Conference on Computer Systems and Applications (AICCSA 03), Tunis, Tunisia, July 14-18, 2003
• A. Hassan, W. Abdelmoez, A. Guedem, K. Apputkutty, K. Goseva-Popstojanova, and H. Ammar, "Severity Analysis at Architectural Level Based on UML Diagrams," 21st International System Safety Conference, Ottawa, Ontario, Canada, August 4-8, 2003
• K. Goseva-Popstojanova, A. Hassan, A. Guedem, W. Abdelmoez, D. Nassar, H. Ammar, and A. Mili, "Architectural-Level Risk Analysis Using UML," IEEE Transactions on Software Engineering (accepted for publication)
Papers are available at http://www.csee.wvu.edu/~ammar/papers/2001