Software Performance Evaluation
Ioan Jurca (“Politehnica” University of Timisoara, Romania)
Cosmina Chişe (“Politehnica” University of Timisoara, Romania)
Contents
• Introduction
• OMG Standards for MDA
• UML Performance Related Profiles (SPT, QoS & FT, MARTE)
• Analytical Approaches
• Simulation Approaches
• References
Introduction (1)
• Software performance should be evaluated as early as possible in the development cycle, especially for distributed and mobile applications
• Experience has shown that most performance problems are due to design decisions
• It is desirable to assess performance even during system/software requirements definition
• Performance evaluation involves obtaining performance parameters such as response time, throughput, and resource utilization
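To make these parameters concrete, the following minimal Python sketch derives throughput, utilization, and mean response time from basic operational laws (the utilization law and Little's law); all measurement values are made up for illustration.

# Minimal sketch of the operational laws (illustrative values, not from a real system).
# Utilization law:  U = X * S   (throughput times mean service demand)
# Little's law:     N = X * R   (mean population equals throughput times response time)

completions = 1200          # jobs completed during the observation period (hypothetical)
observation_time = 60.0     # length of the observation period, in seconds
busy_time = 45.0            # time the server was busy, in seconds
mean_jobs_in_system = 8.0   # average number of jobs in the system (hypothetical)

throughput = completions / observation_time       # X = C / T   [jobs/s]
utilization = busy_time / observation_time        # U = B / T
service_demand = busy_time / completions          # S = B / C   [s per job]
response_time = mean_jobs_in_system / throughput  # R = N / X   (Little's law)

print(f"throughput    = {throughput:.2f} jobs/s")
print(f"utilization   = {utilization:.2%}")
print(f"service time  = {service_demand * 1000:.2f} ms/job")
print(f"response time = {response_time:.3f} s")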
Introduction (2)
• Since complete information is not available during the early stages of development, models must be used
• A Model Driven Development (MDD) approach is recommended for software development
• System requirements, high-level architecture, and design are expressed using UML diagrams
• Performance is a ‘non-functional’ requirement
• UML diagrams are annotated with performance information and/or requirements
Introduction (3)
• Several standards and extensions have been defined by the OMG (Object Management Group) in order to provide a uniform platform for designing both general-purpose and specific applications
• Research in software performance evaluation has recently concentrated on the automatic generation of performance models from software artifacts
• Successful methodologies have been created to transform annotated software models into performance models: queueing networks, Petri nets, simulation models
OMG Standards for MDA (1)
• The MDD approach means building high-level system models first and then refining them until the implementation is finalized
• OMG addresses this notion with MDA (Model-Driven Architecture). MDA separates application logic from its underlying implementation technology; models are treated as the primary artifacts of development
• Platform-independent models (PIMs), defined by means of UML and other OMG standards, can be translated into platform-specific models (PSMs) implemented in various technologies
OMG Standards for MDA (2)
• Meta Object Facility (MOF) provides the platform-independent metadata management foundation for MDA. It specifies an abstract syntax to define and integrate a family of metamodels using simple UML class modeling concepts
• The key modeling concepts are Classifier and Instance (Class and Object), and the ability to navigate from an instance to its metaobject (its classifier)
• Essential MOF (EMOF) is the subset of MOF that closely corresponds to the facilities found in object-oriented programming languages and XML. The value of EMOF is that it provides a straightforward framework for mapping MOF models to implementations
OMG Standards for MDA (4)
• The classical framework for metamodeling in MOF 1.4 is based on an architecture with four metalayers:
  • The information layer comprises the data that the user wishes to describe (User Object layer)
  • The model layer comprises the metadata that describes data in the information layer. Metadata is informally aggregated as models (User Model layer)
  • The metamodel layer comprises the descriptions (i.e., meta-metadata) that define the structure and semantics of metadata. Meta-metadata is informally aggregated as metamodels. A metamodel is an “abstract language” for describing different kinds of data (UML layer)
  • The meta-metamodel layer comprises the description of the structure and semantics of meta-metadata. In other words, it is the “abstract language” for defining different kinds of metadata (MOF layer)
UML Profile for Schedulability, Performance, and Time – SPT (1)
UML Profile for Schedulability, Performance, and Time – SPT (2)
• A performance context specifies scenarios: dynamic situations of system behavior under different workloads, involving resources
• A scenario is an ordered sequence of steps, which can also be subject to parallel execution (forks and joins)
• Each step executes on a host (processing resource) and may use passive resources
• A workload specifies the intensity of demand for a certain scenario, as well as the required or estimated response time
• The SPT profile offers a set of UML extensions for performance analysis – they all have the prefix “PA”
UML Profile for Schedulability, Performance, and Time – SPT (3)
• Stereotypes have been defined for items that appear explicitly in UML models: PAcontext, PAclosedLoad, PAopenLoad, PAstep, PAhost (processing resource), PAresource (passive resource)
• Each stereotype has associated tagged values that provide specific performance information and can also include analysis results, if the analysis method provides feedback mechanisms
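To make the PA annotations more tangible, the sketch below shows one possible in-memory representation of a PAhost/PAstep annotation pair in Python. The field names and values are illustrative assumptions; they mimic the kind of tagged values SPT attaches to stereotyped elements but do not reproduce the profile's official Tag Value Language syntax.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PAhostAnnotation:
    """Illustrative stand-in for a <<PAhost>> processing resource."""
    name: str
    scheduling_policy: str = "FIFO"      # e.g. FIFO, priority-preemptive
    utilization: Optional[float] = None  # filled in by the analysis tool (feedback)

@dataclass
class PAstepAnnotation:
    """Illustrative stand-in for a <<PAstep>> scenario step."""
    name: str
    host: PAhostAnnotation
    demand_ms: float                     # assumed mean host demand of the step
    probability: float = 1.0             # branch probability, if the step follows a branch
    results: dict = field(default_factory=dict)  # analysis results fed back into the model

cpu = PAhostAnnotation(name="AppServerCPU")
step = PAstepAnnotation(name="processOrder", host=cpu, demand_ms=4.5)
print(step)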
UML Profiles for Quality of Service and for Fault Tolerance (QoS&FT)
• The QoS Modeling Framework provides support for high-performance systems (real-time, fault-tolerant), which need to guarantee that certain requirements are met
• Such systems have non-functional properties (NFPs), which are difficult to predict
• These properties need to be evaluated, and feedback must be provided at the system architecture level
• The FT Modeling Framework is concerned with risk assessment and management; the description of fault-tolerant systems, based on replication, is supported by this profile
• By using QoS&FT, system models can be annotated with QoS Constraints, based on QoS Catalogs, which are instantiated for each application from pre-defined templates
UML MARTE (Modeling and Analysis of Real-Time and Embedded Systems) Profile (1)
• MARTE comes as a replacement for the UML Profile for SPT
• SPT provides a grammar for powerful concepts (symbolic variables and time expressions), but does not support user-defined NFPs and specialized domains
• The MARTE NFP modeling framework reuses structural concepts from QoS&FT, but reduces the usage complexity; it also introduces VSL (Value Specification Language), which extends and formalizes concepts from TVL (Tag Value Language, defined in SPT)
• The profile provides support for modeling systems from specification to detailed design, and also addresses model-based analysis
UML MARTE Profile (2)
• Benefits of using this profile:
  • Providing a common way of modeling both hardware and software aspects of an RTES in order to improve communication between developers
  • Enabling interoperability between tools used for different stages of development
  • Fostering the construction of models that may be used to make quantitative predictions for real-time and embedded features of systems
• There are two resource modeling packages:
  • Generic Resource Modeling (GRM) – offers the concepts necessary to model a general platform for executing real-time embedded applications
  • Detailed Resource Modeling (DRM) – specific modeling artifacts to describe both software and hardware execution supports; it specializes the generic concepts offered by GRM
UML MARTE Profile (4)
• The basis for all analysis packages is the Generic Quantitative Analysis Modeling (GQAM) package, which defines basic modeling concepts and NFPs
Analytical Approaches (1)
• Analytical performance models include Queueing Networks (QN) and their extensions, Extended Queueing Networks (EQN) and Layered Queueing Networks (LQN), as well as Stochastic Timed Petri Nets (STPN) and Stochastic Process Algebras (SPA)
• The first approach to integrating performance analysis into the software development process was described by Smith, who laid the basis for SPE. In her approach, system descriptions consist of two models: the software execution model and the system execution model
• The software execution model shows system behavior and uses execution graphs (EGs) to represent workload scenarios
• The system execution model describes system structure, represented as queueing networks
• A supporting tool, SPE·ED, has also been developed; it includes simulation of the QN model
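As a minimal example of the kind of result an analytical queueing model yields, the Python sketch below evaluates a single M/M/1 service center with assumed arrival and service rates; real QN/LQN models compose many such centers, but the basic quantities are the same.

# M/M/1 service center: Poisson arrivals (rate lam), exponential service (rate mu).
def mm1_metrics(lam: float, mu: float) -> dict:
    if lam >= mu:
        raise ValueError("Unstable queue: arrival rate must be below service rate.")
    rho = lam / mu             # utilization
    n = rho / (1.0 - rho)      # mean number of jobs in the system
    r = 1.0 / (mu - lam)       # mean response time (queueing + service)
    return {"utilization": rho, "mean_jobs": n, "response_time": r}

# Hypothetical parameters: 40 requests/s arriving at a server that handles 50 requests/s.
print(mm1_metrics(lam=40.0, mu=50.0))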
Analytical Approaches (2)
• A first attempt to use SPT notations shows how they can be inserted into Use Case and Statechart diagrams
• The role of Activity diagrams in refining the activities of Statecharts was also explored
• Since several different performance models can be derived from the same specification, the idea of a common representation emerged: this common intermediate representation could then be transformed into any kind of analytical model
• An important intermediate model is the Core Scenario Model (CSM); it captures the performance information stored in UML diagrams and SPT profile data. The desired performance model is obtained by two transformations: UML model to CSM (U2C) and CSM to performance model (C2P)
Analytical Approaches (4)
• Scenario flows are described as ordered sequences of steps and PathConnection objects, which connect pairs of steps
• Two consecutive steps are connected by a Sequence object, by Branch and Merge objects, or by Fork and Join objects
• Each step is executed by an active resource; steps can also use passive resources
• Several step subtypes are distinguished, such as Start, End, ResourceAcquire, and ResourceRelease; the first step of a scenario (Start) may be associated with a workload. This kind of subtyping improves model checking and performance model generation
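One possible way to represent this scenario structure in code is sketched below in Python; the class and field names are illustrative assumptions and do not reproduce the exact CSM metamodel.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Resource:
    name: str
    active: bool = True          # active (processing) vs passive resource

@dataclass
class Step:
    name: str
    kind: str = "Step"           # e.g. "Start", "End", "ResourceAcquire", "ResourceRelease"
    host: Optional[Resource] = None
    demand_ms: float = 0.0       # assumed host demand of the step

@dataclass
class PathConnection:
    kind: str                    # "Sequence", "Branch", "Merge", "Fork", "Join"
    sources: List[Step] = field(default_factory=list)
    targets: List[Step] = field(default_factory=list)

# A trivial three-step scenario on one processing resource.
cpu = Resource("WebServerCPU")
start = Step("start", kind="Start", host=cpu)
work = Step("handleRequest", host=cpu, demand_ms=3.0)
end = Step("end", kind="End", host=cpu)
scenario = [PathConnection("Sequence", [start], [work]),
            PathConnection("Sequence", [work], [end])]
print(scenario)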
Analytical Approaches (5)
• Each class in the meta-model has attributes that correspond to tagged values in the UML SPT profile
• The first phase, U2C, converts Deployment and Activity diagrams into CSM: the input consists of XMI files representing the UML model; the CSM internal representation is a DOM (Document Object Model) tree that can be exported in XML format
• The second phase, C2P, transforms the CSM into a performance model using an algorithm that converts the CSM into an LQN model. This algorithm has been implemented in the UCM2LQN tool
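A heavily simplified sketch of the U2C idea is given below: it parses an XMI export with Python's xml.dom.minidom, walks the resulting DOM tree, and collects elements whose tag names look like activity nodes. The tag filter and the file name are assumptions made only for illustration; the real transformation follows the CSM metamodel and the SPT tagged values.

import xml.dom.minidom

def extract_action_names(xmi_path: str) -> list:
    """Collect candidate activity-node names from an XMI export (illustrative filter)."""
    dom = xml.dom.minidom.parse(xmi_path)          # the internal representation is a DOM tree
    names = []
    for element in dom.getElementsByTagName("*"):  # walk every element in the document
        if "Action" in element.tagName or "ActivityNode" in element.tagName:
            name = element.getAttribute("name")
            if name:
                names.append(name)
    return names

# Usage (hypothetical file exported from a UML tool):
# print(extract_action_names("order_processing.xmi"))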
Analytical Approaches (6)
• The next step towards flexibility in deriving performance models is building frameworks, which allow the integration of various UML tools and notations
• Performance by Unified Model Analysis (PUMA) is based on CSM and allows the integration of various tools, both to facilitate CSM extraction from the design model (XMI exports of UML diagrams or UCMs) and to translate the CSM into performance models (LQN, Petri nets, QN)
Simulation Approaches (1)
• Simulation approaches convert the system design into an executable form and obtain performance results by running the simulation
• Drawbacks:
  • the simulation may be time-consuming until the results converge
  • the modeler may have to create a rather complex simulation model
• Advantage: any kind of system can be simulated, whereas not every system can be modeled analytically
• Most simulators are based on Discrete Event Simulation (DES), meaning that system state changes are caused by events that occur at discrete moments in time. There are two approaches: event-oriented and process-oriented
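As a small illustration of the event-oriented style of DES, the Python sketch below keeps a priority queue of timestamped events and processes them in time order; the single FIFO server and its parameters are assumptions for this example and are not taken from any of the tools discussed here.

import heapq
import random

def simulate_single_server(arrival_rate: float, service_rate: float, num_jobs: int = 10_000) -> float:
    """Event-oriented DES of a single FIFO server; returns the mean response time."""
    random.seed(1)
    events, arrival_times = [], {}             # priority queue of (time, kind, job) events
    t = 0.0
    for job in range(num_jobs):                # pre-schedule all arrival events
        t += random.expovariate(arrival_rate)
        heapq.heappush(events, (t, "arrival", job))
        arrival_times[job] = t

    busy_until, total_response = 0.0, 0.0
    while events:                              # main event loop: pop events in time order
        time, kind, job = heapq.heappop(events)
        if kind == "arrival":                  # FIFO server: start when it becomes free
            start = max(time, busy_until)
            busy_until = start + random.expovariate(service_rate)
            heapq.heappush(events, (busy_until, "departure", job))
        else:                                  # departure event: record the response time
            total_response += time - arrival_times[job]
    return total_response / num_jobs

# Should be close to the analytical M/M/1 value 1/(mu - lambda) = 0.1 s.
print(simulate_single_server(arrival_rate=40.0, service_rate=50.0))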
Simulation Approaches (2)
• A popular network simulator is OPNET, which has been used to generate simulation models from extended UML diagrams (extended to express temporal requirements and resource usage, the target being hard real-time systems)
• Alternative system performance evaluation techniques are benchmarking, simulation, prototyping, and load testing:
  • Benchmarking: measuring how many small standardized activities a given system can execute per second; useful for evaluating hardware
  • Simulation implies building an abstract model of the infrastructure (hardware and middleware), the behavior of the business logic, as well as the expected load from internal sources and users
  • Prototyping consists of developing small modules of the system (rarely used)
  • Load testing simulates expected user behavior and can be used for acceptance testing or, during development, to test performance aspects of individual modules or prototypes
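The sketch below gives a minimal flavour of load testing in Python: a pool of simulated users issues requests concurrently and the measured response times are summarized. The send_request function is a hypothetical stand-in that only sleeps; in a real load test it would call the system under test.

import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def send_request() -> float:
    """Stand-in for one user request; replace the sleep with a real call to the system under test."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))   # simulated server processing time
    return time.perf_counter() - start

def run_load_test(num_users: int = 20, requests_per_user: int = 10) -> None:
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        futures = [pool.submit(send_request)
                   for _ in range(num_users * requests_per_user)]
        latencies = [f.result() for f in futures]
    print(f"requests: {len(latencies)}")
    print(f"mean response time:   {statistics.mean(latencies) * 1000:.1f} ms")
    print(f"95th percentile time: {sorted(latencies)[int(0.95 * len(latencies))] * 1000:.1f} ms")

run_load_test()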
Simulation Approaches (3)
• Automated performance prototyping, using UML diagrams as input, is possible by following the process depicted in the figure
Simulation Approaches (4)
• Another tool is UML-PSI (UML Performance Simulator)
• The tool's input consists of XMI files exported by ArgoUML (or its commercial version, Poseidon), a visual tool for defining UML diagrams. The diagrams are annotated with performance information according to the UML SPT profile
• UML-Ψ extracts relevant information from the XMI input file (Use Case, Deployment, and Activity diagrams) and generates a process-oriented performance simulation model of the system
• The simulation model is based on three main types of entities, corresponding to the actions of the activity diagrams, the resources of the software system, and the workloads
• The simulation model is executed using information from a configuration file. The configuration file can be an arbitrary fragment of Perl code, which usually defines simulation parameters, such as the simulation duration and the desired accuracy of the results, and also provides values for unbounded variables in the UML model
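To contrast with the event-oriented sketch shown earlier, the example below uses the third-party simpy library (an assumption of this example, not the machinery used by UML-PSI) to express the same ideas in the process-oriented style: workload, actions, and resources are modelled as interacting processes.

import random
import simpy

def user(env, cpu, results):
    """Process-oriented workload: one user repeatedly requests the CPU resource."""
    while True:
        yield env.timeout(random.expovariate(1 / 0.025))    # think time between requests
        arrival = env.now
        with cpu.request() as req:                          # acquire the processing resource
            yield req
            yield env.timeout(random.expovariate(1 / 0.02)) # action: service demand on the CPU
        results.append(env.now - arrival)                   # record the response time

random.seed(1)
env = simpy.Environment()
cpu = simpy.Resource(env, capacity=1)       # one processing resource (host)
response_times = []
for _ in range(5):                          # closed workload with five users
    env.process(user(env, cpu, response_times))
env.run(until=60.0)                         # simulate 60 time units (seconds)
print(f"completed requests: {len(response_times)}")
print(f"mean response time: {sum(response_times) / len(response_times) * 1000:.1f} ms")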
References
• [1] S. Balsamo, A. Di Marco, P. Inverardi, and M. Simeoni, “Model-based performance prediction in software development: a survey”, IEEE Transactions on Software Engineering, vol. 30, no. 5, pp. 295-310, 2004
• [2] C.U. Smith, Performance Engineering of Software Systems, Addison-Wesley, 1990
• [3] C.M. Woodside, D.C. Petriu, D.B. Petriu, H. Shen, T. Israr, and J. Merseguer, “Performance by Unified Model Analysis (PUMA)”, in Proc. of the 5th ACM Workshop on Software and Performance, Palma, Spain, July 2005
• [4] M. Marzolla and S. Balsamo, “UML-PSI: The UML Performance Simulator”, in Proc. of the 1st Int. Conf. on Quantitative Evaluation of Systems (QEST 2004), pp. 340-341, Enschede, The Netherlands, September 2004
• [5] The Object Management Group, http://www.omg.org