Network Performance Analysis Strategies Dr Shamala Subramaniam Dept. Communication Technology & Networks Faculty of Computer Science & IT, UPM e-mail : shamala@fsktm.upm.edu.my
Overview of Performance Evaluation • Intro & Objective • The Art of Performance Evaluation • Professional Organizations, Journals, and conferences. • Performance Projects • Common Mistakes and How to Avoid Them • Selection of Techniques and Metrics
Intro & Objective • Performance is a key criterion in the design, procurement, and use of computer systems. • Performance must always be weighed against cost. • Thus, computer systems professionals need a basic knowledge of performance evaluation techniques.
Intro & Objective • Objective: • Select appropriate evaluation techniques, performance metrics, and workloads for a system. • Conduct performance measurements correctly. • Use proper statistical techniques to compare several alternatives. • Design measurement and simulation experiments that provide the most information with the least effort. • Perform simulations correctly.
Modeling • Model – used to describe almost any attempt to specify a system under study. The everyday connotation is a physical replica of a system. • Scientific – a model is a name given to a portrayal of the interrelationships of the parts of a system in precise terms. The portrayal can be interpreted in terms of some system attributes and is sufficiently detailed to permit study under a variety of circumstances and to enable the system's future behavior to be predicted.
Usage of Models • Performance evaluation of a transaction processing system (Salsburg, 1988) • A study of the generation and control of forest fires in California (Parks, 1964) • The determination of the optimum labor along a continuous assembly line in a factory (Killbridge and Webster, 1966) • An analysis of ship boilers (Tysso, 1979)
A Taxonomy of Models • Predictability • Deterministic – all data and relationships are given with certainty, e.g., the efficiency of an engine based on temperature, load, and fuel consumption. • Stochastic – at least some of the variables involved have values that vary in an unpredictable or random fashion, e.g., financial planning. (A small sketch contrasting the two follows after this taxonomy.) • Solvability • Analytical – the model is simple enough to be solved mathematically. • Simulation – the model is too complicated, or an appropriate equation cannot be found.
A Taxonomy of Models • Variability • Whether time is incorporated into the model. • Static – a specific point in time (e.g., a financial snapshot). • Dynamic – any time value (e.g., a food cycle). • Granularity • How finely events are treated in time. • Discrete-event – distinct events can be identified (e.g., packet arrivals). • Continuous – it is impossible to distinguish specific events taking place (e.g., the trajectory of a missile).
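As a toy illustration of the predictability dimension (both models and all numbers below are invented for illustration), a deterministic model returns the same output for the same inputs, while a stochastic model varies from run to run:

```python
import random

def deterministic_efficiency(temperature, load, fuel_rate):
    # Invented formula: identical inputs always give identical output.
    return 0.9 - 0.001 * temperature - 0.05 * load / fuel_rate

def stochastic_return(expected_rate, volatility):
    # Invented model: a random term makes every run different.
    return expected_rate + random.gauss(0.0, volatility)

print(deterministic_efficiency(80, 10, 5))  # same value on every run
print(deterministic_efficiency(80, 10, 5))
print(stochastic_return(0.05, 0.02))        # varies from run to run
print(stochastic_return(0.05, 0.02))
```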
The Art of Performance Modeling • There are three ways to compare the performance of two systems, as Tables 1.1–1.3 show. • Table 1.1 – Raw measurements

    System    Workload 1    Workload 2    Average
    A             20            10           15
    B             10            20           15
The Art of Performance Modeling (cont.) • Table 1.2 – System B as the Base System

    System    Workload 1    Workload 2    Average
    A              2            0.5          1.25
    B              1            1            1
The Art of Performance Modeling (cont.) • Table 1.3 – System A as the Base System

    System    Workload 1    Workload 2    Average
    A              1            1            1
    B              2            0.5          1.25
The Art of Performance Modeling (cont.) • The Ratio Game: by choosing which system is used as the base for normalization, either system can be made to appear better, even though the raw data are identical. A small sketch follows.
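As an illustration, the short Python sketch below reproduces the normalization from Tables 1.1–1.3 and shows that whichever system is not chosen as the base comes out with a different average ratio than the base:

```python
# The same raw data (Table 1.1) can support opposite conclusions depending on
# which system is chosen as the base for normalization.
raw = {"A": [20, 10], "B": [10, 20]}  # measurements for Workload 1 and Workload 2

def normalized_average(system, base):
    """Average of the per-workload ratios system/base."""
    ratios = [s / b for s, b in zip(raw[system], raw[base])]
    return sum(ratios) / len(ratios)

for base in ("A", "B"):
    print(f"Base system {base}:")
    for system in ("A", "B"):
        print(f"  {system}: average normalized value = {normalized_average(system, base):.2f}")
# Whichever system is NOT the base ends up with an average of 1.25 while the
# base stays at 1.00, so the apparent winner flips with the choice of base.
```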
Performance Projects I hear and I forget. I see and I remember. I do and I understand. – Chinese Proverb
Performance Projects • The best way to learn a subject is to apply the concepts to a real system. • The project should encompass: • Select a computer subsystem: network congestion control, security, a database, an operating system. • Perform some measurements. • Analyze the collected data. • Simulate AND analytically model the subsystem. • Predict its performance. • Validate the model.
Professional Organizations, Journals and Conferences • ACM SIGMETRICS – the Association for Computing Machinery's special interest group on computer systems performance evaluation. • IEEE Computer Society – the Institute of Electrical and Electronics Engineers (IEEE) Computer Society. • IASTED – the International Association of Science and Technology for Development.
Common Mistakes and How to Avoid Them • No Goals • Biased Goals • Unsystematic Approach • Analysis Without Understanding the Problem • Incorrect Performance Metrics • Unrepresentative Workloads • Wrong Evaluation Techniques • Overlooking Important Parameters • Ignoring Significant Factors
Common Mistakes and How to Avoid Them • Inappropriate Experimental Design • Inappropriate Level of Detail • No Analysis • Erroneous Analysis • No Sensitivity Analysis • Ignoring Errors in Input • Improper Treatment of Outliers • Assuming No Change in the Future • Ignoring Variability
Common Mistakes and How to Avoid Them • Too Complex Analysis • Improper Presentation of Results • Ignoring Social Aspects • Omitting Assumptions and Limitations.
A Systematic Approach • State Goals and Define the System • List Services and Outcomes • Select Metrics • List Parameters • Select Factors to Study • Select Evaluation Technique • Select Workload • Design Experiments • Analyze and Interpret Data • Present Results
Overview • Key steps in a performance evaluation study • Selecting an evaluation technique • Selecting a metric • Performance metrics • The problem of specifying performance requirements
Selecting an evaluation technique • Three techniques • Analytical modeling • Simulation • Measurement
Criteria for selection: Life-cycle stage • Measurements are possible only if something similar to the proposed system already exists. • For a new concept, analytical modeling and simulation are the only techniques from which to choose. • Analytical modeling and simulation are more convincing if they are based on previous measurements.
Criteria for selection: Time required • In most situations, results are required yesterday; if so, analytical modeling is probably the only choice. • Simulations take a long time. • Measurements take longer than analytical modeling. • If anything can go wrong in a measurement, it will. • So the time required for measurement is the most variable of the three.
Criteria for selection: Availability of tools • Tools include modeling skills, simulation languages, and measurement instruments. • Many performance analysts are skilled in modeling and would not touch a real system at any cost. • Others are not as proficient in queuing theory and prefer to measure or simulate. • Lack of knowledge of simulation languages and techniques keeps many analysts away from simulation.
Criteria for selection: Level of accuracy • Analytical modeling requires many simplifications and assumptions, which limits its accuracy. • Simulations can incorporate more detail and require fewer assumptions than analytical modeling, and are therefore often closer to reality.
Criteria for selection: Level of accuracy (cont.) • Measurements may not give accurate results simply because many environmental parameters, such as the system configuration, the type of workload, and the time of measurement, may be unique to the experiment. • So, with measurement techniques, the accuracy of results can vary from very high to none. • Note that level of accuracy and correctness of conclusions are not identical.
Criteria for selection: Trade-off evaluation • A common goal of a performance study is to compare different alternatives or to find the optimal parameter value. • Analytical models generally provide the best insight into the effects of various parameters and their interactions.
Criteria for selection: Trade-off evaluation • With simulations it is possible to search the space of parameter values for the optimal combination. • Measurement is the least desirable technique in this respect.
Criteria for selection: Cost • Measurement requires real equipment, instruments, and time. It is the most costly of the three techniques. • Cost is often the reason for simulating, rather than measuring, complex systems. • Analytical modeling requires only paper and pencils, making it the cheapest technique. • The choice of technique can also be decided based on the cost allocated to the project.
Criteria for selection: Saleability • Convincing others is important. • It is easy to convince others with real measurements. • Most people are skeptical of analytical results because they do not understand the techniques.
Criteria for selection: Saleability (cont.) • So validation with another technique is important. • Do not trust the results of a simulation model until they have been validated by analytical modeling or measurements. • Do not trust the results of an analytical model until they have been validated by a simulation model or measurements. • Do not trust the results of a measurement until they have been validated by simulation or analytical modeling.
Selecting performance metrics • For each performance study, a set of performance criteria or metrics must be chosen. • This set can be prepared by listing the services offered by the system. • The outcomes can be classified into three categories: • The system may perform the service correctly. • It may perform it incorrectly. • It may refuse to perform the service.
Selecting performance metrics (cont.) • Example: A gateway in a computer network offers the service of forwarding packets to specified destinations on heterogeneous networks. When presented with a packet: • It may forward the packet correctly. • It may forward it to the wrong destination. • It may be down. • Similarly, a database may answer a query correctly, answer it incorrectly, or be down.
Selecting metrics: correct response • If the system performs the service correctly, its performance is measured by: • The time taken to perform the service. • The rate at which the service is performed. • The resources consumed while performing the service. • These three metrics, related to time, rate, and resource, measure successful performance and are also called the responsiveness, productivity, and utilization metrics.
Selecting metrics: correct response • For example, the responsiveness of a network gateway is measured by its response time: the interval between the arrival of a packet and its successful delivery. • The gateway's productivity is measured by its throughput: the number of packets forwarded per unit time. • The utilization indicates the percentage of time the gateway's resources are busy at a given load level. A rough sketch of these three computations follows.
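As a rough sketch of how these three metrics might be computed, the snippet below uses a small hypothetical packet trace (the record format, timestamps, and observation period are invented for illustration, not taken from the slides):

```python
# Hypothetical per-packet records: (arrival_time, delivery_time, service_time), in seconds.
trace = [
    (0.00, 0.04, 0.01),
    (0.05, 0.08, 0.01),
    (0.09, 0.15, 0.02),
]

observation_period = 0.20  # seconds the gateway was observed (illustrative)

# Responsiveness: mean response time = delivery time - arrival time.
response_times = [deliver - arrive for arrive, deliver, _ in trace]
mean_response_time = sum(response_times) / len(response_times)

# Productivity: throughput = packets forwarded per unit time.
throughput = len(trace) / observation_period

# Utilization: fraction of the observation period the gateway was busy.
utilization = sum(service for _, _, service in trace) / observation_period

print(f"mean response time = {mean_response_time * 1000:.1f} ms")
print(f"throughput = {throughput:.1f} packets/s")
print(f"utilization = {utilization:.0%}")
```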
Selecting metrics: incorrect response • If the system performs the service incorrectly, its performance is measured by: • Classifying the errors (e.g., packet loss). • Determining the probability of each class of errors. • For example, in the case of the gateway, we may want to find the probability of single-bit errors, two-bit errors, and so on. • We may also want to determine the probability of a packet being partially delivered. A small sketch of estimating such probabilities follows.
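A minimal sketch of estimating the error-class probabilities from a hypothetical outcome log (the outcome labels and counts are invented for illustration):

```python
from collections import Counter

# Hypothetical outcome recorded for each packet handled by the gateway.
outcomes = ["ok", "ok", "single_bit_error", "ok", "partial_delivery",
            "ok", "two_bit_error", "ok", "ok", "single_bit_error"]

counts = Counter(outcomes)
total = len(outcomes)
for error_class in ("single_bit_error", "two_bit_error", "partial_delivery"):
    probability = counts.get(error_class, 0) / total
    print(f"P({error_class}) = {probability:.2f}")
```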
[Figure: The possible outcomes of a service request. For a request for service i, the system may: do it correctly – measured by time (response time), rate (throughput), and resource (utilization); do it incorrectly – measured by the probability of error j and the time between errors; or be unable to do it – measured by the duration of event k and the time between events.]
Metrics • Most systems offer more than one service, and the number of metrics grows proportionately. • For many metrics the mean value is important; variability is also important. • For computer systems shared by many users, two types of metrics need to be considered: individual and global. • Individual metrics reflect the utility of each user. • Global metrics reflect the system-wide utility. • Resource utilization, reliability, and availability are global metrics.
Metrics • Normally, the decision that optimizes an individual metric is different from the one that optimizes the system-wide metric. • For example, in computer networks performance is measured by throughput (packets per second). If the total number of packets allowed in the system is constant, increasing the number of packets from one source may increase that source's throughput, but it may also decrease someone else's throughput. • So both the system-wide throughput and its distribution among individual users must be studied, as the sketch below illustrates.
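As a small sketch of this point, the snippet below uses hypothetical per-source throughputs and summarizes their distribution with Jain's fairness index, one common summary measure that the slides do not name explicitly:

```python
# Hypothetical per-source throughputs (packets per second) before and after
# one source increases its sending rate. Total is unchanged, but the
# distribution among users is not.
before = {"src1": 100, "src2": 100, "src3": 100}
after  = {"src1": 170, "src2": 65,  "src3": 65}

def summarize(throughputs):
    values = list(throughputs.values())
    total = sum(values)
    # Jain's fairness index: 1.0 when all users receive equal throughput.
    fairness = total ** 2 / (len(values) * sum(v * v for v in values))
    return total, fairness

for label, snapshot in (("before", before), ("after", after)):
    total, fairness = summarize(snapshot)
    print(f"{label}: system throughput = {total} pkt/s, fairness = {fairness:.2f}")
# The system-wide throughput is identical, yet the second case is worse for
# two of the three users -- which only the per-user view reveals.
```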
Selection of Metrics • Completeness: The set of metrics included in the study should be complete. • All possible outcomes should be reflected in the set of performance metrics. • For example, in a study comparing different protocols on a computer network, one protocol was chosen as the best until it was found that the "best" protocol led to the highest number of disconnections. • The probability of disconnection was then added to the set of performance metrics.
Commonly used performance metrics: Response time • Response time is defined as the interval between a user's request and the system's response. [Figure: timeline showing the user's request, the system's response, and the response time between them, assuming an instantaneous request and response.] • This definition is simplistic, since neither requests nor responses are instantaneous.
Throughput • Throughput is defined as the rate (requests per unit of time) at which the requests can be serviced by the system. • For networks, throughput is measured in packets per second or bits per second.
Throughput… • The throughput of the system initially increases as the load on the system increases. • After a certain load, the throughput stops increasing; in most cases it even starts decreasing (see the sketch below). [Figure: throughput and response time vs. load, showing the knee, the knee capacity, and the usable capacity.]
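A minimal sketch of these load curves, assuming (as an illustration, not something stated in the slides) that the stable region behaves like an M/M/1 queue with service rate mu; it prints throughput and mean response time as the offered load approaches capacity:

```python
# Illustrative load curves for a single server with service rate mu,
# modeled (as an assumption) as an M/M/1 queue while it is stable.
mu = 100.0  # service capacity in requests per second (illustrative)

for offered_load in (10, 30, 50, 70, 90, 99):
    throughput = offered_load                   # below capacity, all offered load is served
    response_time = 1.0 / (mu - offered_load)   # M/M/1 mean response time: 1 / (mu - lambda)
    print(f"load={offered_load:3d}/s  throughput={throughput:3d}/s  "
          f"response time={response_time * 1000:6.1f} ms")

# Throughput rises linearly with load while response time grows slowly, then
# explodes near capacity. The 'knee' marks the highest load at which response
# time is still acceptable; beyond it, real systems often see throughput drop.
```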