
Welcome to H2KInfosys


Presentation Transcript


  1. Welcome to H2KInfosys
  2. H2K Infosys is an E-Verify business based in Atlanta, Georgia, United States. www.H2KINFOSYS.com | USA: +1-(770)-777-1269 | UK: (020) 3371 7615 | Training@H2KInfosys.com / H2KInfosys@Gmail.com
  3. Why H2KInfosys? 100% job-oriented, instructor-led, face-to-face, true live online software training + cloud test lab with software tools and live project work + mock interviews + resume preparation and review + job placement assistance = better than on-site IT training. Trusted by many students worldwide.
  4. Agenda: Introduction to performance testing (what, why); types of performance testing; performance testing approach; performance as a quality aspect; performance testing terminology; performance counters (software, hardware, client side, server side)
  5. Agenda (continued): Scenarios (what and how); workload (what and why, types of workload); performance testing process; performance requirements; performance test planning; performance lab (what it is, various components); performance test scripting; performance test execution; metrics collection; result analysis; report creation; Q & A
  6. Performance Testing – What? “Performance Testing is the discipline concerned with determining and reporting the current performance of a software application under various parameters.”
  7. Performance Testing – Why? Primarily used for: verifying whether the system meets the performance requirements defined in SLAs; determining the capacity of existing systems; creating benchmarks for future systems; evaluating degradation under various loads and/or configurations.
  8. Performance Testing Types – Load Test. Objective: to gain insight into the performance of the system under normal conditions. Methodology: user behavior is modeled as in the real world; the test script mimics the activities that users commonly perform, and includes think-time delays and arrival rates reflective of those in the real world.
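As an illustration of the methodology above, here is a minimal load-test sketch in Python (not a LoadRunner script): a handful of simulated users repeatedly hit a hypothetical URL and pause for a random think time between requests. The target URL, user count and duration are assumed values.

```python
# Minimal load-test sketch: simulated users issue requests with realistic
# think-time delays. TARGET_URL, USERS and DURATION are assumed values.
import random
import threading
import time
import urllib.request

TARGET_URL = "http://example.com/"   # hypothetical application under test
USERS = 10                           # concurrent virtual users
DURATION = 60                        # test length in seconds

response_times = []                  # shared list of measured response times
lock = threading.Lock()

def virtual_user(stop_at):
    while time.time() < stop_at:
        start = time.time()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=30).read()
            elapsed = time.time() - start
            with lock:
                response_times.append(elapsed)
        except OSError:
            pass                              # a real tool would count this as a failed request
        time.sleep(random.uniform(1, 10))     # think time, as a real user would pause

stop_at = time.time() + DURATION
threads = [threading.Thread(target=virtual_user, args=(stop_at,)) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(response_times)} requests, "
      f"avg response {sum(response_times) / max(len(response_times), 1):.3f}s")
```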
  9. Performance Testing Types – Stress Test. Objective: the application is stressed with an unrealistic load to understand its behavior in the worst-case scenario. Methodology: in stress tests, scripted actions are executed as quickly as possible; stress testing is load testing with the user think-time delays removed.
  10. Performance Testing Approach – Scalability. Scalability is the capacity of an application to deal with an increasing number of users or requests without degradation of performance. Scalability is a measure of application performance.
  11. Performance Testing Approach – Stability. The stability (or reliability) of an application indicates its robustness and dependability. Stability is a measure of the application's capacity to keep performing when it is heavily used and accessed concurrently by many users. The application's stability partly indicates its performance.
  12. Quality Aspect – Performance. Three performance quality attributes: performance, effectiveness, scalability. Five quality aspects (FURPS): Functionality, Usability, Reliability, Performance, Supportability.
  13. Quality Aspect Performance – Performance. Online transactions: response times (seconds). Batch transactions: runtime (minutes or hours), throughput (items/second).
  14. Quality Aspect Performance – Effectiveness. CPU utilization (real user and system time); memory usage (MB); network load (packet size and packet count, bandwidth and latency); disk usage (I/O).
  15. Quality Aspect Performance – Scalability. Online transactions: large number of users. Batch transactions: large transaction volumes, large data volumes. Scale in/out.
  16. Performance testing terminology. Scenario: a sequence of steps in the application under test, e.g. searching a product catalog. Workload: the mix of demands placed on the system (AUT), e.g. in terms of concurrent users, data volumes, number of transactions. Operational Profile (OP): a list of demands with their frequency of use. Benchmark: a standard, industry-wide workload (TPC-C, TPC-W). TPC-C: an on-line transaction processing benchmark. TPC-W: a transactional web e-commerce benchmark.
  17. Performance testing terminology. [Timing diagram showing when the user starts and finishes a request, when the system starts execution, starts its response and completes its response, and the resulting reaction time, response time and think time intervals.]
  18. Performance testing terminology. Throughput: the rate at which requests can be serviced by the system. Batch streams: jobs/sec. Interactive systems: requests/sec. CPU: million instructions per second (MIPS), million floating-point operations per second. Network: packets per second or bits per second. Transaction processing: transactions per second.
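A throughput figure is simply the count of serviced requests divided by the measurement window. A small worked sketch with assumed sample numbers:

```python
# Throughput sketch: rate at which the system services requests.
# The figures are assumed sample values, not measurements.
completed = {"batch jobs": 120, "interactive requests": 4500, "transactions": 3600}
elapsed_seconds = 300                          # 5-minute measurement window

for kind, count in completed.items():
    print(f"{kind}: {count / elapsed_seconds:.1f} per second")
```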
  19. Performance testing terminology. Bandwidth: a measure of the amount of data that can travel through a network, usually measured in kilobits per second (Kbps); for example, a modem line often has a bandwidth of 56.6 Kbps, and an Ethernet line has a bandwidth of 10 Mbps (10 million bits per second). Latency: in a network, latency (a synonym for delay) is an expression of how much time it takes for a packet of data to get from one designated point to another; in some usages (for example, AT&T), latency is measured by sending a packet that is returned to the sender, and the round-trip time is considered the latency. Latency = propagation delay (at the speed of light) + transmission delay (proportional to packet size) + router processing delay (examining the packet) + other computer and storage delays (switch or bridge).
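The latency decomposition above can be turned into a quick back-of-the-envelope calculation. The sketch below uses assumed example values for distance, packet size and per-hop delays; only the 10 Mbps link speed comes from the slide.

```python
# End-to-end latency sketch: sum of the delay components named above.
# All values are assumed examples; propagation uses ~2/3 the speed of light in fibre.
distance_km = 1_000                       # assumed link distance
packet_bits = 1_500 * 8                   # one 1500-byte Ethernet frame
bandwidth_bps = 10_000_000                # 10 Mbps link, as in the slide

propagation = distance_km * 1_000 / 2e8           # metres / (m/s) -> seconds
transmission = packet_bits / bandwidth_bps        # serialization delay
router_processing = 0.0005                        # assumed per-hop processing delay
other_delays = 0.001                              # assumed switch/storage delays

latency_ms = (propagation + transmission + router_processing + other_delays) * 1_000
print(f"One-way latency ~ {latency_ms:.2f} ms")   # ~7.7 ms with these numbers
```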
  20. Performance testing terminology. Reliability: mean time between failures. Availability: mean time to failure. Cost/performance: cost (hardware/software licensing, installation and maintenance) versus performance (usable capacity).
  21. What to watch. S/W performance: OS, application code, configuration of servers. H/W performance: CPU, memory, disk, network.
  22. Performance counters. Why performance counters? They allow you to track the performance of your application. What performance counters? Client-side: response time, hits/sec, throughput, pass/fail statistics. Server-side: CPU (% user time, % processor time, run queue length); memory (available and committed bytes); network (bytes sent/sec, bytes received/sec); disk (read bytes/sec, write bytes/sec).
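A minimal sketch of sampling the server-side counters listed above, assuming the third-party psutil package is installed; in practice these counters come from Perfmon, vmstat/sar, or the load tool's own monitors.

```python
# Server-side counter sampling sketch using psutil (assumed to be installed).
# Samples CPU, memory, disk and network counters roughly once per second.
import psutil

def sample(interval=1.0, samples=5):
    last_disk = psutil.disk_io_counters()
    last_net = psutil.net_io_counters()
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval)   # % processor time over the interval
        mem = psutil.virtual_memory().available       # available bytes
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        print(f"CPU {cpu:5.1f}%  "
              f"mem avail {mem / 2**20:8.0f} MB  "
              f"disk R/W {(disk.read_bytes - last_disk.read_bytes) / interval:10.0f}/"
              f"{(disk.write_bytes - last_disk.write_bytes) / interval:10.0f} B/s  "
              f"net sent/recv {(net.bytes_sent - last_net.bytes_sent) / interval:8.0f}/"
              f"{(net.bytes_recv - last_net.bytes_recv) / interval:8.0f} B/s")
        last_disk, last_net = disk, net

sample()
```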
  23. Client side metrics. Hits per second: the Hits per Second graph shows the number of hits on the Web server (y-axis) as a function of the elapsed time in the scenario (x-axis); it can be compared to the Transaction Response Time graph to see how the number of hits affects transaction performance. Pass/fail statistics: measure the application's ability to function correctly under load, expressed as transaction pass/fail/error rates.
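A small sketch of how these client-side numbers are derived from per-request records; the records are assumed sample data, and in practice the load tool computes them for you.

```python
# Client-side metrics sketch: hits per second and pass/fail statistics
# computed from per-request records. The records below are assumed sample data.
requests = [
    # (timestamp offset in seconds, passed?)
    (0.5, True), (1.2, True), (1.4, False), (2.1, True),
    (2.8, True), (3.3, True), (3.9, False), (4.6, True),
]

duration = max(t for t, _ in requests)
hits_per_second = len(requests) / duration
passed = sum(1 for _, ok in requests if ok)
failed = len(requests) - passed

print(f"Hits/sec: {hits_per_second:.2f}")
print(f"Pass: {passed}  Fail: {failed}  Fail rate: {failed / len(requests):.1%}")
```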
  24. Workload. The workload is the stimulus to the system; it is an instrument for simulating the real-world environment. The workload provides in-depth knowledge of the behavior of the system under test and describes how typical users will use the system once it goes into production. It can include all requests and/or data inputs. Requests may include things such as: retrieving data from a database, transforming data, performing calculations, sending documents over HTTP, authenticating a user, and so on.
  25. Workload. Workload may be no load, minimal, normal, above normal or extreme. Extreme loads are used in stress testing, to find the breaking point and bottlenecks of the tested system. Normal loads are used in performance testing, to ensure an acceptable level of performance characteristics, such as response time or request-processing time, under the estimated load. Minimal loads are usually used in benchmark testing, to estimate the user experience.
  26. Workload. A workload is identified for each scenario. It can be identified based on the following parameters. Number of users: the total number of concurrent and simultaneous users who access the application in a given time frame. Rate of requests: the requests received from the concurrent load of users per unit time. Patterns of requests: a given load of concurrent users may be performing different tasks in the application; patterns of requests identify the average load of users and the rate of requests for a given piece of application functionality.
  27. Workload models. Steady state: the steady-state workload is the simplest workload model used in load testing; a constant number of virtual users is run against the application for the duration of the test. Increasing: the increasing workload model helps testers find the limit of a Web application's work capacity; at the beginning of the load test only a small number of virtual users is run, and virtual users are then added to the workload step by step.
  28. Workload models. Dynamic: with the dynamic workload model you can change the number of virtual users while the test is being run, and no simulation time is fixed.
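The three workload models can be expressed as simple virtual-user schedules. The sketch below only prints the planned user count over time; every number is an assumed example.

```python
# Workload-model sketch: planned virtual-user counts over time under the
# three models described above. All numbers are assumed examples.
SCHEDULE = {0: 10, 10: 80, 20: 30}   # dynamic model: minute -> users, adjusted by the tester

def steady_state(minute, users=50):
    return users                                  # constant load for the whole run

def increasing(minute, start=10, step=10, every=5):
    return start + step * (minute // every)       # add users step by step

def dynamic(minute):
    return SCHEDULE[max(m for m in SCHEDULE if m <= minute)]

for minute in range(0, 30, 5):
    print(f"t={minute:2d} min  steady={steady_state(minute):3d}  "
          f"increasing={increasing(minute):3d}  dynamic={dynamic(minute):3d}")
```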
  29. Workload Profile. A workload profile consists of an aggregate mix of users performing various operations. A workload profile can be designed by performing the following activities. Identify the distribution (ratio of work): for each key scenario, identify the distribution/ratio of work, based on the number of users expected to execute each scenario. Identify the peak user loads: identify the maximum expected number of concurrent users of the Web application and, using the work distribution for each scenario, calculate the percentage of user load per key scenario. Identify the user loads under a variety of conditions of interest: for instance, you might want to identify the maximum expected number of concurrent users for the Web application at normal and peak hours.
  30. Workload Profile. For a sample web application, the distribution of load for various profiles could be similar to that shown in the table below. Number of users: 200 simultaneous users. Test duration: 2 hours. Think time: random think time between 1 and 10 seconds in the test script after each operation. Background processes: anti-virus software running on the test environment.
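Given a work distribution and the peak concurrent-user figure, the per-scenario user load follows from a straightforward calculation. The scenario names and percentages below are assumed examples built around the 200-user profile above.

```python
# Workload-profile sketch: split the peak concurrent load across scenarios
# according to the work distribution. Scenario names and percentages are assumed.
peak_users = 200                                   # simultaneous users, as in the slide

distribution = {                                   # ratio of work per key scenario
    "browse catalogue": 0.50,
    "search product":   0.30,
    "place order":      0.15,
    "admin reports":    0.05,
}

for scenario, share in distribution.items():
    print(f"{scenario:18s} {share:4.0%}  ->  {round(peak_users * share):3d} users")
```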
  31. Scenarios – what and how. Decide on the business flows that need to be run; decide on the mix of the business flows in a test run; decide on the order in which the test scripts need to be started; decide on the ramp-up for each business flow and the test run duration. The load generators and test scripts (user groups) for a scenario should be configured so that the scenario accurately emulates the working environment. The runtime settings and the test system/application configuration can be changed to create different scenarios for the same workload profile.
  32. Performance engagement process. [Flow between the Project Team and the Performance Lab] Query → requirements questionnaire → response → test plan & engagement contract → signed contract, approved test plan & application demo → business flow document & project plan → written approval → test execution reporting → customer feedback & project closure.
  33. Performance test process: Initiate, Plan, Design, Execute, Report, run across iterations, with milestones such as the application demo, business flows, test plan, access to staging, lab design, script design, data collection, and analysis & report. Initiate activities: fill in the performance requirements questionnaire, finalize estimates and service plans, prepare the engagement contract, reviews by the project team. Plan activities: establish test goals, prepare the test plan, reviews by the project team. Design activities: application walkthrough, freeze the workload, set up master data, create performance scripts, reviews by the project team. Execute activities: execute performance tests, collect performance metrics, reviews by the project team. Report activities: analyze test results, prepare the performance test report, reviews by the project team. Deliverables across the phases: signed engagement contract, first information report, performance test plan, performance test scripts, performance test report.
  34. Performance requirements. Performance test objective: forms the basis for deciding what type of performance test needs to be done; the test plan and the test report should reflect this objective. Performance requirements: the expected performance levels of the application under test; they can be divided into two categories, performance absolute requirements and performance goals. Performance absolute requirements: criteria from contractual obligations, service-level agreements (SLAs) or fixed business needs. Performance goals: criteria desired in the application, in which some variance can be tolerated under certain circumstances; mostly end-user focused.
  35. Performance requirements. Determine: the purpose of the system; the high-level activities; how often each activity is performed; which activities are frequently used and resource-intensive; the hardware and software architecture (the production architecture). Requirements should be specific: they should detail the required response times and throughput (rate of work done) under a specific user load (normal/peak), with specific data requirements, and can mention desired resource-utilization thresholds. Analyze business goals and objectives.
  36. Performance requirements. Where do we need to set the goals? Which 5% of system behavior is worth defining with respect to performance? Which users have priority, and what work has priority? Which processes carry business risk (loss of revenue)? How aggressive do the service levels need to be (differentiators: competitors, productivity gains)? What are the demanding conditions the SUT will be put under? What are the resource constraints (already-procured hardware, network, disk, etc., or sharing the environment with others)? What are the desired norms of resource utilization, and the desired norms for reserve or spare capacity? What are the trade-offs (throughput vs response time, availability, etc.)? Where to get this information: stakeholders, product managers, technical/application architects, application users, documentation (user guide, technical design, etc.).
  37. Performance requirements. Performance requirements template: objectives; performance requirements; system deployment architecture; system configuration; workload profile; client network characteristics.
  38. Performance TEST PLAN – contents. Requirements/performance goals. Project scope. Entry criteria, e.g.: all defects found during the system testing phase have been fixed and re-tested; a code freeze is in place for the application and the configuration is frozen; the test environments are ready; master data has been created and verified; no activities outside of performance testing and performance test monitoring will occur during performance testing activities. Exit criteria, e.g.: all planned performance tests have been performed; the performance test passes all stated objectives at the maximum number of users. Application overview: a brief description of the business purpose of the Web application; this may include marketing data stating estimated or historical revenue produced by the Web application.
  39. Performance TEST PLAN. Architecture overview: depicts the hardware and software used for the performance test environment, including any deviations from the production environment; for example, document it if you have a cluster of four Web servers in production but only two Web servers in the performance test environment. Performance test process: a description of the user scenarios, the tools that will be used, user ratios and sleep times, and the ramp-up/ramp-down pattern. Test environment: test bed, network infrastructure, hardware resources, test data.
  40. Performance TEST PLAN – test environment. Test bed: describe the test environment for the load test and test script creation; describe whether the environment is shared and its hours of availability for the load test; describe whether the environment is production-like or is actually the production environment. Network infrastructure: describe the network segments, routers and switches that will participate in the load test; this can be shown with a network topology diagram. Hardware resources: describe the machine specifications available for the load test, such as each machine's name, memory, processor, operating system (e.g. Windows 2000, Windows XP, Linux), whether the application under test (AUT) is installed on it, and its location (floor, facility, room, etc.). Test data: database size and other test data, and who will provide the data and configure the test environment with the appropriate test data.
  41. Performance TEST PLAN. Staffing and support personnel: list the individuals participating during the load tests, with their roles and level of support. Deliverables: a description of deliverables such as test scripts, monitoring scripts, test results and reports, with an explanation of what graphs and charts will be produced; also explain who will analyze and interpret the load test results and review them, together with the graphs, with all participants monitoring the execution of the load tests. Project schedule: timelines for test scenario design and test scripting. Communication plan. Project control and status reporting process.
  42. Performance TEST PLAN. Risk mitigation plan; examples of risks: the software to be tested is not correctly installed and configured on the test environment; master data has not been created and/or verified; the available licenses are not sufficient to execute the performance tests needed to validate the application; the scripts and/or scenarios grow beyond what the performance test plan covers. Assumptions, e.g.: the client will appoint a point of contact to facilitate all communication with the project team and provide technical assistance as required by personnel within four (4) business hours of each incident, and any delays caused would directly affect the delivery schedule; the client will provide uninterrupted access to their application for the entire duration of the assignment; the client application under test uses the HTTP protocol; the client application will not contain any Java Swing, AJAX, streaming media, VBScript, ActiveX or other custom plug-ins, and the presence of any such components will require revisiting the effort and cost estimate.
  43. Performance TEST LAB. Virtual user: emulates end-user actions by sending requests and receiving responses. Load generator: emulates the end-user business process. Controller: organizes, manages and monitors the load test. Probes: capture a single behavior while the load test is in progress.
  44. Performance TEST scripting. Correlation: correlation is done to resolve dynamic server values such as session IDs and cookies; LoadRunner supports automatic correlation. Parameterization: parameterization is done to provide each vuser with unique or specific values for application parameters; parameters are supplied dynamically by LR to each vuser, or they can be taken from data files; different types of parameters are provided by LR for script enhancement. Transactions: transactions are defined to measure the performance of the server; each transaction measures the time it takes the server to respond to specified vuser requests; the Controller measures the time taken to perform each transaction during the execution of a performance test. Rendezvous points. Verification checks.
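The three scripting concepts above can be illustrated with a plain-Python sketch (this is not LoadRunner syntax): a dynamic session ID is captured from the first response and reused (correlation), credentials are read from a data file so each vuser can send different values (parameterization), and the request is wrapped in a timed block (a transaction). The URL, form-field names and users.csv file are all assumptions.

```python
# Conceptual sketch of correlation, parameterization and transactions in
# plain Python (illustration only, not LoadRunner syntax). The URL, the
# session-ID pattern and users.csv are assumed.
import csv
import re
import time
import urllib.parse
import urllib.request

BASE_URL = "http://example.com"                  # hypothetical application under test

with open("users.csv", newline="") as f:         # parameterization: one row per vuser
    username, password = next(csv.reader(f))

# Correlation: capture the dynamic session ID returned by the login page.
login_page = urllib.request.urlopen(f"{BASE_URL}/login").read().decode()
session_id = re.search(r'name="sessionId" value="([^"]+)"', login_page).group(1)

# Transaction: time how long the server takes to respond to the login request.
start = time.time()
data = urllib.parse.urlencode(
    {"sessionId": session_id, "user": username, "password": password}
).encode()
urllib.request.urlopen(f"{BASE_URL}/doLogin", data=data).read()
print(f"Transaction 'login' took {time.time() - start:.3f}s")
```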
  45. Performance TEST scripting. Rendezvous points: rendezvous points are used to synchronize vusers so they perform a task at exactly the same moment, to emulate heavy user load; when a vuser arrives at a rendezvous point, it is held by the Controller until all vusers participating in the rendezvous reach that point; you may only add rendezvous points in the Action section, not in the init or end sections. Verification checks: add verification checks to the script, such as text verification checks and image verification checks.
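The rendezvous idea maps naturally onto a thread barrier: every virtual user blocks until all of them have reached the same point, then they all fire together. A minimal sketch with an assumed user count:

```python
# Rendezvous-point sketch: all virtual users wait at a barrier and then
# issue their action at the same moment. VUSERS is an assumed value.
import threading
import time

VUSERS = 5
rendezvous = threading.Barrier(VUSERS)

def vuser(n):
    time.sleep(n * 0.3)                      # users arrive at different times
    print(f"vuser {n} waiting at rendezvous")
    rendezvous.wait()                        # held until all vusers arrive
    print(f"vuser {n} fired at {time.strftime('%X')}")

threads = [threading.Thread(target=vuser, args=(i,)) for i in range(VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```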
  46. Performance TEST Execution. Before execution, ensure: the test system is ready for the test; the test data is in place; the system configuration is as per the plan; a smoke test has been done; the scenario scheduling is correct (ramp up, run for some time, ramp down; schedule according to groups; the number of vusers for each group is correct for that test run); monitors are ready to collect performance data; debugging statements in the script are commented out and logging is done only where necessary; the load-generating machines are ready and the load is divided among them. Log information about each test run; allow the test system to stabilize for each test run; store the results of each test run in a separate folder; check that the test system state is as per the plan after each test run.
  47. Metrics collection. Client-side metrics: response time and throughput, provided by the test tool. Server-side metrics: Perfmon (for Windows); system commands (for Linux/UNIX); test tool monitors; JMX counters (application servers, database servers, web servers); scripts in Perl, shell, etc. for collecting metrics and formatting data.
  48. Result analysis. Response time: lower is better. Throughput: higher is better; throughput increases as the load increases until it reaches a saturation point, beyond which any further load increases the response time exponentially; this is called the knee point. User load vs CPU utilization: should be linear; if 100% CPU usage is reached before the expected user load, the CPU is the bottleneck (increase the number of CPUs, use a faster CPU, check context switches and the processor queue length); if 100% CPU usage is not reached, check for bottlenecks in other system resources.
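The knee point can be spotted programmatically as the load level beyond which throughput stops growing while response time keeps climbing. The (user load, throughput, response time) triples below are assumed sample results.

```python
# Knee-point sketch: find the load level where throughput stops improving.
# (user load, throughput req/s, avg response time s) are assumed sample results.
results = [
    (50, 48, 0.4), (100, 95, 0.5), (150, 140, 0.7),
    (200, 160, 1.1), (250, 165, 2.4), (300, 166, 5.8),
]

knee = None
for (u1, t1, _), (u2, t2, _) in zip(results, results[1:]):
    if (t2 - t1) / t1 < 0.05:            # throughput gain under 5%: saturation
        knee = u1
        break

print(f"Throughput saturates around {knee} users; "
      "beyond this, added load mainly increases response time.")
```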
  49. Result analysis. User load vs disk I/O: check the average disk queue length and the current disk queue length; check % disk time; check disk transfers/sec. Memory utilization vs time: check available memory and the amount of swapping; memory usage should stabilize some time into the test; if memory usage increases with each test run, or with each iteration of an activity for the same number of users, and does not come down, it could indicate a memory leak. Network utilization: current bandwidth, packets/sec, packets lost/sec.
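The memory-leak indication described above amounts to a simple trend check across iterations. The per-iteration samples below are assumed values.

```python
# Memory-leak sketch: flag a run where memory usage climbs with every
# iteration and never comes back down. The samples are assumed values (MB).
memory_mb = [512, 534, 561, 590, 623, 655]     # usage after each identical iteration

always_growing = all(b > a for a, b in zip(memory_mb, memory_mb[1:]))
growth = memory_mb[-1] - memory_mb[0]

if always_growing and growth > 0.1 * memory_mb[0]:
    print(f"Possible memory leak: usage grew {growth} MB over "
          f"{len(memory_mb)} iterations without stabilizing.")
else:
    print("Memory usage stabilized; no leak indication.")
```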
  50. Report creation. The report is the end deliverable of the performance test and is very important from the stakeholders' point of view. It should reflect the performance test objective; provide test results in tabular format and as graphs; include all issues faced during testing; and document all findings and observations (performance testing is close to research). Load tests, and especially stress tests, will sometimes bring out the bad side of an application and make it throw errors; capture all of them for future analysis. Any deviations or workarounds used should be mentioned. Contents of the test report: executive summary (test objective, test results summary, conclusions & recommendations); test objective; test environment setup (software configuration used, including major and minor versions where applicable, and hardware configuration); business flows tested / test scenarios; test run information, including observations; test results; conclusions & recommendations.
  51. For registrations: www.H2KINFOSYS.com / Training@H2KInfosys.com. Thanks, Deepa