Performance Testing Apps
Christine Jackson
Regional Manager, Software Quality Assurance & Testing Practice
April 9, 2003
Today’s Topics
• Industry Quality Metrics
• What is Different?
• Risk Mitigation Approaches
• Case Studies
• Keys to Successful Tests
Industry Metrics
Is there really a Software Quality Problem?
• According to a federal study, buggy software costs the U.S. economy nearly $60 billion a year. (May 2002) www.nist.gov/director/prog-ofc/report02-3.pdf
• “Agilent Technologies last week joined the list of companies blaming balky software for crimping the bottom line. Agilent's claim followed similar charges from Hershey, W.L. Gore & Associates, W.W. Grainger, Whirlpool, Foxmeyer and Nike--all of which in the past few years have cited botched software installations for disrupting operations to the degree their finances suffered.” CNET News.com, August 27, 2002
Industry Metrics
Software is complex and errors propagate throughout the development process.
• 55% of the defects are traceable back to the Requirements & Design phases
• By the way, these statistics come from companies with metrics…
Source: Capers Jones, SPR, Inc.
Industry Metrics
• For most organizations, the typical defect discovery bubble occurs just prior to implementation or, worse, just after implementation. Both incur the greatest costs to correct the defects.
• With early involvement of QA, and the correct processes and procedures, we can drive your discovery bubble earlier in the lifecycle, thereby reducing your cost to correct defects.
• If 55% of defects are introduced in the first two stages of a project, there are early discovery opportunities.
[Chart: cost to correct defects rising from $ to $$$ across the project life cycle, start to finish]
Industry Metrics
Where are the Metrics for Performance & Load Testing?
• Low-volume Beta Test = <10 participants
• High-volume Beta Test = >1,000 participants
[Chart: cost to correct defects rising from $ to $$$ from project start to finish]
Source: Capers Jones, SPR, Inc.
What’s Different?
So What’s Different?
• Mainframe Apps
• System Programmer
• DBA
• Mostly Batch
• Throw more horsepower at it
• Small number of machines
What’s Different?
Today’s Java World is More Complex
• Enterprise Integration
• Java Apps
• Developer / Architect
• DBA
• Distributed Configuration
• Minimal Batch
• Garbage Collection, Heap Size (see the JVM snapshot sketch below)
• Multiple Threads, but sequenced
• Multiple Connections
• Multi-phase Commits
• Routers, Hubs, Switches
• Portals, SSL
• Multiple Browser Flavors
• Scaling up Small Departmental Apps
• Access from anywhere, anytime, any way, and don’t make me wait
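As a hedged illustration of the JVM-level factors listed above (heap size, garbage collection, threads), the sketch below uses the standard java.lang.management API to snapshot those values. The class name and printout format are my own; a real load test would sample these repeatedly during the run rather than once.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class JvmSnapshot {
        public static void main(String[] args) {
            // Heap usage: compare 'used' against 'max' (-Xmx) while load is applied
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("Heap used: %d MB of %d MB max%n",
                    heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));

            // Garbage collection: rising collection time under load often points at heap sizing
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("GC %s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }

            // Live threads: watch for unbounded growth as simulated users ramp up
            System.out.printf("Live threads: %d%n",
                    ManagementFactory.getThreadMXBean().getThreadCount());
        }
    }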
Risk Mitigation
Risk Mitigation?
• Testing is really about risk mitigation
• Key Risks to Mitigate
  • Risk to Brand
  • Risk to Revenue
  • Regulatory or Litigation Risk
  • Risk to Expense
  • Risk to Resume
Risk Mitigation
Risk Mitigation Approaches via Load & Performance Testing
• Workload must be representative
  • Provide adequate functional coverage
  • Realistic implementation & usage patterns
  • Relevant to the goals
• Workload must be measurable
  • Have well-defined metrics
  • Provide a stable measurement period to gather statistics
• Key Metrics (see the sketch after this list)
  • Throughput
  • Response time
  • Number of simultaneous requests (injection rate or # of concurrent users)
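As a rough illustration of how those key metrics fall out of a stable measurement period, here is a minimal sketch. The helper name, the 90th-percentile choice, and the sample numbers are illustrative rather than taken from the slides.

    import java.util.Arrays;

    public class KeyMetrics {
        // responseMillis: per-request response times recorded during a stable measurement period
        // windowSeconds: length of that measurement period
        static void report(long[] responseMillis, double windowSeconds) {
            long[] sorted = responseMillis.clone();
            Arrays.sort(sorted);

            double throughput = sorted.length / windowSeconds;              // requests per second
            long p90 = sorted[(int) Math.ceil(sorted.length * 0.90) - 1];   // approximate 90th-percentile response time
            double average = Arrays.stream(sorted).average().orElse(0);

            System.out.printf("Throughput: %.1f req/s, average response: %.0f ms, 90th pct: %d ms%n",
                    throughput, average, p90);
        }

        public static void main(String[] args) {
            // Ten sample response times (ms) collected over a 5-second window
            report(new long[] {120, 95, 310, 150, 88, 240, 175, 130, 99, 410}, 5.0);
        }
    }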
Risk Mitigation
Risk Mitigation Approaches, cont’d.
• Workload must be repeatable
  • If the test adds or modifies data, the database needs to be restored before each run (see the sketch below)
  • Test cases should be repeatable and controlled
• Environment must be as close and true to production as possible
  • Hardware, Software, & Infrastructure
  • Data
  • Functionality Scenarios
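One common way to make a data-modifying workload repeatable is to reset the affected tables before every run. The sketch below is only an illustration: the JDBC URL, credentials, and the orders/orders_baseline tables are placeholders, and for large schemas a full database restore from backup is usually the better approach.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ResetTestData {
        public static void main(String[] args) throws Exception {
            // Connection details are placeholders for the dedicated performance environment
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@perf-db:1521:PERF", "loadtest", "secret");
                 Statement stmt = con.createStatement()) {

                // Discard whatever the previous run added or modified
                stmt.executeUpdate("TRUNCATE TABLE orders");

                // Reload the known-good baseline copy so every run starts from the same state
                stmt.executeUpdate("INSERT INTO orders SELECT * FROM orders_baseline");
            }
        }
    }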
Risk Mitigation
Methodology
• Establish a baseline: define your performance targets, configuration, and initial settings
• Collect data: use stress tests & performance monitoring tools/techniques
• Identify hurdles: analyze the data collected to identify potential performance impairments
• Identify alternatives: identify, research, prioritize, & select mitigation alternatives
• Apply solution: make your changes; be careful to limit the amount of change introduced
• Repeat the cycle until satisfied
Risk Mitigation
Establish a Baseline
• Performance Requirements are critical
  • Throughput (number of operations per unit of time)
  • Response time (amount of time it takes to process individual transactions)
  • Number of simultaneous requests (injection rate or # of concurrent users)
• Consider two sets of requirements (see the sketch below for how the metrics relate)
  • Launch
  • Optimal
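The three baseline metrics are not independent: Little's Law says the number of requests in flight equals throughput multiplied by response time, which gives a quick sanity check on launch versus optimal targets. A minimal sketch with made-up numbers:

    public class BaselineTargets {
        public static void main(String[] args) {
            // Launch targets (illustrative numbers, not from the slides)
            double launchThroughput = 50.0;   // operations per second
            double launchResponse   = 2.0;    // seconds per operation

            // Little's Law: simultaneous requests = throughput x response time
            double launchConcurrency = launchThroughput * launchResponse;
            System.out.printf("Launch target implies roughly %.0f simultaneous requests%n", launchConcurrency);

            // Optimal targets: higher throughput at a tighter response time
            double optimalConcurrency = 80.0 * 1.0;
            System.out.printf("Optimal target implies roughly %.0f simultaneous requests%n", optimalConcurrency);
        }
    }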
Risk Mitigation
Collect Data
• Generating controlled load
  • Mercury LoadRunner, WinRunner
  • Compuware QALoad, QARun
  • Other tools or home-grown harnesses
• Collection points
  • Server stats
  • WebLogic stats
  • DB queries
  • Network latency
  • Application threads
• How? (one home-grown example follows)
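The commercial tools above do this far more completely, but a home-grown injector can be as simple as a thread pool issuing HTTP requests and recording response times. The sketch below is illustrative only: the target URL, user counts, and class name are placeholders, and a real harness would also track errors and ramp load gradually.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class HomeGrownLoad {
        public static void main(String[] args) throws Exception {
            int virtualUsers = 25;                                   // concurrent requests to inject
            int requestsPerUser = 40;
            String target = "http://perf-env.example.com/app/login"; // placeholder URL

            ConcurrentLinkedQueue<Long> latencies = new ConcurrentLinkedQueue<>();
            ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);

            for (int u = 0; u < virtualUsers; u++) {
                pool.submit(() -> {
                    for (int i = 0; i < requestsPerUser; i++) {
                        long start = System.nanoTime();
                        try {
                            HttpURLConnection con = (HttpURLConnection) new URL(target).openConnection();
                            con.getResponseCode();   // issue the request and wait for the response
                            con.disconnect();
                        } catch (Exception e) {
                            // a real harness would count and report errors separately
                        }
                        latencies.add((System.nanoTime() - start) / 1_000_000); // milliseconds
                    }
                });
            }

            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);

            double average = latencies.stream().mapToLong(Long::longValue).average().orElse(0);
            System.out.printf("%d requests, average response %.0f ms%n", latencies.size(), average);
        }
    }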
Risk Mitigation
Tiny Sample - ApplicationVantage (tool screen capture)
Risk Mitigation
Next 3 Steps: Identify Hurdles, Identify Alternatives, Apply Solution
• Partnership with:
  • Architects
  • Developers
  • DBAs
  • Network Engineers
  • Business SMEs
  • ISP
  • Server Admins
  • System Programmers
  • Test Architect
  • Project Manager
• Avoid a large number of changes at once…think cause & effect
Case Study
Phased Reengineering (Mainframe to Distributed)
• Challenge
  • A large, high-transaction mainframe system was reengineered in phases to a distributed, browser-based system, while maintaining integration with the remaining mainframe applications until they too were reengineered.
• Approach
  • Supplied a team of Java Architect, SW Engineer, Test Architect, & Automation Engineer. Reviewed the design, instrumented & profiled the code, identified performance opportunities, and performance tested the system.
• Outcome & Lessons Learned
  • Recommendations to development included coding techniques, optimization of a problematic query, and knowledge transfer of how & where to profile.
  • Defined, coordinated, facilitated, and conducted all aspects of the performance test. Worked with development to ensure changes were implemented.
  • Significant functional problems, including sequencing issues, were discovered under load that could not be identified during a typical functional test.
  • One server was identified as having borderline capacity during extreme peak loads.
  • New systems must now be performance tested prior to implementation.
  • A dedicated performance environment was established, complete with regression test cases.
Case Study
Slow Portal
• Challenge
  • A new application was coming online, with access through an existing Portal. Significant user load was expected, but the Portal would not handle the growth. Response time was sub-second inside the firewall, but 8 seconds to minutes through the Portal. Current security policies and SSL inhibited significant tool diagnostics.
• Approach
  • Created load with QALoad & QARun. Tested remotely from SPR to simulate outside access. Placed additional players just outside the firewall. Utilized ApplicationVantage, ServerVantage, & mass manual testing. Coordinated between networking, application development, infrastructure, Sun Services, and QA.
• Outcome & Lessons Learned
  • Portal response time now meets performance requirements.
  • Just throwing larger servers at a problem does not automatically fix the problem.
  • Sometimes cultures cannot accept what they do not understand; in this case, the internals of a tool.
  • Many hurdles were identified, including threads, heap size, garbage collection, Java version, and a defective hub creating a significant volume of retries.
  • Even a Portal needs a performance test environment separate from Production.
Case Study
Pre-Purchase Decision
• Challenge
  • The subscriber system needed replacing. Build or buy? A package looked good, but would it scale to support the needs of a newspaper larger than any other?
• Approach
  • Installed a vanilla instance out of the box. Defined critical business success requirements. Utilized LoadRunner & WinRunner to simulate user interaction. Recorded response times, sizing requirements, and other options.
• Outcome & Lessons Learned
  • The package scaled to support the projected subscriber growth for the next 10 years.
  • Performance degraded significantly when free disk space fell below 50%.
  • The business case was adjusted to double the disk space, avoiding a potential implementation project cost overrun.
  • A performance baseline was established for acceptance testing and written into the implementation services contract.
  • A head start on a solid regression test bed.
Keys to Successful Tests
• Begin with the End in Mind
  • Define your test requirements - what is important
  • Define launch criteria & optimal criteria
• Simulate Production as Close as Practical
  • Hardware, Software, & Infrastructure
  • Data
  • Functionality Scenarios
• Conducting the Test
  • Have functioning code
  • Test under load
  • Utilize automation - you’re going to repeat these tests
  • Benchmark
• You Are Not Alone
  • Consider the culture
  • Partner with Architect, Developer, Test Automation, Business SMEs…
  • Compromise carefully - remember the risks being mitigated