
Presentation Transcript


  1. PerfCenter and AutoPerf: Tools and Techniques for Modeling and Measurement of the Performance of Distributed Applications. Varsha Apte, Faculty Member, IIT Bombay.

  2. Example: a WebMail application, ready to be deployed. The application consists of several interacting components: a Web server, an IMAP server, an Ad server, an Authentication server, and an SMTP server. User requests arrive over a WAN.

  3. Several usage scenarios. Example: Login. The request flows from the Browser through the Web, Authentication, IMAP, and SMTP servers: the user submits User/Password, the Web server executes Send_to_auth, the Authentication server executes Verify_passwd, and the Web server then either runs GenerateHtml directly (probability 0.2) or first calls list_message on the IMAP server and then runs GenerateHtml (probability 0.8); the response time spans this entire flow.
Performance goals during deployment:
• User-perceived measures: response time, request drops (minimize)
• System measures: throughput, resource utilizations (maximize)
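Since a scenario branches with fixed probabilities, its mean response time is simply the probability-weighted mean over the branches. A minimal sketch in Java (the 0.2/0.8 weights are from the slide; the per-branch response times are hypothetical placeholders):

    // Sketch: mean response time of a scenario with probabilistic branches.
    // Branch probabilities are from the slide; branch times are hypothetical.
    public class ScenarioMean {
        public static void main(String[] args) {
            double[] prob = {0.2, 0.8};          // branch probabilities
            double[] respTimeMs = {35.0, 80.0};  // hypothetical per-branch times
            double mean = 0.0;
            for (int i = 0; i < prob.length; i++) {
                mean += prob[i] * respTimeMs[i];
            }
            System.out.printf("Scenario mean response time: %.1f ms%n", mean);
        }
    }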

  4. Deploying the application in a data center: determining the host and network architecture.
• How will the network affect performance? (LAN vs. WAN)
• How many machines? What machine configuration (how many CPUs, what speed, how many disks)?
• On which machines should the IMAP server be deployed? The Web server?
• What should the configuration of the Web server be (number of threads, buffer size, ...)?

  5. PerfCenter: a modeling tool. The architect specifies the model through input specifications: machines and devices, network parameters, deployments, software components, and scenarios. A parser generates the underlying queuing network model, which PerfCenter then solves with either an analytical tool (ref. MASCOTS '07) or a simulation tool. Inbuilt functions and constructs aid the datacenter architect in analyzing and modifying the model and in output analysis. PerfCenter code can be downloaded from http://www.cse.iitb.ac.in/perfnet/perfcenter

  6. Capacity analysis for WebMail. [Graph: response-time performance as the number of users increases.] The maximum throughput achieved is 30 requests/sec.

  7. AutoPerf: a capacity measurement and profiling tool, focused on the needs of a performance modeling tool.

  8. Input requirements for modeling tools:
• Usage scenarios
• Deployment details
• Resource consumption details, e.g. "the login transaction takes 20 ms of CPU on the Web server"
These inputs usually require measured data.
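Figures such as the 20 ms above are usually derived from measurements via the service demand law, D = U / X (resource utilization divided by transaction throughput). A minimal sketch with hypothetical measured values:

    // Sketch: per-transaction service demand from the Service Demand Law,
    // D = U / X. The utilization and throughput are hypothetical measurements.
    public class ServiceDemand {
        public static void main(String[] args) {
            double cpuUtilization = 0.40;    // measured CPU busy fraction
            double throughputPerSec = 20.0;  // measured transactions/sec
            double demandSec = cpuUtilization / throughputPerSec;
            System.out.printf("CPU demand per transaction: %.0f ms%n",
                    demandSec * 1000);
        }
    }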

  9. Performance measurement of multi-tier systems has two goals:
• Capacity analysis: maximum number of users supported, transaction rate supported, etc.
• Fine-grained profiling for use in performance models

  10. Measurement for capacity analysis. The test environment consists of clients running load generators (e.g. httperf, Flood, Silk Performer, LoadRunner) that generate requests against the servers running the system under test.

  11. Measurement for capacity analysis (contd.): the answers such measurements provide.

  12. Measurement for modeling: obtaining a resource consumption profile. Clients running load generators send requests over a LAN to the system under test, and per-tier service demands are measured (the example shows demands of 20 ms, 10 ms, 40 ms, and 45 ms across the Web server, App Server 1, and App Server 2). Given such data, models can "extrapolate" and predict performance at volume usage (e.g. PerfCenter).
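One standard way such extrapolation is done (a generic illustration of the idea, not necessarily PerfCenter's own algorithm) is exact Mean Value Analysis of a closed queuing network: given per-tier service demands and a think time, it predicts response time and throughput at any user population:

    // Sketch: exact MVA for a closed queuing network with a think-time
    // (delay) station. Demands and think time below are hypothetical.
    public class Mva {
        public static void main(String[] args) {
            double[] demand = {0.020, 0.040, 0.045}; // per-tier demands (s)
            double thinkTime = 3.0;                  // think time (s)
            int users = 100;
            double[] queueLen = new double[demand.length];
            double respTime = 0, throughput = 0;
            for (int n = 1; n <= users; n++) {
                respTime = 0;
                for (int i = 0; i < demand.length; i++) {
                    respTime += demand[i] * (1 + queueLen[i]); // arrival theorem
                }
                throughput = n / (respTime + thinkTime);
                for (int i = 0; i < demand.length; i++) {
                    queueLen[i] = throughput * demand[i] * (1 + queueLen[i]);
                }
            }
            System.out.printf("N=%d: R=%.3f s, X=%.1f req/s%n",
                    users, respTime, throughput);
        }
    }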

  13. Introducing: AutoPerf. A client (1) generates load on and profiles the servers, (2) collects client statistics, and (3) collects server statistics; it then correlates and displays them.

  14. AutoPerf takes a web transaction workload description and deployment information of the servers as input, and produces fine-grained server-side resource profiles.

  15. Future enhancements to PerfCenter/AutoPerf:
• Various features that make the tools more user-friendly
• Capability to model/measure the performance of virtualized data centers
• Many other minor features
Skills that need to be learned/liked:
• Java programming (both tools are in Java)
• The discipline required to maintain/improve large software
• Working with quantitative data

  16. What is fun about this project?
• Working on something that will (should) get used
• A new focus on energy and virtualization, both exciting fields
• Many, many algorithmic challenges
• Running simulation/measurement in efficient ways

  17. Work to be done by an RA:
• Code maintenance
• Feature enhancement
• Write paper(s) for publication, go to conferences, present them
• Create web pages and user groups, answer questions
• Help popularize the tool: demos, etc.
• Pick a challenging problem within this domain as an M.Tech. project, write paper(s), go to conferences!

  18. Thank you / Questions. This research was sponsored by MHRD, Intel Corp., Tata Consultancy Services, and an IBM faculty award (2007-2009). PerfCenter code can be downloaded from http://www.cse.iitb.ac.in/perfnet/perfcenter

  19. Simulator: the Queue class. All resources (devices, soft servers, and network links) are abstracted as queues, and both open and closed arrivals are supported: a SoftServer, Device, or NetworkLink services a request by obtaining a Queue class instance. On request arrival, if an instance is free the request is serviced; otherwise, if the buffer is full the request is dropped, and if not it is enqueued. The discrete-event simulator is implemented in Java.
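A minimal sketch of the queue abstraction described above, with hypothetical names rather than PerfCenter's actual classes:

    // Sketch of the queue abstraction: an arriving request is dropped if all
    // service instances are busy and the buffer is full; otherwise it is
    // buffered or handed to a free instance. Names are hypothetical.
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class ResourceQueue {
        private final int instances;    // e.g. number of CPUs or threads
        private final int bufferSize;
        private int busy = 0;
        private final Deque<Object> buffer = new ArrayDeque<>();

        public ResourceQueue(int instances, int bufferSize) {
            this.instances = instances;
            this.bufferSize = bufferSize;
        }

        /** Returns false if the request is dropped (buffer full). */
        public boolean enqueue(Object request) {
            if (busy < instances) {
                busy++;                  // a free instance starts service
                return true;
            }
            if (buffer.size() >= bufferSize) {
                return false;            // buffer full: drop
            }
            buffer.addLast(request);     // wait in arrival order
            return true;
        }

        /** Called when an instance completes service. */
        public void serviceCompleted() {
            if (!buffer.isEmpty()) {
                buffer.removeFirst();    // next waiting request begins service
            } else {
                busy--;
            }
        }
    }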

  20. Simulator: synchronous calls. When the User calls Server1, which synchronously calls Server2, which in turn calls Server3 and then Server4, PerfCenter maintains the call stack: the threads of the upstream servers (Server1-t, Server2-t, Server3-t) are held waiting while the downstream call executes, and only the thread serving the current call is busy.
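The key effect, that an upstream thread stays held while it waits on a downstream call, can be illustrated with a small sketch in which hypothetical servers are modeled as fixed-size thread pools (semaphores):

    // Sketch: in a synchronous call chain, the upstream server's thread
    // stays held while the downstream server works. Servers here are
    // hypothetical, modeled as semaphores sized to their thread pools.
    import java.util.concurrent.Semaphore;

    public class SyncCallChain {
        static final Semaphore server1Threads = new Semaphore(10);
        static final Semaphore server2Threads = new Semaphore(5);

        static void callServer1() throws InterruptedException {
            server1Threads.acquire();     // Server1 thread becomes busy
            try {
                callServer2();            // held (waiting) for the downstream call
            } finally {
                server1Threads.release(); // freed only after Server2 returns
            }
        }

        static void callServer2() throws InterruptedException {
            server2Threads.acquire();
            try {
                Thread.sleep(40);         // hypothetical 40 ms of service
            } finally {
                server2Threads.release();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            callServer1();
            System.out.println("Request completed through the synchronous chain.");
        }
    }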

  21. Simulator parameters. PerfCenter simulates both open and closed systems.
Load parameters (open arrivals vs. closed users):

    loadparms
      arate 10
    end

    loadparms
      noofusers 10
      thinktime exp(3)
    end

Model parameters:

    modelparms
      method simulation
      type closed
      noofrequest 10000
      confint false
      replicationno 1
    end

The independent replication method is used for output analysis.
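With independent replications, each replication yields one sample mean, and a confidence interval follows from the mean and standard deviation of those replication means. A minimal sketch (hypothetical replication data; 2.776 is the two-sided 95% Student-t quantile for 4 degrees of freedom, i.e. 5 replications):

    // Sketch: confidence interval via the independent replication method.
    // Replication means are hypothetical; 2.776 = t(0.975, 4 d.o.f.).
    public class ReplicationCI {
        public static void main(String[] args) {
            double[] repMeans = {0.92, 1.05, 0.88, 0.97, 1.01}; // resp. time (s)
            int n = repMeans.length;
            double mean = 0;
            for (double m : repMeans) mean += m;
            mean /= n;
            double var = 0;
            for (double m : repMeans) var += (m - mean) * (m - mean);
            var /= (n - 1);
            double halfWidth = 2.776 * Math.sqrt(var / n);
            System.out.printf("Response time: %.3f +/- %.3f s (95%% CI)%n",
                    mean, halfWidth);
        }
    }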

  22. Deployments dply3 and dply4

  23. Deployment summary

  24. Simulator: dynamic loading of scheduling policies. Policy implementations (FCFS.class, LCFS.class, RR.class) live under /Queue/SchedulingStartergy/ and are loaded at run time according to a device's schedp parameter:

    host host[2]
      cpu count 1
      cpu schedp fcfs
      cpu buffer 9999
    end

    host host[2]
      cpu count 1
      cpu schedp rr
      cpu buffer 9999
    end
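Dynamic loading of this kind can be done with Class.forName against a common policy interface. A minimal, self-contained sketch; the interface, class, and method names are hypothetical, not PerfCenter's actual ones:

    // Sketch: choosing a scheduling-policy class by name at run time, as the
    // schedp parameter suggests. All names here are hypothetical.
    import java.util.List;

    interface SchedulingPolicy {
        int selectNext(List<Integer> waiting); // index of next request to serve
    }

    class FCFS implements SchedulingPolicy {
        public int selectNext(List<Integer> waiting) { return 0; } // queue head
    }

    class LCFS implements SchedulingPolicy {
        public int selectNext(List<Integer> waiting) { return waiting.size() - 1; }
    }

    public class PolicyLoader {
        static SchedulingPolicy load(String schedp) throws Exception {
            String cls = schedp.toUpperCase();  // "fcfs" -> FCFS.class, etc.
            return (SchedulingPolicy) Class.forName(cls)
                    .getDeclaredConstructor().newInstance();
        }

        public static void main(String[] args) throws Exception {
            SchedulingPolicy p = load("fcfs");
            System.out.println("Next: index " + p.selectNext(List.of(7, 8, 9)));
        }
    }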

  25. Using PerfCenter for "what-if" analysis: scaling up the e-mail application to support requests arriving at a rate of 2000 req/sec.
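A quick sanity check behind such scaling follows from the utilization law, U = X * D: at 2000 req/sec, each tier needs at least enough CPUs to absorb the offered utilization. A minimal sketch with hypothetical per-tier demands:

    // Sketch: CPUs needed per tier at a target arrival rate, using the
    // utilization law U = X * D. Tier names and demands are hypothetical.
    public class CapacityCheck {
        public static void main(String[] args) {
            double rate = 2000.0;                       // target req/sec
            double[] demandSec = {0.010, 0.008, 0.004}; // per-tier CPU demand (s)
            String[] tier = {"web", "imap", "auth"};
            for (int i = 0; i < demandSec.length; i++) {
                double offered = rate * demandSec[i];   // CPU-seconds per second
                int cpus = (int) Math.ceil(offered / 0.7); // keep CPUs < ~70% busy
                System.out.printf("%s: offered load %.1f, need >= %d CPUs%n",
                        tier[i], offered, cpus);
            }
        }
    }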

  26. Step 1:

    deploy web H4
    set H1:cpu:count 32
    set H4:cpu:count 32
    set H3:cpu:count 12
    set H2:cpu:count 12
    cpuspeedupfactor1=2
    cpuspeedupfactor3=4
    cpuspeedupfactor4=2
    diskspeedupfactor2=20
    diskspeedupfactor3=80

Step 2:

    host H5
      cpu count 32
      cpu buffer 99999
      cpu schedP fcfs
      cpu speedup 2
    end
    deploy web H5
    set H2:cpu:count 32
    set H3:cpu:count 18

  27. Summary

  28. Identifying Network Link Capacity

  29. Limitations of standard tools:
• They do not perform automated capacity analysis:
  - the range of load levels must be specified
  - the duration of load generation must be specified
  - the steps in which to vary the load must be specified
• They report only the throughput at a given load level, not the maximum achievable throughput and the saturation load level.
• They should accept a richer workload description as input (a Customer Behavior Model Graph, CBMG) rather than just the percentage of virtual users requesting each type of transaction.
• They do not perform automated fine-grained server-side resource profiling.
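A CBMG describes the workload as transition probabilities between transaction types; per-transaction visit ratios then follow by solving V_j = sum_i V_i * p(i,j) with one visit to the entry state. A minimal sketch for a hypothetical three-state CBMG, solved by fixed-point iteration:

    // Sketch: visit ratios from a hypothetical CBMG by fixed-point iteration.
    // States: 0 = Login (entry), 1 = ListMail, 2 = SendMail; missing
    // probability mass means the session exits.
    public class CbmgVisits {
        public static void main(String[] args) {
            double[][] p = {
                {0.0, 0.8, 0.2},   // from Login
                {0.0, 0.3, 0.4},   // from ListMail (0.3 exit)
                {0.0, 0.5, 0.1},   // from SendMail (0.4 exit)
            };
            double[] v = {1.0, 0.0, 0.0};      // one visit to the entry state
            for (int iter = 0; iter < 200; iter++) {
                double[] next = {1.0, 0.0, 0.0};
                for (int i = 0; i < 3; i++)
                    for (int j = 1; j < 3; j++)
                        next[j] += v[i] * p[i][j];
                v = next;
            }
            System.out.printf("Visit ratios: login=%.2f list=%.2f send=%.2f%n",
                    v[0], v[1], v[2]);
        }
    }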
