
Cloudstone: Multi-platform, Multi-language Performance Measurement for Web 2.0






Presentation Transcript


1. Cloudstone: Multi-platform, Multi-language Performance Measurement for Web 2.0
Will Sobel, Shanti Subramanyam*, Akara Sucharitakul*, Jimmy Nguyen, Hubert Wong, Arthur Klepchukov, Sheetal Patil*, Armando Fox, David Patterson
UC Berkeley RAD Lab & *Sun Microsystems

2. “For better or worse, benchmarks shape a field.” — David Patterson
Goal of this talk: solicit feedback on a proposed methodology for how and what to measure for interactive apps in cloud-like environments

3. Toward Web 2.0 & Cloud Computing Benchmarks
Goals:
- reproducible & consistent
- systematic exploration of the design space of deployment & optimization choices
- end-to-end, full stack (vs. microbenchmark)
Approach:
- realistic Web 2.0 app
- flexible, realistic workload generator
- suggested metrics & reporting methodology

    4. Olio social events app

5. Cloudstone
- 100% open source (Apache Foundation)
- Olio: Rails & PHP versions of a social events app
  - events, users, comments, tag clouds, AJAX
  - lots of many-to-many relationships
  - representative “sound practices” on each stack
- Faban: Markov-based workload generator
  - per-operation latency SLAs
  - time-varying workloads, distributed operation
  - instrumentation: meet the 90th or 99th percentile for all per-op SLAs over all 5-minute windows (see the sketch below)
- Big caveat: no “apples to apples” comparison
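That windowed SLA rule is the core pass/fail criterion: a run counts only if every operation type meets its latency SLA at the stated percentile in every 5-minute window. A minimal sketch of the check in Python; the function names and data layout are illustrative assumptions, not Faban's actual API:

```python
# Illustrative sketch of the Cloudstone pass/fail rule: every operation
# type must meet its latency SLA at the given percentile (90th or 99th)
# in every 5-minute window of the run. Names and data layout are
# hypothetical, not Faban's actual API.
import math
from collections import defaultdict

WINDOW_SECS = 300  # 5-minute windows

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100.0 * len(ordered)))
    return ordered[rank - 1]

def run_passes(requests, slas, p=90):
    """requests: iterable of (timestamp_secs, op_name, latency_secs).
    slas: dict mapping op_name -> max allowed latency at percentile p.
    Returns True iff every op meets its SLA in every 5-minute window."""
    # Bucket latencies by (window index, operation type).
    windows = defaultdict(lambda: defaultdict(list))
    for ts, op, latency in requests:
        windows[int(ts // WINDOW_SECS)][op].append(latency)
    # One miss in any window for any operation fails the whole run.
    for window_ops in windows.values():
        for op, latencies in window_ops.items():
            if percentile(latencies, p) > slas[op]:
                return False
    return True
```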

6. Concurrent Users/$/Month
- Max # of concurrent users per dollar per month, while meeting the 90% & 99% SLAs for response time (worked sketch below)
- Report at 1, 12, and 36 months: captures cap-ex/depreciation, long-term contracts, etc.
- Report log10(size of user base): avoids an unfair advantage for very large sites
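A worked sketch of the figure of merit; the numbers are placeholders for illustration, not measurements from the paper:

```python
# Hypothetical illustration of the Cloudstone metric: max concurrent
# users sustained under both SLAs, divided by the amortized monthly
# cost. Placeholder numbers, not results from the paper.
import math

def users_per_dollar_month(max_users, monthly_cost_dollars):
    return max_users / monthly_cost_dollars

# e.g. 1,000 concurrent users on a deployment costing $600/month:
print(users_per_dollar_month(1000, 600))  # ~1.67 users/$/month
print(math.log10(1000))                   # reported user-base size: 3.0
```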

7. Trial on EC2
- 1 EC2 “compute unit” ~ 1 GHz Opteron
- M1.XLarge: 15 GB RAM, 8 CU (4 cores)
- C1.XLarge: 7 GB RAM, 20 CU (8 faster cores)
- Both cost $0.80 per instance-hour (cost sketch below)
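At that price, the monthly cost term of the metric for a single instance works out as below. This is a simple sketch assuming a 30-day month of continuous on-demand operation; long-term contracts are what the 12- and 36-month reporting horizons would capture:

```python
# Monthly cost of one EC2 instance at $0.80 per instance-hour,
# assuming a 30-day month of continuous operation (on-demand only).
HOURLY_RATE = 0.80
HOURS_PER_MONTH = 24 * 30  # 720 hours

print(HOURLY_RATE * HOURS_PER_MONTH)  # $576.00 per instance per month
```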

    8. EC2: users/$/month

    9. EC2: max concurrent users

10. Conclusions
- Goal: reproducibly/systematically explore the degrees of freedom of cloud deployment with a realistic Web 2.0 app & workload
- Non-goals:
  - “apples to apples” comparisons (not clear if meaningful)
  - “best/fastest” app implementations
- Desired:
  - feedback on the benchmark metric of merit
  - feedback on the suggested benchmarking procedure (details in paper)
  - measurements on other cloud (& non-cloud) platforms

11. Download/Contribute
Help us measure other platforms...
- Ruby interpreters (Ruby 1.9, JRuby, ruby.NET)
- MERB
- Python/Django
- Grails, J2EE
- Full caching for PHP
AMIs, more details, etc.: http://radlab.cs.berkeley.edu/wiki/Projects/Cloudstone
Olio and Faban are available now; other components are coming soon
