Generic Environment for Full Automation of Benchmarking

Presentation Transcript


  1. Generic Environment for Full Automation of Benchmarking
     Tomáš Kalibera, Lubomír Bulej, Petr Tůma
     SOQUA 2004, Erfurt, Germany

  2. History: Middleware Benchmarking Projects
     • Vendor testing
       • Borland, IONA, MLC Systeme
     • Open source testing
       • omniORB, TAO, OpenORB, …
     • Open CORBA Benchmarking
       • anyone can upload their own results

  3. Motivation: Regression Testing for Performance
     • Regression testing
       • integrated into the development environment
       • tests performed regularly
     • Correctness tests
       • commonly used
       • detect bugs
     • Performance tests
       • in research stage
       • detect performance regressions

  4. Regression Benchmarking
     • Detection of performance regressions
       • benchmarking performance of consecutive versions of software
       • automatic comparison of results
     • Issues
       • automatic comparison of results
         • fluctuations in results
         • results format, different level of detail
       • automatic running of benchmarks
         • monitoring, failure resolution
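A minimal sketch of what "automatic comparison of results" could look like, assuming the fluctuations are handled with a simple statistical test. This is an illustration only, not the analysis the authors actually use (their repeated-results analysis is described in the publications on slide 13).

```java
// Hypothetical sketch (not the authors' actual analysis): flag a regression when the
// mean response time of the new version exceeds the old one by more than the observed
// fluctuations allow, using Welch's t statistic against a normal-approximation
// critical value (reasonable only for large samples).
public class RegressionCheck {

    static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    static double variance(double[] xs, double m) {
        double s = 0;
        for (double x : xs) s += (x - m) * (x - m);
        return s / (xs.length - 1);
    }

    /** Returns true if 'next' looks significantly slower than 'prev'. */
    static boolean isRegression(double[] prev, double[] next) {
        double mPrev = mean(prev), mNext = mean(next);
        double se = Math.sqrt(variance(prev, mPrev) / prev.length
                            + variance(next, mNext) / next.length);
        double t = (mNext - mPrev) / se;   // Welch's t statistic
        return t > 1.96;                   // ~5% one-sided threshold, large-sample approximation
    }

    public static void main(String[] args) {
        double[] v1 = { 10.1, 10.3, 9.9, 10.2, 10.0 };   // response times, version N
        double[] v2 = { 10.9, 11.1, 10.8, 11.0, 11.2 };  // response times, version N+1
        System.out.println("regression detected: " + isRegression(v1, v2));
    }
}
```

A fixed critical value is a crude stand-in for a proper treatment of the fluctuations, which is exactly the issue the slide points out.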

  5. Steps of Running a Benchmark
     [flow diagram of the steps:]
     • download application, download benchmark
     • build application, build benchmark
     • deploy
     • execute (monitored)
     • collect results into the result repository
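The steps on this slide could be driven as a simple ordered pipeline. The sketch below is purely illustrative: the stage names follow the slide, the stage bodies are placeholders.

```java
// Illustrative only: the slide's steps expressed as an ordered pipeline of named stages.
import java.util.LinkedHashMap;
import java.util.Map;

public class BenchmarkPipeline {
    public static void main(String[] args) {
        Map<String, Runnable> steps = new LinkedHashMap<>();
        steps.put("download application", () -> {});
        steps.put("download benchmark",   () -> {});
        steps.put("build application",    () -> {});
        steps.put("build benchmark",      () -> {});
        steps.put("deploy",               () -> {});
        steps.put("execute (monitored)",  () -> {});
        steps.put("collect results into result repository", () -> {});

        for (Map.Entry<String, Runnable> step : steps.entrySet()) {
            System.out.println("running step: " + step.getKey());
            step.getValue().run();   // a real environment would monitor and handle failures here
        }
    }
}
```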

  6. Generic Benchmarking Environment
     • Automated processing
       • monitoring, handling of failures
       • management tools
     • Common form of results
       • allow benchmark-independent analysis
       • raw data, system configuration
     • Flexibility
       • benchmark and analysis independence
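To make the "common form of results" idea concrete, here is a minimal sketch of a benchmark-independent result record holding raw data plus the system configuration. The field names are assumptions, not the environment's actual result schema.

```java
// Sketch of a common result form: raw samples plus the configuration they were
// measured on, independent of the benchmark that produced them.
import java.util.List;
import java.util.Map;

public class BenchmarkResult {
    final String benchmarkName;                     // e.g. "RUBiS", "Xampler"
    final String softwareVersion;                   // version of the software under test
    final Map<String, String> systemConfiguration;  // CPU, RAM, OS, JVM, ...
    final List<Double> rawSamples;                  // raw measurements, no pre-aggregation

    BenchmarkResult(String benchmarkName, String softwareVersion,
                    Map<String, String> systemConfiguration, List<Double> rawSamples) {
        this.benchmarkName = benchmarkName;
        this.softwareVersion = softwareVersion;
        this.systemConfiguration = systemConfiguration;
        this.rawSamples = rawSamples;
    }

    /** A benchmark-independent analysis only needs the raw samples. */
    double meanSample() {
        return rawSamples.stream().mapToDouble(Double::doubleValue).average().orElse(Double.NaN);
    }

    public static void main(String[] args) {
        BenchmarkResult r = new BenchmarkResult(
                "RUBiS", "nightly-build",
                Map.of("cpu", "x86", "ram", "1 GB", "os", "Linux"),
                List.of(12.3, 11.9, 12.1));
        System.out.println(r.benchmarkName + " mean: " + r.meanSample());
    }
}
```

Keeping raw data rather than pre-aggregated statistics is what lets the analysis stay independent of the benchmark.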

  7. Automatic Downloading and Building
     • Download methods
       • cvs checkout, http, ftp
     • Build methods
       • Ant, make, scripts
       • support different platforms
     • Software repository
       • storage for sources, binaries
       • annotated for future reference
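One way to keep download and build methods pluggable is to hide each behind a small interface and drive external tools through it. The sketch below uses cvs and Ant only as examples of the methods listed on the slide; the module and directory names are hypothetical.

```java
// Sketch of pluggable download and build methods; the concrete commands are examples,
// not the environment's real implementation.
import java.io.IOException;

public class DownloadAndBuild {

    interface DownloadMethod { void download(String source, String targetDir) throws IOException, InterruptedException; }
    interface BuildMethod    { void build(String sourceDir) throws IOException, InterruptedException; }

    static void run(String workDir, String... command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command)
                .directory(new java.io.File(workDir))
                .inheritIO()
                .start();
        if (p.waitFor() != 0) throw new IOException("command failed: " + String.join(" ", command));
    }

    public static void main(String[] args) throws Exception {
        DownloadMethod cvs = (module, dir) -> run(dir, "cvs", "checkout", module);  // cvs checkout
        BuildMethod ant    = dir -> run(dir, "ant", "build");                       // Ant build

        cvs.download("omniORB", "/tmp/build");        // hypothetical module and directory
        ant.build("/tmp/build/omniORB");
    }
}
```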

  8. Automatic Deployment
     • Reproducibility
     • Platform dependencies
       • CPU type
       • operating system
     • Resource requirements
       • CPU frequency
       • RAM
     • Software requirements
       • database server
       • web server
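The platform, resource and software requirements could be checked against a candidate host before deployment, roughly as in the following sketch; the field names and values are illustrative assumptions.

```java
// Sketch of deployment matching: a benchmark declares platform, resource and software
// requirements and a host is selected only if it satisfies all of them.
import java.util.Set;

public class DeploymentMatch {

    record Requirements(String cpuType, String os, int minCpuMhz, int minRamMb, Set<String> software) {}
    record Host(String name, String cpuType, String os, int cpuMhz, int ramMb, Set<String> software) {}

    static boolean satisfies(Host h, Requirements r) {
        return h.cpuType().equals(r.cpuType())
                && h.os().equals(r.os())
                && h.cpuMhz() >= r.minCpuMhz()
                && h.ramMb() >= r.minRamMb()
                && h.software().containsAll(r.software());
    }

    public static void main(String[] args) {
        Requirements req = new Requirements("x86", "Linux", 1000, 512, Set.of("mysql", "jonas"));
        Host server = new Host("server-host", "x86", "Linux", 2400, 1024, Set.of("mysql", "jonas", "apache"));
        System.out.println("deploy on " + server.name() + ": " + satisfies(server, req));
    }
}
```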

  9. Automatic Execution
     • Multiple applications
       • run in correct order
       • wait for initialization
     • Monitoring
       • detect crashes
       • detect deadlocks
       • but do not distort the results!
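The "do not distort the results" constraint suggests monitoring that stays out of the benchmark's way, for example coarse polling of the benchmark process as sketched below; the timeout value and the placeholder command are assumptions.

```java
// Sketch of low-overhead monitoring: poll the benchmark process at a low rate so the
// monitor itself does not distort the measurements; an exited process means a crash,
// and no completion before a timeout is treated as a possible deadlock.
public class ExecutionMonitor {

    public static void main(String[] args) throws Exception {
        Process benchmark = new ProcessBuilder("sleep", "5").start();  // stands in for the real benchmark
        long deadlockTimeoutMs = 60_000;                               // illustrative timeout
        long start = System.currentTimeMillis();

        while (benchmark.isAlive()) {
            if (System.currentTimeMillis() - start > deadlockTimeoutMs) {
                System.err.println("no completion before timeout: possible deadlock, killing benchmark");
                benchmark.destroyForcibly();
                return;
            }
            Thread.sleep(1_000);   // coarse polling keeps the monitoring overhead negligible
        }

        int exit = benchmark.waitFor();
        if (exit != 0) System.err.println("benchmark crashed with exit code " + exit);
        else System.out.println("benchmark finished normally");
    }
}
```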

  10. Architecture of Benchmarking Environment
      • Task processing system
        • deployment, execution, monitoring of tasks
        • task scheduler – dependencies on other tasks, checkpoints
        • jobs, services
      • Environment tasks
        • result repository, software repository
      • Benchmarking tasks
        • benchmarks, compilations, required apps
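A toy version of the checkpoint mechanism behind the task processing system: a task declares dependencies on checkpoints reached by other tasks ("up" for services, "done" for jobs, matching the wait_for_up / wait_for_done edges on the next slide) and is started only once they are all reached. The class and method names are assumptions, not the environment's API.

```java
// Sketch of checkpoint-driven task scheduling; cycles and unsatisfiable dependencies
// are not handled in this toy version.
import java.util.*;

public class TaskScheduler {

    record Dependency(String task, String checkpoint) {}
    record Task(String name, boolean isService, List<Dependency> dependencies) {}

    static void runAll(List<Task> tasks) {
        Map<String, String> reached = new HashMap<>();   // task name -> checkpoint reached
        Deque<Task> pending = new ArrayDeque<>(tasks);
        while (!pending.isEmpty()) {
            Task t = pending.poll();
            boolean ready = t.dependencies().stream()
                    .allMatch(d -> d.checkpoint().equals(reached.get(d.task())));
            if (!ready) { pending.add(t); continue; }    // retry after other tasks progress
            System.out.println("starting " + t.name());
            reached.put(t.name(), t.isService() ? "up" : "done");  // pretend the task ran
        }
    }

    public static void main(String[] args) {
        runAll(List.of(
            new Task("Database", true, List.of()),
            new Task("Fill Database", false, List.of(new Dependency("Database", "up"))),
            new Task("EJB Server", true, List.of(new Dependency("Database", "up")))));
    }
}
```

A real scheduler would run tasks asynchronously, distribute them across hosts and detect unsatisfiable dependencies; the sketch only shows the checkpoint-driven ordering.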

  11. Example: RUBiS Benchmark
      [diagram: the RUBiS benchmark deployed across a CONTROL HOST, a SERVER HOST and a
      CLIENT HOST, each running its own task processing system; wait_for_up and
      wait_for_done dependencies connect the tasks]
      • Services: Result Rep. (up), Resource Mgmt. (up), Database (up, process: MySQL Server),
        EJB Server (up, process: Jonas EJB Server)
      • Jobs: Compile Beans (done), Fill Database (done), Deploy Beans (running),
        Client Emulator (prepared, process: Client Emulator)

  12. Conclusion & Future Work
      • Generic benchmarking environment
        • automatic running of (existing) benchmarks
        • common form of results, result repository
      • Current status
        • early implementation phase
      • Future work
        • support for Xampler, RUBiS benchmarks
        • automatic detection of regressions
        • regression benchmarking of CORBA, EJB

  13. Publications
      • Bulej, L., Kalibera, T., Tůma, P.: Repeated Results Analysis for Middleware Regression Benchmarking. Accepted for publication in the Special Issue on Performance Modeling and Evaluation of High-Performance Parallel and Distributed Systems, Performance Evaluation: An International Journal, Elsevier.
      • Bulej, L., Kalibera, T., Tůma, P.: Regression Benchmarking with Simple Middleware Benchmarks. In proceedings of IPCCC 2004, International Workshop on Middleware Performance, Phoenix, AZ, USA.
      • Buble, A., Bulej, L., Tůma, P.: CORBA Benchmarking: A Course With Hidden Obstacles. In proceedings of the IPDPS Workshop on Performance Modeling, Evaluation and Optimization of Parallel and Distributed Systems (PMEO-PDS 2003), Nice, France.
      • Tůma, P., Buble, A.: Open CORBA Benchmarking. In proceedings of the 2001 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS 2001), published by SCS, Orlando, FL, USA.
