Efficient Application Placement in a Dynamic Hosting Platform • Zakaria Al-Qudah, Hussein A. Alzoubi, Mark Allman, Michael Rabinovich, Vincenzo Liberatore • Case Western Reserve University • Proceedings of the 18th International Conference on World Wide Web
Outline • Introduction • Motivation • Architecture overview • Placement Mechanisms • Experiments • Related work • Future work and conclusion • Comments
Introduction • Web hosting has become a $4 billion industry [27] and a crucial part of the Web infrastructure. • The current prevalent practice: • Allocating (or providing facilities for) fixed amounts of resources to customers. • However, in the near future: • Dynamic hosting. • A hosting platform maintains a shared pool of resources. • It reassigns these resources among applications to absorb demand spikes for different applications at different times. • Advantages: • Better resource utilization • Lower cost through economies of scale. [27] Yankee Group's SMB survey suite offers the most comprehensive view of the SMB market. CRM Today. http://www.crm2day.com/content/t6_librarynews_1.php?news_id=116800, 2005.
Approaches to implementing dynamic hosting • Most research on dynamic hosting focuses on: • Application placement algorithms: • Dynamic placement of a variable number of application instances on physical servers. • Deciding on the number and location of the application instances. • This paper focuses on: • An efficient way to enact these decisions once they are made, i.e., the agility of the hosting platform. • E.g., in the case of a flash crowd, if the platform decides to add more resources to an overloaded application, this decision should be enacted as fast as possible. • The authors call this the orthogonal question of how placement decisions are carried out.
Motivation • Motivation: • Application placement takes on the order of minutes and, as the authors show, is resource intensive. • Existing algorithms therefore try to minimize the rate of application placements to limit their effects. • Goal: • Dramatically reduce these costs. • Enable a new class of more agile application placement algorithms.
Architecture Overview (1) • Targets Web applications: • Three-tier applications: • A Web tier, an application tier, and a back-end database tier. • Resource allocation methods in shared hosting platforms: • One physical machine → only one application • A central controller may switch a machine from one application to another. • One physical machine → many applications • Resources are shared at the level of OS processes, or • Each application instance runs inside its own virtual environment and shares physical machines. • Basic execution unit: • An application server (e.g., an Apache instance). • Abbreviated: AppServ or application.
Placement Mechanisms • Introduces and compares several placement mechanisms, including the new one the authors propose.
Placement Mechanisms (1): Regular Startup • Regular Startup: • Mechanism: • Pre-deploy all applications everywhere, and simply start and stop AppServ instances as needed (a minimal sketch follows below). • Disadvantages: • High startup time • High resource consumption • These costs motivate the search for more efficient application placement techniques.
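Regular startup amounts to launching and terminating a pre-deployed AppServ process. The sketch below is a minimal Python illustration of that idea; the JBoss path and the probe URL are placeholders, not details taken from the paper.

```python
import subprocess
import time
import urllib.request

# Hypothetical path to a pre-deployed JBoss installation and probe endpoint.
JBOSS_RUN = "/opt/jboss/bin/run.sh"
PROBE_URL = "http://localhost:8080/"

def regular_start(timeout=300):
    """Launch a pre-deployed AppServ and wait until it answers a probe request."""
    proc = subprocess.Popen([JBOSS_RUN],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib.request.urlopen(PROBE_URL, timeout=2)
            return proc                  # AppServ is up and serving requests
        except OSError:
            time.sleep(1)                # not ready yet; poll again
    proc.terminate()
    raise RuntimeError("AppServ did not start within the timeout")

def regular_stop(proc):
    """Stop the AppServ process entirely (the 'regular' mechanism)."""
    proc.terminate()
    proc.wait()
```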
Placement Mechanisms (2): Run-Everywhere • Run-Everywhere: • A straightforward alternative to regular startup. • Mechanism: • Run an instance of every application on every available server machine. • Simply direct requests only to the application instances designated as active. • Disadvantage: • Overhead: • The OS cannot distinguish "active" from "idle" applications. • E.g., AppServs perform regular housekeeping that makes them appear "active" to the OS.
Placement Mechanisms (3): Suspend/Resume • Suspend/Resume: • An improvement over Run-Everywhere. • Mechanism: • Allows the platform to explicitly indicate which AppServs should be active or idle. • The local controller responds to the Start() and Stop() commands by issuing SIGCONT and SIGSTOP signals, respectively, to the designated process (see the sketch below). • Issues: • How to make an "idle" application consume minimal resources. • How to minimize the time from suspend to resume.
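A minimal sketch of how a local controller might map Start()/Stop() onto these signals, assuming the AppServ PIDs are already known; the function names and the PID table are illustrative, not taken from the paper.

```python
import os
import signal

# Hypothetical map from application name to the PID of its AppServ process.
appserv_pids = {"tpcw": 1234, "cpu-bound": 1235}   # placeholder PIDs

def stop(app):
    """Suspend the AppServ: it keeps its memory but receives no CPU time."""
    os.kill(appserv_pids[app], signal.SIGSTOP)

def start(app):
    """Resume the AppServ so it can serve requests again."""
    os.kill(appserv_pids[app], signal.SIGCONT)
```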
Placement Mechanisms (4): Enhanced Suspend/Resume • Enhanced Suspend/Resume: • Enhances the agility of Suspend/Resume. • Issues and approaches: • When paging in the memory pages of a resumed process, the operating system brings them into main memory on demand • (i.e., only when the resumed process generates a page fault). • This makes resume both CPU-bound (page-fault processing) and I/O-bound (many scattered disk reads). • Solution: Pre-fetch the memory pages of the process to be resumed in bulk before waking it up with the SIGCONT signal. • Requires modifying the OS kernel.
Placement Mechanisms (4): Enhanced Suspend/Resume (cont'd) • Issues and approaches (cont'd): • A resumed process will be delayed if the operating system needs to free memory to accommodate the resumed AppServ. • Suspended AppServs still consume memory. • Solution: Free this memory by pre-purging entire suspended AppServs from memory to disk, on the assumption that a recently suspended AppServ will not be activated again for a long (in computing terms) time. • Bulk pre-purging is likely to place all pages of the AppServ close to each other on disk. • Activation should then be faster because of fewer disk head movements.
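The paper implements pre-fetching and pre-purging inside the kernel. Purely as an illustration of the pre-fetch idea, the hedged user-space approximation below walks /proc/&lt;pid&gt;/maps and reads each private writable mapping of the suspended process with process_vm_readv, which forces the kernel to page that memory back in before SIGCONT is sent. The chunk size and the mapping filter are assumptions; this is not the authors' implementation.

```python
import ctypes
import os
import signal

libc = ctypes.CDLL("libc.so.6", use_errno=True)

class IoVec(ctypes.Structure):
    _fields_ = [("iov_base", ctypes.c_void_p), ("iov_len", ctypes.c_size_t)]

libc.process_vm_readv.argtypes = [ctypes.c_int,
                                  ctypes.POINTER(IoVec), ctypes.c_ulong,
                                  ctypes.POINTER(IoVec), ctypes.c_ulong,
                                  ctypes.c_ulong]
libc.process_vm_readv.restype = ctypes.c_ssize_t

CHUNK = 1 << 20  # copy remote memory in 1 MiB pieces

def prefetch_and_resume(pid):
    """Fault a suspended process's private writable mappings back into RAM,
    then wake the process with SIGCONT (a user-space stand-in for the paper's
    in-kernel bulk prefetch)."""
    buf = ctypes.create_string_buffer(CHUNK)
    local = IoVec(ctypes.cast(buf, ctypes.c_void_p), CHUNK)
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            addr, perms = line.split()[:2]
            if not perms.startswith("rw") or "p" not in perms:
                continue                      # skip read-only and shared mappings
            start, end = (int(x, 16) for x in addr.split("-"))
            while start < end:
                size = min(CHUNK, end - start)
                remote = IoVec(start, size)
                local.iov_len = size
                # Reading remote memory forces the kernel to page it in.
                if libc.process_vm_readv(pid, ctypes.byref(local), 1,
                                         ctypes.byref(remote), 1, 0) < 0:
                    break                     # e.g., a guard page; move on
                start += size
    os.kill(pid, signal.SIGCONT)
```

Pre-purging could similarly be approximated on recent kernels (e.g., via process_madvise with MADV_PAGEOUT), though the paper's in-kernel approach also benefits from writing the pages out contiguously.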
Performance • Shows the performance of all application placement mechanisms: • Regular Startup • Run-Everywhere • Regular and Enhanced Suspend/Resume
Testbed • Hosting server (×1): • 2.8 GHz Intel Pentium 4 CPU • 512 MB of memory • Linux kernel 2.6.21 • Two disks: • one hosting a large swap area (13 GB) • and the other holding the application files. • Application server (execution unit): • JBoss
Testbed(con’t) • Three sub-applications: • TPC-W with 1000 items----Represent real applications • The transactional Web e-Commerce benchmark TPC-W • A synthetic application that imposes a tunable CPU load • Defines a single request type that contains a loop consuming a number of CPU cycles. • A synthetic application with a tunable memory footprint • One type initializes a large static array of characters on the first request and then touches all those characters on subsequent requests. • The other is a null request. • Request Generator: • wget(http://www.gnu.org/software/wget/) • Note: • All experiments in this section are repeated ten times
Experiment: Regular Startup (1) • Objective: observe the startup delay as CPU load increases. • Steps: • Start a copy of JBoss containing all the test applications. • Apply a given request rate to the CPU-bound application to generate the desired CPU load. • Start a second instance of JBoss and observe its startup delay. • Startup time = (time JBoss reports ready) − (time the local controller receives the Start() message). • Conclusion: • Startup delay is high when CPU load is high. • Imagine assigning a memory-intensive application to such a loaded machine.
Experiment: Regular Startup (2) • Objective: observe resource consumption when a new instance starts. • Steps: • Start an AppServ on an idle machine. • Observe CPU usage and I/O wait. • About I/O wait: • Defined as the percentage of time the CPU was idle while the system had an outstanding disk I/O request. • An indicator of disk usage (a measurement sketch follows below). • Conclusion: • Regular startup can consume substantial resources.
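The I/O-wait definition above corresponds to the iowait counter Linux exposes in /proc/stat (the same counter tools such as vmstat report). A small sketch of how that percentage could be sampled over an interval:

```python
import time

def read_cpu_times():
    """Return the aggregate CPU time counters from /proc/stat (in jiffies)."""
    with open("/proc/stat") as f:
        fields = f.readline().split()   # ['cpu', user, nice, system, idle, iowait, ...]
    return [int(x) for x in fields[1:]]

def iowait_percent(interval=1.0):
    """Percentage of time the CPU was idle with an outstanding disk I/O request."""
    before = read_cpu_times()
    time.sleep(interval)
    after = read_cpu_times()
    delta = [a - b for a, b in zip(after, before)]
    total = sum(delta)
    return 100.0 * delta[4] / total if total else 0.0   # index 4 = iowait

# Example: print(f"I/O wait: {iowait_percent():.1f}%")
```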
Experiment: Run-Everywhere • Objective: • Observe the resource consumption of idle applications. • Steps: • Start a JBoss instance and wait until it reports startup completion. • Monitor the overall CPU and disk usage on the server for the next 100 seconds before starting another JBoss instance. • Conclusion: • AppServs never become truly idle from the OS's perspective.
Experiment: Regular and Enhanced Suspend/Resume • Objective: • Observe how the amount of free memory affects the resume delay under the two approaches. • Steps (for both approaches): • Resume delay is measured: • From: the resume message is sent out. • To: a small application request completes, indicating that the application instance is active (see the sketch below).
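A minimal sketch of how this resume-delay measurement might be scripted, assuming the AppServ is resumed with SIGCONT (as in the regular suspend/resume mechanism) and that a small probe request exists for the application; both are illustrative details, not the paper's exact harness.

```python
import os
import signal
import time
import urllib.request

PROBE_URL = "http://localhost:8080/null-request"   # hypothetical small request

def measure_resume_delay(pid):
    """Time from sending the resume signal to completion of a small probe request."""
    t0 = time.time()
    os.kill(pid, signal.SIGCONT)                    # the "resume message"
    while True:
        try:
            urllib.request.urlopen(PROBE_URL, timeout=1).read()
            return time.time() - t0                 # AppServ is serving again
        except OSError:
            time.sleep(0.05)                        # probe again shortly
```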
Experiment: Regular and Enhanced Suspend/Resume (1) • Regular suspend/resume: • The non-enhanced resume process requires CPU cycles for page-fault processing, CPU scheduling, and freeing memory. • Enhanced suspend/resume: • Pre-purging avoids having to free memory at resume time. • Pre-fetching avoids page-fault processing.
Experiment: Regular and Enhanced Suspend/Resume (2) • Both suspend/resume and enhanced suspend/resume are disk intensive: • They need to bring a large number of pages into main memory. • Why does the enhanced version take less time? • It stores application pages together on disk so that they are brought back faster.
Prefetching overhead • Recall the synthetic application with a tunable memory footprint: • One request type initializes a large static array of characters on the first request and then touches all those characters on subsequent requests. • This favors the enhanced mechanism (the prefetched pages are actually used). • The other request type is a null request. • This favors regular suspend/resume (prefetching the whole footprint is wasted work).
Contributing Factors • Goal: • Experimentally characterize the contribution of each element (pre-fetching and pre-purging) to the overall performance enhancement.
Hosting Platform Agility • Goal: • Compare how quickly each mechanism relieves a hotspot. • Testbed: • Two hosting servers: • Base server: hosts the application initially. • Support server: activates an application instance when the base server becomes overloaded. • Switch: • Nortel Alteon 2208 Application Switch (a Layer-4 switch).
Related work • Different approaches target different resource-sharing environments: • Each application runs in a dedicated virtual machine [2][10]. • But previous placement work mostly focuses on deciding the number and location of the replicas. • On the agility of the hosting platform: • Previous work addressing this environment [25, 13] recognized the costs of changing application placement but focused on minimizing the placement changes. [2] A. Awadallah and M. Rosenblum. The vMatrix: A network of virtual machine monitors for dynamic content distribution. In 7th WCW, 2002. [10] X. Jiang and D. Xu. SODA: A service-on-demand architecture for application service hosting utility platforms. In 12th HPDC, 2003.
Future work • Prefetching policies: • As discussed above, the current mechanism: • Simply prefetches the entire AppServ process into memory. • A possible improvement: • Snapshot prefetching • Prefetches only the part of the application that was actually accessed in memory the previous time the application was active (see the sketch below).
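As an illustration of what snapshot prefetching could build on, the sketch below records which pages of a process are currently resident in RAM using Linux's /proc/&lt;pid&gt;/pagemap interface; taken at suspend time, such a snapshot could later drive a selective prefetch. This is a hedged, user-space illustration only, not something described in the paper, and the mapping filter is an assumption.

```python
import os
import struct

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def resident_pages(pid):
    """Return the virtual addresses of pages currently resident in RAM."""
    snapshot = []
    with open(f"/proc/{pid}/maps") as maps, \
         open(f"/proc/{pid}/pagemap", "rb") as pagemap:
        for line in maps:
            addr, perms = line.split()[:2]
            if not perms.startswith("rw"):
                continue                             # only consider writable data
            start, end = (int(x, 16) for x in addr.split("-"))
            for vaddr in range(start, end, PAGE_SIZE):
                pagemap.seek((vaddr // PAGE_SIZE) * 8)
                entry, = struct.unpack("<Q", pagemap.read(8))
                if entry >> 63 & 1:                  # bit 63: page present in RAM
                    snapshot.append(vaddr)
    return snapshot
```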
Conclusions • This paper addresses the efficiency of application placement in a shared Web hosting platform. • The authors improve the platform's agility when facing a hotspot. • Their experiments show: • Regular startup is time- and resource-consuming. • Run-everywhere is not practical. • Their placement mechanism, enhanced suspend/resume, significantly reduces placement time and overhead.
Comments • Advantage: • The paper has many good experiments we can learn from. • Disadvantage: • It requires modifying the OS kernel to implement. • Food for thought: • What if we brought this idea to cloud computing systems?