Achieving Application Performance on the Information Power Grid • Francine Berman • U. C. San Diego and NPACI
IPG = “Distributed Computer” • comprising • clusters of workstations • MPPs • remote instruments • visualization sites • data archives • for users, performance is the key criterion in evaluating the platform
Program Performance • Current grid programs achieve performance by • dedicating resources • careful staging of computation and data • considerable coordination • It must be possible for ordinary users on ordinary days to achieve program performance on the IPG ...
Achieving Performance • On ordinary days, many users share system resources • load and availability of resources vary • application behavior is hard to predict • poor predictions make scheduling hard • Challenge: Develop application schedules which can leverage the deliverable performance of the system at execution time.
Whose Job Is It? • Application scheduling can be performed by many entities • Resource scheduler • Job Scheduler • Programmer or User • System Administrator • Application Scheduler
Scheduling and Performance • Goal of scheduling application is to promote application performance • Achieving application performance can conflict with achieving performance for other system components • Resource Scheduler -- perf measure is utilization • Job Scheduler -- perf measure is throughput • System Administrator -- focuses on system perf • Programmer or User -- may miss most current info • Application Scheduler -- can access most current info
Self-Centered Scheduling • Everything in the system is evaluated in terms of its impact on the application. • performance of each system component can be considered as a measurable quantity • forecasts of quantities relevant to the application can be manipulated to determine schedule • This simple paradigm forms the basis for AppLeS.
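A minimal sketch of this self-centered paradigm, assuming a toy cost model (the speeds, availabilities, and latencies below are invented numbers): every candidate resource set is scored only by its forecast impact on this application's completion time.

```python
# Hypothetical sketch of "self-centered" scheduling: each candidate
# resource set is evaluated purely by its predicted impact on *this*
# application. All names and numbers are illustrative.

def predicted_time(resources, work_units):
    """Forecast application completion time on one candidate resource set."""
    # Each resource contributes forecast compute speed (units/sec),
    # discounted by its forecast availability (fraction of CPU we get).
    rate = sum(r["speed"] * r["availability"] for r in resources)
    startup = max(r["latency"] for r in resources)  # staging cost
    return startup + work_units / rate

def best_schedule(candidate_sets, work_units):
    """Pick the candidate resource set minimizing forecast time."""
    return min(candidate_sets, key=lambda rs: predicted_time(rs, work_units))

if __name__ == "__main__":
    workstations = [{"speed": 50, "availability": 0.4, "latency": 0.1},
                    {"speed": 60, "availability": 0.5, "latency": 0.1}]
    mpp = [{"speed": 400, "availability": 0.1, "latency": 2.0}]
    print("chosen set:", best_schedule([workstations, mpp], work_units=1000))
```

Note that the heavily shared MPP can lose to a pair of lightly loaded workstations: the schedule depends on deliverable, not peak, performance.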
AppLeS • Joint project with Rich Wolski • AppLeS = Application-Level Scheduler • Each application has its own self-centered AppLeS • Schedule achieved through • selection of potentially efficient resource sets • performance estimation of dynamic system parameters and application performance for the execution time frame • adaptation to perceived dynamic conditions
AppLeS Architecture • AppLeS incorporates • application-specific information • dynamic information • prediction • Schedule developed to optimize user's performance measure • minimal execution time • turnaround time = staging/waiting time + execution time • other measures: precision, resolution, speedup, etc. [Architecture diagram: User Prefs, App Perf Model, and NWS (Wolski) feed a Resource Selector, Planner, and Actuator acting on IPG resources/infrastructure]
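The select/plan/actuate cycle in the diagram can be illustrated with a Python skeleton; the class names mirror the slide, but every interface here is a hypothetical stand-in, not the actual AppLeS API.

```python
# Illustrative skeleton of the AppLeS cycle: select candidate resources,
# plan against the application performance model and the user's measure,
# then actuate. All interfaces below are invented for the sketch.

class ResourceSelector:
    def candidates(self, all_resources):
        # Keep only potentially efficient resource sets (here: the top
        # half by forecast bandwidth -- a stand-in filtering heuristic).
        ranked = sorted(all_resources, key=lambda r: -r["bandwidth"])
        return ranked[: max(1, len(ranked) // 2)]

class Planner:
    def plan(self, resources, perf_model, user_measure):
        # Evaluate each candidate with the application performance model
        # against the user's chosen performance measure.
        return min(resources, key=lambda r: perf_model(r)[user_measure])

class Actuator:
    def run(self, resource):
        print(f"launching on {resource['name']}")

def perf_model(resource):
    # Toy model: forecast execution and turnaround time from bandwidth.
    exec_time = 100.0 / resource["bandwidth"]
    return {"execution_time": exec_time, "turnaround": exec_time + 5.0}

resources = [{"name": "mead2", "bandwidth": 4.0},
             {"name": "spin", "bandwidth": 9.0},
             {"name": "lolland", "bandwidth": 2.0}]
chosen = Planner().plan(ResourceSelector().candidates(resources),
                        perf_model, "execution_time")
Actuator().run(chosen)
```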
Network Weather Service (Wolski) • The NWS provides dynamic resource information for AppLeS • NWS • monitors current system state • provides best forecast of resource load from multiple models [Diagram: Sensor Interface and Reporting Interface around a Forecaster backed by multiple models]
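A toy version of that multi-model forecasting idea, assuming three simple predictors and cumulative absolute error as the accuracy measure (the real NWS uses a richer set of models and error tracking):

```python
# Minimal sketch of the NWS approach: run several predictive models over
# the measurement history and report the forecast of whichever model has
# been most accurate so far. Models and error metric are simplified.

class Forecaster:
    def __init__(self):
        self.models = {
            "last_value": lambda h: h[-1],
            "mean": lambda h: sum(h) / len(h),
            "median": lambda h: sorted(h)[len(h) // 2],
        }
        self.errors = {name: 0.0 for name in self.models}

    def update(self, history, measurement):
        # Track each model's cumulative absolute forecast error.
        for name, model in self.models.items():
            self.errors[name] += abs(model(history) - measurement)

    def forecast(self, history):
        # Report the prediction of the historically best model.
        best = min(self.errors, key=self.errors.get)
        return best, self.models[best](history)

f = Forecaster()
loads = [0.2, 0.3, 0.8, 0.4, 0.5, 0.45]   # invented load measurements
for i in range(1, len(loads)):
    f.update(loads[:i], loads[i])
print(f.forecast(loads))
```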
SARA: An AppLeS-in-Progress • SARA = Synthetic Aperture Radar Atlas • application developed at JPL and SDSC • Goal: Assemble/process files for user’s desired image • thumbnail image shown to user • user selects desired bounding box within image for more detailed viewing • SARA provides detailed image in variety of formats
Focusing in with SARA [Figure: thumbnail image with user-selected bounding box]
Simple SARA • Focuses on obtaining remote data quickly • Code developed by Alan Su [Diagram: a compute server and several data servers on a network shared by a variable number of users; computation servers and data servers are logical entities, not necessarily different nodes; computation is assumed to be done at the compute servers]
Simple SARA AppLeS • Focus on resource selection problem: Which site can deliver data the fastest? • Data for image accessed over shared networks • Data sets 1.4 - 3 megabytes, representative of SARA file sizes • Servers used for experiments (reached via the vBNS or the general Internet) • lolland.cc.gatech.edu • sitar.cs.uiuc • perigee.chpc.utah.edu • mead2.uwashington.edu • spin.cacr.caltech.edu
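The selection question can be made concrete with a small sketch: given latency and bandwidth forecasts for each candidate server, rank servers by predicted transfer time. Only the hostnames come from the slide; the forecast numbers below are invented.

```python
# Sketch of Simple SARA resource selection: pick the server expected to
# deliver the image file fastest, given NWS-style forecasts. The latency
# and bandwidth figures are made-up placeholders.

def transfer_time(file_mb, latency_s, bandwidth_mbps):
    """Forecast seconds to fetch file_mb megabytes from one server."""
    return latency_s + (file_mb * 8.0) / bandwidth_mbps

def fastest_server(file_mb, forecasts):
    return min(forecasts,
               key=lambda s: transfer_time(file_mb, s["lat"], s["bw"]))

forecasts = [
    {"host": "lolland.cc.gatech.edu", "lat": 0.05, "bw": 6.0},
    {"host": "perigee.chpc.utah.edu", "lat": 0.08, "bw": 9.0},
    {"host": "spin.cacr.caltech.edu", "lat": 0.03, "bw": 4.0},
]
print(fastest_server(3.0, forecasts)["host"])  # 3 MB, a typical SARA file
```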
Which is “Closer”? • Sites on the east coast or sites on the west coast? • Sites on the vBNS or sites on the general Internet? • Consistently the same site or different sites at different times? Depends a lot on traffic ...
Preliminary Results • Experiment with larger data set (3 Mbytes) • During this time frame, the general Internet sites mostly delivered data faster than the vBNS sites
9/21/98 Experiments • Clinton Grand Jury webcast commenced at iteration 62
More Preliminary Results • Experiment with smaller data set (1.4 Mbytes) • During this time frame, east coast sites mostly delivered data faster than west coast sites
Distributed Data Applications • SARA representative of larger class of distributed data applications • Simple SARA template being extended to accommodate • replicated data sources • multiple files per image • parallel data acquisition • intermediate compute sites • web interface, etc.
SARA AppLeS -- Phase 2 • Move the computation or move the data? • Which servers should the client use? • How long will data access take when data is needed? [Diagram: a client connected to multiple compute servers and data servers; data servers may access the same storage media; computation and data servers may “live” at the same nodes; client and servers are “logical” nodes]
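One way to frame the move-the-computation-or-the-data decision is to compare forecast costs under a simple additive model. This is a sketch only; the cost model, speeds, and bandwidths are all invented assumptions, not the Phase 2 design.

```python
# Hedged sketch of the Phase 2 question: ship the data to a fast remote
# compute server, or compute where the data already lives? All numbers
# and the cost model itself are illustrative.

def remote_compute_cost(work, data_mb, bw_mbps, remote_speed):
    """Transfer the data to a remote compute server, then compute there."""
    return (data_mb * 8.0) / bw_mbps + work / remote_speed

def local_compute_cost(work, local_speed):
    """Compute at the data's node: slower CPU, but no transfer."""
    return work / local_speed

work = 500.0      # abstract work units in the image-processing step
data_mb = 3.0     # representative SARA file size
remote = remote_compute_cost(work, data_mb, bw_mbps=5.0, remote_speed=100.0)
local = local_compute_cost(work, local_speed=20.0)
print("move the data" if remote < local else "move the computation")
```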
A Bushel of AppLeS … almost • During the first “phase” of the project, we’ve focused on getting experience building AppLeS • Jacobi2D, DOT, SRB, Simple SARA, Genetic Algorithm, Tomography, ... • Using this experience, we are beginning to build AppLeS “templates”/tools for • master/slave applications • parameter sweep applications • distributed data applications • proudly parallel applications, etc. • What have we learned ...
Lessons Learned from AppLeS • Dynamic information is critical • Program execution and parameters may exhibit a range of performance • Knowing something about performance predictions can improve scheduling • Performance of scheduling policy is sensitive to application, data, and system characteristics
A First IPG AppLeS • Focus on class of parameter sweep applications • Building AppLeS template for INS2D that can be used with other applications from the class • AppLeS INS2D scheduler • first phase focuses on interactive clusters • second phase will target clusters and batch-scheduled platforms • goal is to minimize turnaround time
Parameter Sweep AppLeS Architecture • Being developed by Dmitrii Zagorodnov • AppLeS schedules work on interactive resources • AppLeS tuned to leverage underlying resource management system [Architecture diagram: AppLeS with an app-specific case generator, scheduler (Sched.), API, actuators (Act), and experiments (Exp) running over the resources]
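A self-scheduling work queue is one plausible reading of how such a template could farm parameter cases onto interactive resources: idle hosts repeatedly pull the next case, so faster or less-loaded hosts naturally take more work and turnaround time shrinks. The queue-based dispatch and the parameter names below are assumptions, not the INS2D design.

```python
# Illustrative self-scheduling loop for a parameter-sweep AppLeS on
# interactive resources. Hosts, speeds, and cases are invented.

import queue
import threading
import time

cases = queue.Queue()
for angle in range(8):                 # hypothetical INS2D-style cases
    cases.put({"angle_of_attack": angle})

def worker(host, speed):
    while True:
        try:
            case = cases.get_nowait()  # pull the next unscheduled case
        except queue.Empty:
            return                     # no work left; host goes idle
        time.sleep(0.1 / speed)        # stand-in for running one solver case
        print(f"{host} finished case {case}")

threads = [threading.Thread(target=worker, args=(h, s))
           for h, s in [("hostA", 1.0), ("hostB", 2.0)]]
for t in threads:
    t.start()
for t in threads:
    t.join()
```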
INS2D AppLeS Project Goals • Complete design and deployment of INS2D AppLeS for interactive cluster • focus on socket design for first phase • Conduct experiments to assess AppLeS performance on interactive cluster and to compare with batch system performance • Expand INS2D AppLeS to target both batch and interactive systems • target the evolving IPG resource management system
AppLeS and the IPG [Roadmap diagram spanning short-term, medium-term, and long-term goals for usability, integration, and performance: development of basic IPG infrastructure; integration of schedulers and other tools, performance interfaces; integration of multiple grid constituencies; architectural models which support multiple constituencies; automation of program execution; “grid-aware” programming; and a scheduling progression from application scheduling through resource scheduling, throughput scheduling, and multi-scheduling to a resource economy. A “You are here” marker indicates current progress.]
Project Information • Thanks to NSF, NPACI, DARPA, DoD, NASA • AppLeS Corps: Francine Berman, Rich Wolski, Walfredo Cirne, Marcio Faerman, Jaime Frey, Jim Hayes, Graziano Obertelli, Jenny Schopf, Gary Shao, Neil Spring, Shava Smallen, Alan Su, Dmitrii Zagorodnov • AppLeS Home Page: http://www-cse.ucsd.edu/groups/hpcl/apples.html