The AppLeS Project: Harvesting the Grid
Francine Berman, U. C. San Diego
Computing Today
• MPPs, clusters, PCs, workstations, wireless
• Data archives, networks
• Visualization, instruments
The Computational Grid
• The Computational Grid
• ensemble of heterogeneous, distributed resources
• emerging platform for high-performance computing
• The Grid is the ultimate “supercomputer”
• Grids aggregate an enormous number of resources
• Grids support applications that cannot be executed at any single site
Moreover,
• Grids provide increased access to each kind of resource (MPPs, clusters, etc.)
• The Grid can be accessed at any time
Computational Grids
• Computational Grids you know and love:
• your computer lab
• the Internet
• the PACI partnership resources
• your home computer, the CS machines, and all the other resources you have access to
• You are already using the Computational Grid …
Programming the Grid I
• Basics
• Need a way to log in, authenticate in different domains, transfer files, coordinate execution, etc.
[Diagram: application development environment (Globus, Legion, Condor, NetSolve, PVM) layered over the Grid infrastructure and resources]
Programming the Grid II
• Performance-oriented programming
• Need a way to develop and execute performance-efficient programs
• Programs must achieve performance in the context of
• heterogeneity
• dynamic behavior
• multiple administrative domains
This can be extremely challenging.
• What approaches are most effective for achieving Grid application execution performance?
Performance-oriented Execution
• Careful scheduling is required to achieve application performance in parallel and distributed environments
• For the Grid, adaptivity is the right approach to leverage deliverable resource capacities
[Figure: NWS prediction of file transfer time from OGI, UTK, and UCSD to SC’99 in Portland — Alan Su]
Adaptive Application Scheduling • Fundamental components: • Application-centric performance model • Provides quantifiable measure of Grid resources in terms of their potential impact on the application • Prediction of deliverable resource performance at execution time • User’s preferences and performance criteria These components form the basis for AppLeS.
What is AppLeS ? • AppLeS = Application Level Scheduler • Joint project with Rich Wolski (U. of Tenn.) • AppLeS is a methodology • Project has investigated adaptive application scheduling using dynamic information, application-specific performance models, user preferences. • AppLeS approach based on real-world scheduling. • AppLeS is software • Have developed multiple AppLeS-enabled applications which demonstrate the importance and usefulness of adaptive scheduling on the Grid.
How Does AppLeS Work?
• Resource selection: select resources
• Schedule planning: for each feasible resource set, plan a schedule; for each schedule, predict application performance at execution time, considering both the prediction and its qualitative attributes
• Deployment: deploy the “best” of the schedules with respect to the user’s performance criteria (execution time, convergence, turnaround time)
[Diagram: AppLeS + application running over the Grid infrastructure, the NWS, and the resources]
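The select / plan / predict / deploy loop above can be sketched in a few lines. This is a hypothetical illustration, not the AppLeS software: the performance model (an even split of work, with runtime set by the slowest host) and the host names and forecast numbers are invented for the example.

```python
from itertools import combinations

def predicted_time(hosts, total_work, forecast_load):
    """Even split of work; runtime is set by the slowest host."""
    share = total_work / len(hosts)
    return max(share / forecast_load[h] for h in hosts)

def apples_schedule(hosts, total_work, forecast_load):
    """Enumerate feasible resource sets, predict each, keep the best."""
    best_set, best_time = None, float("inf")
    for k in range(1, len(hosts) + 1):
        for subset in combinations(hosts, k):
            t = predicted_time(subset, total_work, forecast_load)
            if t < best_time:
                best_set, best_time = subset, t
    return best_set, best_time

# NWS-style forecasts: fraction of each host's CPU predicted to be available.
load = {"sparc": 0.3, "rs6000": 0.9, "alpha": 0.6}
print(apples_schedule(list(load), 90.0, load))
```

Even in this toy model, resource selection matters: adding the heavily loaded host to the set makes the even-split schedule slower, so the best predicted schedule uses only the two lightly loaded machines.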
Network Weather Service (Wolski, U. Tenn.)
• The NWS provides dynamic resource information for AppLeS
• The NWS is a stand-alone system
• The NWS
• monitors the current system state
• provides the best forecast of resource load from multiple models
[Diagram: a sensor interface feeding multiple forecasting models, combined by a forecaster behind a reporting interface]
AppLeS Example: Jacobi2D
• Jacobi2D is a regular, iterative SPMD application
• Local communication using a 5-point stencil
• AppLeS written in KeLP/MPI by Wolski
• Partitioning approach = strip decomposition
• Goal is to minimize execution time
Jacobi2D AppLeS Resource Selector
• Feasible resources determined according to application-specific “desirability” metrics
• memory
• application-specific benchmark
• Desirability metric used to sort resources
• Feasible resource sets formed from initial subsets of the sorted desirability list
• Next step is to plan a schedule for each feasible resource set
• Scheduler will choose the schedule with the best predicted execution time
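The sort-then-take-prefixes selection step might look like the sketch below. The desirability metric here (memory divided by an application benchmark time) is an invented stand-in for whatever application-specific metric is used; the host data is fabricated for the example.

```python
def desirability(host):
    # Assumed metric: more memory and a faster benchmark are both better.
    return host["mem_mb"] / host["bench_sec"]

def feasible_sets(hosts):
    """Sort hosts by desirability; feasible sets are prefixes of that list."""
    ranked = sorted(hosts, key=desirability, reverse=True)
    return [ranked[:k] for k in range(1, len(ranked) + 1)]

hosts = [
    {"name": "sparc",  "mem_mb": 256, "bench_sec": 4.0},
    {"name": "alpha",  "mem_mb": 512, "bench_sec": 2.0},
    {"name": "rs6000", "mem_mb": 512, "bench_sec": 1.0},
]
for s in feasible_sets(hosts):
    print([h["name"] for h in s])
```

Each prefix ({best}, {best, second-best}, …) then goes to the schedule planner, which picks the set with the best predicted execution time.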
Jacobi2D Performance Model and Schedule Planning
• AppLeS uses time-balancing to determine the best partition on a given set of resources
• The execution time for the ith strip is modeled in terms of
• load = predicted percentage of CPU time available (NWS)
• comm = time to send and receive messages, factored by predicted bandwidth (NWS)
• Setting the predicted strip times equal, solve for the strip sizes P1, P2, P3
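One way to carry out the time-balancing step is sketched below. The specific model, T_i = (rows_i × row_cost) / load_i + comm_i, and its closed-form solution are assumptions chosen for illustration, not the published AppLeS equations.

```python
def balance_strips(total_rows, row_cost, loads, comms):
    """Choose strip heights so every strip's predicted time is equal.

    Model (assumed): T_i = rows_i * row_cost / load_i + comm_i.
    Setting T_i = T for all i and requiring sum(rows_i) = total_rows gives
      rows_i = load_i * (T - comm_i) / row_cost, with
      T = (row_cost * total_rows + sum(load_i * comm_i)) / sum(load_i).
    """
    s = sum(loads)
    T = (row_cost * total_rows + sum(l * c for l, c in zip(loads, comms))) / s
    return [l * (T - c) / row_cost for l, c in zip(loads, comms)]

# Invented numbers: 1000 grid rows, 0.01 s per row at full speed,
# NWS-style CPU availability forecasts, no communication cost.
rows = balance_strips(1000, 0.01, loads=[0.9, 0.6, 0.3], comms=[0.0, 0.0, 0.0])
print(rows)
```

With zero communication cost the solution reduces to strips proportional to predicted available CPU, so the fastest host gets the widest strip.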
Jacobi2D Experiments • Experiments compare • Compile-time block [HPF] partitioning • Compile-time irregular strip partitioning [no NWS, no resource selection] • Run-time strip AppLeS partitioning • Runs for different partitioning methods performed back-to-back on production systems • Average execution time recorded • Distributed UCSD/SDSC platform: Sparcs, RS6000, Alpha Farm, SP-2
Jacobi2D AppLeS Experiments
• Representative Jacobi2D AppLeS experiment
• Adaptive scheduling leverages the deliverable performance of a contended system
• Spike occurs when a gateway between PCL and SDSC goes down
• Subsequent AppLeS experiments avoid the slow link
Work-in-Progress: AppLeS Templates
• AppLeS-enabled applications perform well in multi-user environments.
• Methodology is right on target, but …
• AppLeS must be integrated with its application, which is labor-intensive and time-intensive
• You generally can’t just take an AppLeS and plug in a new application
Templates
• Current thrust is to develop AppLeS templates which
• target structurally similar classes of applications
• can be instantiated in a user-friendly timeframe
• provide good application performance
[Diagram: a template composed of application, performance, scheduling, and deployment modules, connected through APIs to the application and the Network Weather Service]
Case Study: Parameter Sweep Template
• Application model: parameter sweeps = a class of applications structured as multiple instances of an “experiment” with distinct parameter sets
• Independent experiments may share input files
• Examples: MCell, INS2D
Example Parameter Sweep Application: MCell
• MCell = general simulator for cellular microphysiology
• Uses a Monte Carlo diffusion and chemical reaction algorithm in 3D to simulate complex biochemical interactions of molecules
• Molecular environment represented as a 3D space in which trajectories of ligands against cell membranes are tracked
• Researchers plan huge runs which will make it possible to model entire cells at the molecular level
• 100,000s of tasks
• 10s of GBytes of output data
• Would like to perform execution-time computational steering, data analysis, and visualization
PST AppLeS
• Template being developed by Henri Casanova and Graziano Obertelli
• Resource selection:
• For small parameter sweeps, can dynamically select a performance-efficient number of target processors [Gary Shao]
• For large parameter sweeps, can assume that all resources may be used
[Platform model: the user’s host and storage connected over network links to clusters, MPPs, and remote storage]
Scheduling Parameter Sweeps
• Contingency scheduling: allocation developed by dynamically generating a Gantt chart for scheduling unassigned tasks
• Basic skeleton:
• Compute the next scheduling event
• Create a Gantt chart G
• For each computation and file transfer currently underway, compute an estimate of its completion time and fill in the corresponding slots in G
• Select a subset T of the tasks that have not started execution
• Until each host has been assigned enough work, heuristically assign tasks to hosts, filling in slots in G
• Implement the schedule
[Gantt chart: rows for network links and the hosts in each cluster; columns span the time between scheduling events]
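The skeleton above can be compressed into a toy sketch: estimate when each host frees up from work already underway, then greedily place not-yet-started tasks into the earliest open slots of the chart. This is an assumed simplification; the real template also models file transfers on network links and uses the heuristics discussed next rather than first-come placement.

```python
import heapq

def schedule_event(in_progress, pending, hosts):
    """Build a Gantt chart at one scheduling event.

    in_progress: {host: estimated remaining time of its current work}
    pending:     list of task durations not yet started
    Returns rows (host, start, end, task_index).
    """
    # Seed the chart with completion estimates for work already underway.
    free_at = [(in_progress.get(h, 0.0), h) for h in hosts]
    heapq.heapify(free_at)
    gantt = []
    for i, dur in enumerate(pending):
        start, host = heapq.heappop(free_at)   # earliest-free host
        gantt.append((host, start, start + dur, i))
        heapq.heappush(free_at, (start + dur, host))
    return gantt

chart = schedule_event({"h1": 2.0}, [3.0, 1.0, 2.0], ["h1", "h2"])
for row in chart:
    print(row)
```

Note how the busy host (`h1`) only receives new work in the slots after its current task's estimated completion, which is exactly what filling in G for work "currently underway" buys you.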
Parameter Sweep Heuristics
• Currently studying scheduling heuristics useful for parameter sweeps in Grid environments
• HCW 2000 paper compares several heuristics
• Min-Min [task/resource pair that can complete the earliest is assigned first]
• Max-Min [longest of the task/earliest-resource times assigned first]
• Sufferage [task that would “suffer” most if given a poor schedule assigned first, as computed by max minus second-max completion times]
• Extended Sufferage [minimal completion times computed for each task on each cluster; the sufferage heuristic applied to these]
• Workqueue [randomly chosen task assigned first]
• Criteria for evaluation:
• How sensitive are heuristics to the location of shared input files and the cost of data transmission?
• How sensitive are heuristics to inaccurate performance information?
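Three of these heuristics can be shown side by side on a matrix of execution times. This is the textbook form of Min-Min / Max-Min / Sufferage under an assumed input (per-task execution time on each host, with host availability updated after every assignment), not the exact PST implementation; the task and host data are invented.

```python
def assign(exec_time, heuristic):
    """exec_time[task][host] = execution time of that task on that host.
    Returns the assignment order as (task, host) pairs."""
    avail = {h: 0.0 for t in exec_time for h in exec_time[t]}
    unscheduled = set(exec_time)
    order = []
    while unscheduled:
        scored = []
        for t in unscheduled:
            # Completion time of t on each host, given current availability.
            comp = {h: avail[h] + exec_time[t][h] for h in exec_time[t]}
            best = min(comp, key=comp.get)
            times = sorted(comp.values())
            if heuristic == "sufferage":
                # Sufferage: gap between best and second-best completion time.
                key = (times[1] - times[0]) if len(times) > 1 else times[0]
            else:
                key = times[0]
            scored.append((key, t, best, comp[best]))
        pickfn = min if heuristic == "min-min" else max
        _, t, h, done = pickfn(scored)   # max-min and sufferage take the max
        avail[h] = done
        unscheduled.remove(t)
        order.append((t, h))
    return order

et = {"A": {"h1": 2, "h2": 9}, "B": {"h1": 4, "h2": 5}}
print(assign(et, "min-min"))
print(assign(et, "max-min"))
print(assign(et, "sufferage"))
```

In this tiny instance, Min-Min and Sufferage both place task A first (it completes earliest and would also suffer most on its bad host), while Max-Min places the long task B first.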
Preliminary PST/MCell Results
• Comparison of the performance of scheduling heuristics (workqueue, sufferage, max-min, min-min, XSufferage) when it is up to 40 times more expensive to send a shared file across the network than it is to compute a task
• The “extended sufferage” scheduling heuristic takes advantage of file sharing to achieve good application performance
Preliminary PST/MCell Results with “Quality of Information”
AppLeS in Context • Application Scheduling • Mars, Prophet/Gallop, MSHN, etc. • Grid Programming Environments • GrADS, Programmers’ Playground, VDCE, Nile, etc. • Scheduling Services • Globus GRAM, Legion Scheduler/Collection/Enactor • Resource and High-Throughput Schedulers • PBS, LSF, Maui Scheduler, Condor, etc. • PSEs • Nimrod, NEOS, NetSolve, Ninf • Performance Monitoring, Prediction and Steering • Autopilot, SciRun, Network Weather Service, Remos/Remulac, Cumulus • AppLeS project contributes careful and useful study of adaptive scheduling for Grid environments
New Directions • Quality of Information • Adaptive Scheduling • AppLePilot / GrADS • Resource Economies • Bushel of AppLeS • UCSD Active Web • Application Flexibility • Computational Steering • Co-allocation • Target-less computing
Quality of Information
• How can we deal with imperfect or imprecise predictive information?
• Quantitative measures of qualitative performance attributes can improve scheduling and execution
• lifetime
• cost, overhead
• accuracy
• penalty
Using Quality of Information
• Stochastic scheduling: information about the variability of the target resources can be used by the scheduler to determine the allocation
• Resources with more performance variability are assigned slightly less work
• Preliminary experiments show that the resulting schedule performs well and can be more predictable
[SOR experiment — Jenny Schopf]
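The "more variability, slightly less work" idea can be illustrated with a toy allocation rule: discount each host's mean speed by a multiple of its observed variability before computing work shares. The discount rule, the conservatism factor, and the numbers are all invented for this example; they are one simple way to realize the idea, not the scheme used in the SOR experiments.

```python
def stochastic_shares(mean_speed, stddev, conservatism=1.0):
    """Work shares proportional to variability-discounted speed."""
    eff = {h: max(mean_speed[h] - conservatism * stddev[h], 1e-9)
           for h in mean_speed}
    total = sum(eff.values())
    return {h: eff[h] / total for h in eff}

# Two hosts with the same mean speed, but h2's performance varies more.
shares = stochastic_shares({"h1": 10.0, "h2": 10.0}, {"h1": 0.5, "h2": 3.0})
print(shares)
```

The steadier host ends up with the larger share, which tends to make the overall completion time both smaller and more predictable under contention.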
Current “Quality of Information” Projects
• “AppLePilot”
• Combines the AppLeS adaptive scheduling methodology with the “fuzzy logic” decision-making mechanism from Autopilot
• Provides a framework in which to negotiate Grid services and promote application performance
• Collaboration with Reed, Aydt, Wolski
• Builds on the software being developed for GrADS
GrADS: Building an Adaptive Grid Application Development and Execution Environment
• System being developed as a performance economy
• Performance contracts used to exchange information, bind resources, and renegotiate the execution environment
• Joint project with a large team of researchers: Ken Kennedy, Andrew Chien, Rich Wolski, Ian Foster, Carl Kesselman, Jack Dongarra, Dennis Gannon, Dan Reed, Lennart Johnsson, Fran Berman
[Diagram: a PSE and whole-program compiler produce a configurable object program with libraries; a scheduler/service negotiator, dynamic optimizer, and real-time performance monitor manage it over the Grid runtime system (Globus), with performance feedback and negotiation driving reconfiguration when a performance problem is detected]
In Summary ... • Development of AppLeS methodology, applications, templates, and models provides a careful investigation of adaptivity for emerging Grid environments • Goal of current projects is to use real-world strategies to promote performance • dynamic scheduling • qualitative and quantitative modeling • multi-agent environments • resource economies
AppLeS Corps:
• Fran Berman, UCSD
• Rich Wolski, U. Tenn.
• Jeff Brown, Henri Casanova, Walfredo Cirne, Holly Dail, Marcio Faerman, Jim Hayes, Graziano Obertelli, Gary Shao, Otto Sievert, Shava Smallen, Alan Su
• Thanks to NSF, NASA, NPACI, DARPA, DoD
• AppLeS Home Page: http://apples.ucsd.edu