
Exascale Runtime Systems Summit Plan and Outcomes



  1. Exascale Runtime Systems Summit Plan and Outcomes. Sonia R. Sachs, 04/09/2014, AGU, Washington D.C.

  2. Summit Goals
  Summit rule: participants should not promote their current research agenda.
  • Generate a roadmap for achieving a unified runtime systems architecture for Exascale systems
  • Reach consensus on the top six challenges and solutions for them
  • Agree on a comprehensive set of questions that must be answered in order to achieve such an architecture
  • Currently known answers to posed questions
  • Generate a roadmap for a research program on runtime systems
    • Consistent with achieving a unified runtime systems architecture
  • Discuss a future workshop
  • Prepare for writing a report

  3. Plan to Create the Unified Runtime Systems Architecture Roadmap
  We need to leverage current investments in runtime systems: OCR, HPX, ARTS, SEEC, GVR, and the runtimes that support advanced/extended MPI and Global Arrays.
  • Agree on the top six (6) challenges and solutions (1 hour)
    • Strawman set of challenges: slide 3
    • For each challenge: discuss the current state of the art and how the challenge is addressed in the existing runtime systems to be leveraged (1-2 hours)
  • Agree on a set of top questions to be answered (1 hour)
    • Strawman set of questions: slide 4
    • For each question: discuss currently known answers and how existing runtime systems answer it (1-2 hours)
  • Vision (1-2 hours)
    • What are the major components?
    • Programming interfaces and interfaces to the OS
    • How do we measure success?

  4. Strawman Set of Challenges
  • What are the currently known key abstractions? Which key abstractions are currently supported by the runtime systems to be leveraged?
  • For each of these challenges: what is the current state of the art in runtime support? How is this done in the runtime systems to be leveraged?
  • Key abstractions need to be identified
    • and jointly supported by the runtime system, compilers, and hardware architecture
  • Runtime support for lightweight tasks and their coordination
    • capable of dealing with system heterogeneity and with end-to-end asynchrony
  • Runtime support for locality-aware, dynamic task scheduling (see the sketch after this list)
    • enabling continuous optimization of code and data movement
  • Need for task coordination and synchronization primitives
  • Runtime support for dynamic load balancing
    • to deal with load imbalances created by a large number of sources of non-uniform execution rates
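As a purely illustrative aside (not part of the summit material, and not the API of OCR, HPX, or any other runtime named here), the sketch below shows one way the "lightweight tasks with locality-aware scheduling" challenge could be made concrete in C++. All names (Task, Scheduler, locality_hint) are hypothetical.

```cpp
#include <cstdio>
#include <deque>
#include <functional>
#include <vector>

// Hypothetical lightweight task: user code plus a locality hint
// (e.g. the NUMA domain or node where its data lives).
struct Task {
    std::function<void()> body;
    int locality_hint;  // preferred execution domain
};

// Toy locality-aware scheduler: one ready queue per domain.
// A real exascale runtime would add work stealing, introspection,
// and dynamic rebalancing when a domain's queue grows too long.
class Scheduler {
public:
    explicit Scheduler(int domains) : queues_(domains) {}

    void spawn(Task t) {
        int d = t.locality_hint % static_cast<int>(queues_.size());
        queues_[d].push_back(std::move(t));
    }

    // Drain all queues sequentially; a real runtime runs one or more
    // workers per domain concurrently.
    void run() {
        for (auto& q : queues_)
            while (!q.empty()) {
                q.front().body();
                q.pop_front();
            }
    }

private:
    std::vector<std::deque<Task>> queues_;
};

int main() {
    Scheduler sched(2);
    for (int i = 0; i < 4; ++i)
        sched.spawn({[i] { std::printf("task %d\n", i); }, /*locality_hint=*/i});
    sched.run();
}
```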

  5. Strawman Set of Questions
  • Runtime system software architecture
    • What would the principal components be? What are the semantics of these components? What is the role of the different execution models?
    • What are the mechanisms for managing processes/threads/tasks and data?
    • What policies and/or mechanisms will your runtime use to schedule code and place data?
    • How does the runtime dynamically adapt scheduling and placement so that metrics of code-data affinity, power consumption, migration cost, and resiliency are improved?
    • How does the runtime manage resources (compute, memory, power, bandwidth) to meet a power, energy, and performance objective?
    • How does the runtime scale?
    • What is the role of a global address space or a global name space?
    • What programming models will be supported by the runtime architecture?
    • What OS support should be assumed?
  • Community buy-in
    • How do we achieve community buy-in to an envisioned runtime architecture and semantics?
    • We need a process to continuously evaluate and refine the envisioned runtime architecture and semantics while keeping focus on achieving an Exascale runtime system. What should this process be?

  6. Current Runtime Investments
  • 2012 X-Stack program [2]: application-driven runtime systems support for
    • maximizing concurrency efficiency,
    • dealing with asynchrony of computation and communication,
    • exploiting data locality,
    • minimizing data movement,
    • managing faults,
    • supporting heterogeneous computing elements,
    • semantics for programmability,
    • and novel programming models
  • Runtime systems to be leveraged
    • OCR, HPX, ARTS, SEEC, GVR, and the runtimes that support advanced/extended MPI and Global Arrays

  7. Current Runtime Investments
  • Mapping important questions to projects:
    • https://www.xstackwiki.com/index.php/Runtimes_(application-facing)

  8. Current Runtime Investments
  • 2013 OS/R program [4]: systems-driven mechanisms described in the OS/R report:
    • thread management,
    • low-level communication services,
    • resource management,
    • different runtime service components tightly connected to deal with challenges:
      • resilience,
      • asynchronous computations,
      • and locality of computation
  • Runtime systems approaches to be leveraged in the ARGO, HOBBES, and X-ARCC projects
  • Mapping important questions to projects:
    • Not yet available
    • To be completed after the upcoming OS/R semi-annual review

  9. Summit Outcome: Top Challenges
  • Exploitation of runtime information (introspection), feedback control of performance data, managing performance data
  • Resource allocation
  • Scheduling & workflow orchestration
  • Complexity/optimization/tuning
  • Portability
  • Synchronization: event-driven, mutual exclusion, barriers, phasers
  • Computing everywhere
  • Name space: both data and computation, includes location management
  • Support tools
  • Support for migratable computational units
  • Hardware support, tight coupling
  • Exposing some runtime elements to system managers
  • Growing gap between communication and computation
  • Scalability & starvation: dominant parameters to optimize, critical-path management
  • Locality and data movement: need terminology for inter and intra
  • Power is critical
  • Overhead
  • Resilience: exacerbated by scalability and power problems
  • Load balancing: contention, hot spots
  • Heterogeneity: performance irregularities, static and dynamic, heterogeneity in storage/memory
  • In-situ data analysis and management: a new dimension of interoperability

  10. Summit Outcome: Top Challenge Classes
  • Scalability & starvation: dominant parameters to optimize, critical-path management. Name space: both data and computation, includes location management. Complexity/optimization/tuning.
  • Portability and interoperability. In-situ data analysis and management as a new dimension of interoperability: runtime systems composability.
  • Resource allocation. Scheduling & workflow orchestration. Cross-job (application) scheduling: OS role; focus on scheduling for one job. Support for migratable computational units. Hardware support, tight coupling. Exposing some runtime elements to system managers.
  • Usability. Support tools.
  • Locality and data movement: need terminology for inter and intra, dynamic decisions, handling variability, conflict between optimizing locality, data movement, and the costs of dynamic scheduling (questions of policy). Synchronization: event-driven, mutual exclusion, barriers, phasers. Overhead. Growing gap between communication and computation.
  • Resilience: exacerbated by scalability and power problems.
  • Variability, static and dynamic. Power management. Load balancing: contention, hot spots. Exploitation of runtime information (introspection), feedback control of performance data, managing performance data.
  • Heterogeneity: performance irregularities, static and dynamic, heterogeneity in storage/memory. Computing everywhere.

  11. Summit Outcome: Challenge Problems
  • For each challenge, we want to give examples in the context of challenge problems
  • Vivek suggested one multi-physics challenge problem
  • Homework: identify and describe challenge problems

  12. Summit Outcome: Key Abstractions
  • Unit of computation (see the sketch after this list)
    • attributes: locality, synchronization, resilience, critical path
  • Naming: data, computation, objects that combine both (active objects)
  • Global side-effects: a programming model abstraction?
  • Execution model
  • Machine model, resources: memory, computation, storage, network, ...
  • Locality and affinity, hierarchy
  • Control state: the collection of information distributed across the global system that determines the next state of the machine; a distributed snapshot of the system; a logical abstraction for reasoning about the system
  • Enclave
  • Scheduler: local scheduler of a single execution stream
  • Execution stream: something that has hardware associated with it
  • Communication, data transfer
  • Concurrency patterns, synchronization
  • Resilience, detection, fault model
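To make the "unit of computation" abstraction and its attributes more concrete, here is a small, hypothetical C++ descriptor. The field names are illustrative only and do not correspond to any existing runtime's API.

```cpp
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// Hypothetical descriptor for a unit of computation; fields mirror the
// attributes listed on this slide (naming, locality, synchronization,
// resilience, critical path). Illustrative only.
struct ComputationUnit {
    std::string name;                  // entry in a global name space
    std::function<void()> body;        // the work itself
    int locality_domain = -1;          // locality / affinity attribute
    std::vector<std::string> deps;     // synchronization: named predecessors
    bool reexecute_on_fault = false;   // a simple resilience attribute
    double critical_path_weight = 0.0; // hint for critical-path scheduling
};

int main() {
    ComputationUnit u;
    u.name = "stencil_tile_17";                       // hypothetical name
    u.body = [] { std::printf("running stencil tile\n"); };
    u.locality_domain = 2;                            // prefer domain 2
    u.deps = {"halo_exchange_17"};                    // wait for this unit
    u.reexecute_on_fault = true;
    u.critical_path_weight = 1.5;
    u.body();  // a real runtime would schedule this based on the attributes
}
```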

  13. Summit Outcome: Runtime Services
  • Runtime services
    • Concurrency control: isolation, atomics (does this get into scheduling?)
    • Location and affinity/locality services: map to some of the abstractions mentioned above; provide information and perform binding
    • Error checking/detection and recovery: allows resilience properties to be specified for computation, data, and hardware resources
    • Load balancing, scheduling
    • The OS can make requests of the runtime: give back resources that were granted, or degrade/shut down gracefully
    • Services can make requests to other services, e.g., tools
  • Service attributes (see the sketch after this list)
    • How will the service be provided?
    • Expected resilience
    • Expected resource usage
    • Persistence of memory
    • Locality attributes
  • Runtime services (continued)
    • Schedule and execute threads/tasks/work units, including code generation
    • Resource allocation (acquire resources dynamically as needed, release resources), including networks; heterogeneity
    • Introspection services: information about power, performance, heterogeneity, variability
    • Name space and virtualization: creation, translation, isolation, security, release
    • Communication of data and code, including synchronization (event-oriented); migration services (move work, move data) are not separate from the communication services but are composed with them
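The following is a minimal, hypothetical C++ sketch of how a runtime service might advertise the attributes listed above and accept the OS requests mentioned on this slide (release resources, graceful shutdown). The class and field names are assumptions for illustration, not a summit-endorsed interface.

```cpp
#include <cstdio>
#include <string>

// Hypothetical attributes a runtime service might advertise, following
// the "service attributes" bullets above (illustrative only).
struct ServiceAttributes {
    std::string expected_resilience;  // e.g. "restartable", "stateless"
    double expected_cpu_share = 0.0;  // expected resource usage
    bool persistent_memory = false;   // persistence of memory
    int locality_domain = -1;         // locality attribute
};

// Minimal service interface: the OS can ask a service to release
// resources or to shut down gracefully, per the slide.
class RuntimeService {
public:
    virtual ~RuntimeService() = default;
    virtual ServiceAttributes attributes() const = 0;
    virtual void release_resources(int how_many) = 0;  // OS request
    virtual void graceful_shutdown() = 0;              // OS request
};

// Toy load-balancing service used only to show the shape of the API.
class LoadBalancer : public RuntimeService {
public:
    ServiceAttributes attributes() const override {
        ServiceAttributes a;
        a.expected_resilience = "restartable";
        a.expected_cpu_share = 0.05;
        a.persistent_memory = false;
        a.locality_domain = 0;
        return a;
    }
    void release_resources(int n) override {
        std::printf("load balancer releasing %d workers\n", n);
    }
    void graceful_shutdown() override {
        std::printf("load balancer stopping\n");
    }
};

int main() {
    LoadBalancer lb;
    lb.release_resources(2);
    lb.graceful_shutdown();
}
```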

  14. Summit Outcome: Runtime Services
  • Homework:
    • Refine the key abstractions and their definitions
    • Using the refined key abstractions, define the runtime services
    • Create a matrix mapping which current investments (including the Charm++ runtime) provide these services, with a brief description of how the services are provided
  • Deadlines
    • Initial draft to be distributed to summit participants: April 22
    • Final draft including comments/suggestions from summit participants: April 30
  • Homework team:
    • Wilf, Vivek, Kathy, Vijay

  15. Summit Outcome: Community Buy-In
  • Presentation of solutions by Wilf (slides 15-20)
  • Discussion of the presented ideas
  • More questions than we had time for:
    • We will post Wilf's presentation on the X-Stack wiki
    • Wilf will present these again at the X-Stack PI meeting
    • Summit participants are encouraged to send Wilf and me comments/questions/suggestions
    • We will encourage X-Stack meeting participants to give us comments/questions/suggestions

  16. Community Buy-In
  Ecosystem creation: how do we achieve community buy-in to an envisioned runtime architecture and semantics?
  Process: we need a process to continuously evaluate and refine the envisioned runtime architecture and semantics while keeping focus on achieving an Exascale runtime system. What should this process be?

  17. Ecosystem Creation
  • Establish an open, transparent environment where the solution is not pre-determined
  • Provide an organic process for community decision-making, ensuring that the best solution wins
  • Avoid a single player or clique dominating
  • Lower the barrier to participation by providing stable, reliable releases of candidate solutions to a broad audience

  18. Process
  • Build an independent, open-source foundation that ensures the different projects can be continuously available, evolved, and supported.
  • The different projects will evolve based on the contributions made.
  • As solutions demonstrate their superiority, they will attract more contributions as well as consensus. The community will organically migrate to the superior solution.
  • DOE can continuously view progress and help fund projects to cover any critical shortfalls.

  19. Who Is in the Community?
  • Exascale computing research community (us)
  • High-performance computing user community (current users)
  • Academic community (future users)
  • Application development community (scientists and engineers)
  • Software development community
  • Hardware vendors

  20. Community Services
  • Project team infrastructure, e.g., source code control, tooling, debuggers, collaboration/communication
  • Release engineering
  • Technical support
  • IP management
  • Education, instruction, and training
  • Community development

  21. Build on Experience – Community 2.0
  • Learn from the best: Eclipse, Apache, Mozilla
    • Building the community/ecosystem is the top priority
    • Support multiple projects and give them autonomy
    • Support regular community interaction
    • Long-term commitment to quality through education and process
  • Avoid the pitfalls
    • Commercial control of the purse strings leads to community breakdown
    • Get to community support quickly and maintain community control

  22. Summit Outcome: Comprehensive Set of Questions
  • Runtime system software architecture
    • What are the major services provided by this architecture?
    • What is the strategy that the runtime system has to embody? What is the role of the different execution models?
    • What would the principal components be? What are the semantics of these components?
    • What are the mechanisms for managing and scheduling units of computation and data?
    • How does the runtime dynamically adapt scheduling and placement so that metrics of code-data affinity, power consumption, migration cost, and resiliency are improved?
    • How does the runtime manage resources (compute, memory, power, bandwidth) to meet a power, energy, and performance objective? How are resources exposed?
    • How does the runtime scale? How does the runtime ensure its scalability?
    • What is the role of name/address spaces? Are they global or not? What is their scope?
    • What programming models will be supported by the runtime architecture?
    • How will composability be enabled?
    • What OS support is assumed? What can the OS ask/expect from runtimes?
    • What can compilers ask/expect of runtimes? What can runtimes ask/expect of compilers?
    • Just-in-time compilation: what runtime support is needed?
    • What hardware support is assumed, can be exploited, or can be helpful? What machine model is assumed?
    • What cost model is assumed (energy, performance, resilience)?
    • How does the runtime system enable the use of application or system information for resilience? In general, how does the runtime system use information?
    • What do tools expect from runtimes, and what do runtimes expect from tools?

  23. Summit Outcome: Runtime Systems Major Components
  • Location managers
  • Prefetcher for explicit memory management
  • Lightweight migratable threads
  • Introspection management; monitoring/tools interface
  • Reliable data store (I/O is embedded here)
  • Global termination detection
  • Event and synchronization framework (see the sketch after this list)
  • Failure detection
  • Failure recovery
  • Adaptive controller
  • Interoperability (with in-situ analysis, visualization, etc.)
  • Composability manager
  • Unit-of-computation manager and scheduling
  • Name service for everything that one wants to virtualize; address allocation/translation
  • Data distribution and redistribution
  • Locality management
  • Power management
  • Communication interfaces and/or infrastructure
  • Network I/O
  • Active storage (compute in storage); I/O, locality
  • Load balancers
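As an illustrative sketch of the "event and synchronization framework" component in the list above (not any specific runtime's design), the toy C++ below fires a continuation once all of its input events are satisfied. The names Event and JoinCounter are hypothetical.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

// Toy event: callbacks registered on the event run when it is satisfied.
class Event {
public:
    void add_listener(std::function<void()> f) { listeners_.push_back(std::move(f)); }
    void satisfy() {
        for (auto& f : listeners_) f();
        listeners_.clear();  // fire each listener at most once
    }
private:
    std::vector<std::function<void()>> listeners_;
};

// Runs a task only after `count` dependency events have fired,
// i.e. a minimal event-driven join/synchronization primitive.
class JoinCounter {
public:
    JoinCounter(int count, std::function<void()> task)
        : remaining_(count), task_(std::move(task)) {}
    void depend_on(Event& e) {
        e.add_listener([this] { if (--remaining_ == 0) task_(); });
    }
private:
    int remaining_;
    std::function<void()> task_;
};

int main() {
    Event a, b;
    JoinCounter join(2, [] { std::printf("both inputs ready, task runs\n"); });
    join.depend_on(a);
    join.depend_on(b);
    a.satisfy();
    b.satisfy();  // the joined task fires here
}
```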

  24. Summit Outcome: Component Interfaces
  • Interfaces should:
    • Support hybrid programming models
    • Interface to compilers and the OS
    • Ensure progress guarantees (formal methods)
  • Interfaces may be distributed or centralized
  • Homework:
    • Refine the major components and their definitions
    • For each component, describe its interfaces
  • Team:
    • Sanjay, Vivek, Thomas, Costin, Ron (composability), Vijay, Milind
  • Deadlines:
    • Initial draft to distribute to summit participants: May 12
    • Final draft to be presented at the X-Stack meeting: May 27

  25. Summit Outcome: Runtime Systems Vision Statement
  Enable efficient application execution on exascale hardware with runtime systems that address the need for massive hierarchical concurrency, data-movement minimization, failure tolerance, adaptation to performance variability, and management of energy and system resources.

  26. Summit Outcome: How Do We Measure Success?
  • Micro-benchmarks and mini-apps
  • How to measure? Metrics (a toy efficiency/idleness computation is sketched after this list):
    • Time, work, energy
    • Idleness
    • A combined task-scheduling and communication metric
    • Flexibility of the system (doing new things quickly)
    • Overhead: not orthogonal to starvation; puts a lower bound on the thread granularity that can be exploited and reduces concurrency (Sanjay can explain how)
    • Need to engage performance tools
    • What is the key bottleneck?
    • Ease-of-programming/productivity metric: what could that be?
  • Proposed high-level criteria
    • Efficiency, scalability, productivity
    • Reliability, power management
    • Move from static control to dynamic control; introspection
    • Move programming burden from the programmer to the system
    • Heterogeneity
    • Strong scaling and greater generality
    • SLOWER: starvation, latency, overhead, waiting, energy, resilience
  • How well is energy conserved?
  • How do we measure the runtime's ability to handle heterogeneity and variability?
  • How do we measure resilience?
  • How do we measure the ability to handle load imbalances?
  • How do we measure scalability?
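As a purely illustrative toy (not a summit artifact), the C++ snippet below computes two of the simpler quantities mentioned above, parallel efficiency and idleness, from assumed per-worker busy times. The numbers and variable names are made up for the example.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Assumed per-worker busy time in seconds for one run of a mini-app.
    std::vector<double> busy_seconds = {9.2, 8.7, 7.5, 9.0};

    // Approximate wall-clock time by the busiest worker.
    double wall_clock = *std::max_element(busy_seconds.begin(), busy_seconds.end());

    double work = 0.0;
    for (double b : busy_seconds) work += b;           // total useful work
    double capacity = wall_clock * busy_seconds.size(); // total worker time

    // Efficiency = useful work / available worker time; idleness is the rest.
    std::printf("efficiency = %.2f, idleness = %.2f\n",
                work / capacity, 1.0 - work / capacity);
}
```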

  27. Plan to Create a Roadmap for Runtime Research
  • New runtime research
    • Dynamic, interactive steering
    • Energy consumption / power management
    • Making the machine more usable by system administrators
    • A learning runtime that adapts based on observations
    • How to deal with variability
    • Interoperability of the runtime with workflows, the job scheduler, and in-situ analytics: models of use
    • Integration into an open-source community runtime: Modelado
    • Testing/validation for ASCR/NNSA apps running at scale
  • New runtime research
    • Runtime mechanisms to extract parallelism
    • Proof that dynamic adaptive runtime systems are or are not needed due to variability: simulation modeling? When can we get this done? Need trends, need to know bounds.
  • Metrics crosscutting with existing and new research
  • Programming interfaces for programmer engagement
  • Composability management
  • Integration with I/O, network, storage
  • Debugging, debugging, debugging
  • Compute everywhere: adds challenges for debugging
  • Distributed algorithms for scheduling that scale
  • Workflow usage models
  • Improved micro-benchmarks and mini-apps that exploit runtime system attributes

  28. Plan to Create a Roadmap for Runtime Research
  • Major milestones and timeline
    • Should follow the hardware timeline
    • P0: petascale node by 2017
    • P1: exascale node by 2019
    • P2: exascale cabinet prototype by 2022
  • Requirement:
    • Demonstrate the benefits of dynamic adaptive runtimes for regular apps (2014-2015)
  • P0:
    • 1. Evaluation of proxy apps with different runtimes, exercising composability
    • 2. Identifying hardware dependencies and pruning the list
    • 3. Demonstrate that runtimes can scale up to petascale
    • 4. Intermediate representation identified/specified
    • 5. Models and evaluation methodologies
    • 6. Model for compute everywhere
    • 7. Model for debuggability
    • 8. Demonstrate in a multi-node context
    • 9. Demonstrate explicit management of memory, or the other way around, if it can be done by 2017
  • P1:
    • 1. Evaluation of larger proxy apps with different runtimes, exercising composability
    • 2. Refining the hardware dependencies list
    • 4. Demonstrate benefits of the intermediate representation
    • 5. Demonstrate runtime mechanisms to extract parallelism in an exascale context
    • 3. Demonstrate that runtimes can scale up to exascale
    • 10. Validation of models and evaluation methodologies
    • 6. Validation of the model for compute everywhere
    • 7. Validation of the model for debuggability
    • 8. Demonstrate in a multi-node context
    • 9. Demonstrate explicit management of memory, or the other way around

  29. Plan to Create a Roadmap for Runtime Research
  • Homework:
    • Refine the milestones for P0, P1, and P2
  • Team: Thomas, Wilf, Ron, Costin
  • Deadlines:
    • First draft to be distributed to the summit participants: May 12
    • Final draft to be shared at the X-Stack PI meeting: May 27
  • P2:
    • 1. Evaluation of apps running at scale with different runtimes
    • 2. Refining the hardware dependencies list
    • 4. Demonstrate benefits of the intermediate representation
    • 5. Demonstrate runtime mechanisms to extract parallelism in an exascale context
    • 3. Demonstrate that runtimes can scale up to exascale
    • 10. Validation of models and evaluation methodologies at scale
    • 6. Validation of the model for compute everywhere at scale
    • 7. Validation of the model for debuggability at scale
    • …
    • …

  30. Plan for Writing the Report
  • Proposed outline
    • Top challenges
    • Comprehensive set of questions to be answered
    • State of the art: how are the challenges and questions addressed in the existing runtime systems that we want to leverage?
    • Towards a unified runtime systems architecture
      • Components
      • Interfaces
    • Conclusion
      • Recommendations towards a jointly evolving vision of a unified runtime systems architecture
      • Recommendations on the roadmap for runtime systems research
      • Recommendations regarding workshops
  • Schedule
    • First draft: May 27 (target 10 pages)
    • Present at the X-Stack meeting and collect feedback from the X-Stack community
    • Coordination calls in June and July
    • Final draft: July 31
  • Assignments plan
    • Send me proposed changes to the outline and volunteer for a report section by April 16
    • I will distribute assignments by April 22
