
MOLAR: MOdular Linux and Adaptive Runtime support


Presentation Transcript


  1. MOLAR: MOdular Linux and Adaptive Runtime support. Project Team: David Bernholdt (1), Christian Engelmann (1), Stephen L. Scott (1), Jeffrey Vetter (1), Arthur B. Maccabe (2), Patrick G. Bridges (2), Frank Mueller (3), Ponnuswamy Sadayappan (4), Chokchai Leangsuksun (5). Affiliations: (1) Oak Ridge National Laboratory, (2) University of New Mexico, (3) North Carolina State University, (4) Ohio State University, (5) Louisiana Tech University. Briefing at: Scalable Systems Software meeting, Argonne National Laboratory, August 26, 2004.

  2. Research Plan • Create a modular and configurable Linux system that allows customized changes based on the requirements of applications, runtime systems, and cluster management software. • Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, and ease of use, and to provide support for legacy and promising programming models. • Advance computer RAS management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. • Explore the use of advanced monitoring and adaptation to improve application performance and the predictability of system interruptions.

  3. MOLAR map

  4. MOLAR map (diagram). MOLAR: Modular Linux and Adaptive Runtime support breaks down into: HEC Linux OS (modular, custom, light-weight kernel design); RAS: reliability, availability, serviceability (monitoring, extend/adapt, root cause analysis, high availability, message logging, process state saving in the runtime/OS); programming models; testbeds (evaluation and provided systems). Partners noted in the diagram: UNM, ORNL, LLNL, LaTech, NCSU, OSU, Cray.

  5. RAS for Scientific and Engineering Applications • High mean time between interrupts (MTBI) for hardware, system software, and storage devices. • High mean time between errors/failures that affect users. • Recovery is automatic w/o human intervention. • Minimal work loss due to recovery process. Computation – Storage – Network
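The "minimal work loss" goal is usually reasoned about through the checkpoint interval: with periodic checkpointing, the work lost to an interrupt averages about half an interval, and Young's classic approximation chooses the interval from the checkpoint cost and the MTBI. A minimal sketch in Python, with purely illustrative numbers:

```python
import math

def young_interval(checkpoint_cost_s: float, mtbi_s: float) -> float:
    """Young's approximation for the near-optimal checkpoint interval."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbi_s)

checkpoint_cost = 300.0      # seconds to write one checkpoint (assumed value)
mtbi = 24 * 3600.0           # one interrupt per day (assumed value)
interval = young_interval(checkpoint_cost, mtbi)
expected_loss = interval / 2.0   # average work lost when an interrupt strikes
print(f"checkpoint every {interval / 60:.1f} min; "
      f"expected work lost per interrupt ~ {expected_loss / 60:.1f} min")
```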

  6. Case for RAS in HEC • Today’s systems need to reboot to recover. • The entire system is often down for any maintenance or repair. • Compute nodes sit idle if their head (service) node is down. • Availability and MTBI typically decrease as the system grows. • The “hidden” costs of failures: researchers’ lost work-in-progress; researchers on hold; additional system staff; checkpoint & restart time. • Why do we accept such significant system outages due to failures, maintenance, or repair? • With the expected investment into HEC we simply cannot afford low availability! • We need to drastically increase the availability of HEC computing resources now!
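A back-of-the-envelope model shows why availability and MTBI fall as the machine grows: if node failures are independent, the system-level MTBI is roughly the per-node MTBF divided by the node count, and steady-state availability is MTBI / (MTBI + MTTR). A small sketch with assumed (not measured) numbers:

```python
def system_mtbi(node_mtbf_h: float, num_nodes: int) -> float:
    """System-level MTBI, assuming independent node failures."""
    return node_mtbf_h / num_nodes

def availability(mtbi_h: float, mttr_h: float) -> float:
    """Steady-state availability from MTBI and mean time to repair (MTTR)."""
    return mtbi_h / (mtbi_h + mttr_h)

node_mtbf = 50_000.0   # hours between failures for a single node (assumed)
mttr = 2.0             # hours to recover from an interrupt (assumed)
for nodes in (128, 1_024, 10_000):
    mtbi = system_mtbi(node_mtbf, nodes)
    print(f"{nodes:>6} nodes: MTBI = {mtbi:7.1f} h, "
          f"availability = {availability(mtbi, mttr):.4%}")
```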

  7. High-availability in Industry • Industry has shown for years that 99.999% (five nines) high availability is feasible for computing services. • Used in corporate web servers, distributed databases, business accounting, and stock exchange services. • OS-level high availability has not been a priority in the past. • Implementation involves complex algorithms. • Development and distribution licensing issues exist. • Most solutions are proprietary and do not perform well. • HA-OSCAR is the first freely available open-source HA cluster implementation. • If we don’t step up, deliver an open-source proof-of-concept implementation, and set the standard, no one will.

  8. Availability by the Nines* • Service is measured by “9s of availability”: 90% has one 9, 99% has two 9s, etc. • Good HA package + substandard hardware = up to 3 nines. • Enterprise-class hardware + stable Linux kernel = 5+ nines. *“Highly-Affordable High Availability” by Alan Robertson, Linux Magazine, November 2003, http://www.linux-mag.com/2003-11/availability_01.html
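The nines translate directly into allowable downtime per year, which is simple arithmetic; the sketch below prints the standard conversion (99.999% works out to roughly 5.3 minutes of downtime per year):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(1, 6):
    availability = 1.0 - 10.0 ** (-nines)      # one nine = 90%, two nines = 99%, ...
    downtime_min = (1.0 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nine(s): {availability:.5%} uptime, "
          f"{downtime_min:9.1f} minutes of downtime per year")
```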

  9. Federated System Management

  10. High-availability Methods • Active/Hot-Standby: Single head node. Idle standby head node(s). Backup to shared storage. Service interruption for the time of the fail-over. Rollback to backup. Simple checkpoint/restart. Service interruption for the time of the restore-over. • Active/Active: Many active head nodes. Work load distribution. Symmetric replication between head nodes. Continuous service. Always up-to-date. Complex distributed control algorithms. No restore-over necessary.
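To make the hot-standby side concrete, here is a toy sketch of the fail-over logic: the standby head node watches a heartbeat from the active head node and, once a timeout expires, rolls back to the shared-storage backup and takes over. The helper names (restore_from_shared_storage, take_over_services) are invented for illustration and are not HA-OSCAR APIs:

```python
import time

HEARTBEAT_TIMEOUT_S = 10.0   # seconds without a heartbeat before failing over (assumed)

class StandbyHeadNode:
    """Toy hot-standby monitor: idle until the active head node's heartbeat stops."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False

    def on_heartbeat(self):
        # Invoked whenever a heartbeat arrives from the active head node.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Called periodically; triggers fail-over after the missed-heartbeat timeout.
        if not self.active and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.fail_over()

    def fail_over(self):
        # Service is interrupted here: roll back to the last backup, then take over.
        state = self.restore_from_shared_storage()   # hypothetical helper
        self.take_over_services(state)               # hypothetical helper
        self.active = True

    def restore_from_shared_storage(self):
        return {}    # placeholder: load the last checkpoint from shared storage

    def take_over_services(self, state):
        pass         # placeholder: restart job scheduler, user management, etc.
```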

  11. High-availability Technology • Active/Hot-Standby: HA-OSCAR with active/hot-standby head node. Cluster system software. No support for multiple active/active head nodes. No middleware support. No support for compute nodes. • Active/Active: HARNESS with symmetric distributed virtual machine. Heterogeneous adaptable distributed middleware. No system-level support. • System-level data replication and distributed control service needed for active/active head node solution. • Reconfigurable framework similar to HARNESS needed to adapt to system properties and application needs.
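One way to picture the missing system-level piece: in an active/active setup, every head node applies the same updates in the same total order, so each replica stays up-to-date and can serve requests at any time. The sequence-number check below is just one illustrative stand-in for a distributed control service (virtual synchrony, a sequencer, etc.), not the project's chosen algorithm:

```python
class ReplicatedHeadNodeState:
    """Toy symmetric replication: identical state machines on every active head node."""

    def __init__(self):
        self.jobs = {}       # replicated data, e.g. the job queue
        self.next_seq = 0    # next update sequence number expected

    def apply(self, seq: int, op: str, job_id: str):
        # The distributed control service must deliver the same (seq, op, job_id)
        # stream to every replica; applying it in order keeps the copies identical.
        assert seq == self.next_seq, "update delivered out of order"
        if op == "submit":
            self.jobs[job_id] = "queued"
        elif op == "finish":
            self.jobs.pop(job_id, None)
        self.next_seq += 1

# Two active head nodes fed the same ordered update stream remain identical:
a, b = ReplicatedHeadNodeState(), ReplicatedHeadNodeState()
for update in [(0, "submit", "job1"), (1, "submit", "job2"), (2, "finish", "job1")]:
    a.apply(*update)
    b.apply(*update)
assert a.jobs == b.jobs == {"job2": "queued"}
```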

  12. Modular RAS Framework for Terascale Computing • High-Available Service Nodes (Service Node, Service Node, Service Node) connect to the compute nodes. • Layered services: Reliable Services (job scheduler, user management, etc.); Virtual Synchrony (distributed control service); Symmetric Replication (data replication service); Reliable Server Groups (group communication service); Communication Methods (TCP/IP, shared memory, etc.).
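The framework reads as a stack: communication methods at the bottom, group communication and replication in the middle, and reliable services such as the job scheduler on top. A hedged structural sketch (class and method names are invented for illustration; the virtual-synchrony/distributed-control layer is folded into the group communication layer for brevity):

```python
from abc import ABC, abstractmethod

class CommunicationMethod(ABC):
    """Bottom layer: TCP/IP, shared memory, etc."""
    @abstractmethod
    def send(self, node: str, message: bytes) -> None: ...

class GroupCommunicationService:
    """Reliable server groups: deliver a message to every member of the group."""
    def __init__(self, transport: CommunicationMethod, members: list[str]):
        self.transport, self.members = transport, members

    def multicast(self, message: bytes) -> None:
        for node in self.members:
            self.transport.send(node, message)

class DataReplicationService:
    """Symmetric replication built on top of group communication."""
    def __init__(self, group: GroupCommunicationService):
        self.group = group

    def replicate(self, update: bytes) -> None:
        self.group.multicast(update)

class JobScheduler:
    """Reliable service at the top of the stack; every state change is replicated."""
    def __init__(self, replication: DataReplicationService):
        self.replication = replication

    def submit(self, job_id: str) -> None:
        self.replication.replicate(f"submit {job_id}".encode())
```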
