Exploring the Design Tradeoffs for Exascale System Services through Simulation Ke Wang USRC Group, Los Alamos National Laboratory Summer Internship Results, in collaboration with DataSys Lab, IIT August 16th, 2012
Acknowledgements • DataSys Laboratory • Dr. Ioan Raicu • Michael Lang, USRC leader • Abhishek Kulkarni, Ph.D. student at Indiana University • Poster Submission: • Ke Wang, Abhishek Kulkarni, Michael Lang, Ioan Raicu, Andrew Lumsdaine, “Exploring the Design Tradeoffs for Exascale System Services through Simulation”, under review at SC12 Exploring the Design Tradeoffs for Exascale System Services through Simulation
Outline • Introduction & Motivation • Long-Term Aims and Contributions • System Services Taxonomy • Peer-to-Peer System Simulators • Simulating System Services • Related Work • Contributions • Future Work & Conclusion Exploring the Design Tradeoffs for Exascale System Services through Simulation
Outline • Introduction & Motivation • Long-Term Aims and Contributions • System Services Taxonomy • Peer-to-Peer System Simulators • Simulating System Services • Related Work • Contributions • Future Work & Conclusion Exploring the Design Tradeoffs for Exascale System Services through Simulation
Distributed System Services • Operating System: a service provider offering basic services, such as program development, access to I/O devices, controlled access to files, system access, and program execution • Generalized distributed system services involve many servers coordinating with each other to offer different services to many clients • Typical services: key-value stores, job schedulers, file servers, application job launch • Key issues: scalability, dynamicity, resiliency, consistency, fault tolerance Exploring the Design Tradeoffs for Exascale System Services through Simulation
Exascale Computing Top500 Performance Development, http://top500.org/static/lists/2011/11/TOP500_201111_Poster.pdf Today (June 18, 2012): 16 petaflops • O(100K) nodes (100X in the last 10 years) • O(1M) cores (1000X in the last 10 years) Near future (~2018): Exaflop Computing • ~1M nodes (10X) • ~1B processor-cores/threads (1000X)
Major Challenges of Exascale Computing Exploring the Design Tradeoffs for Exascale System Services through Simulation Energy and Power • 7.89 MW (the #1 supercomputer) • 20 MW limitation Memory and Storage • Retain data at high enough capacities • Access data at high enough rates • Support the desired computational rate • Fit within an acceptable power envelope Concurrency and Locality • Accelerators, GPUs, MIC • Programmability • Minimizing data movement Resiliency • MTTF decreases; MPI suffers
Current HPC System Services • Lack detailed decomposition • Centralized server with at most a single fail-over, for example SLURM (slurmctld, slurmd) • The scalability of different server topologies (centralized, hierarchical, distributed) is unclear, as are the costs of different resiliency and consistency models
Outline • Introduction & Motivation • Long-Term Aims and Contributions • System Services Taxonomy • Peer-to-Peer System Simulators • Simulating System Services • Related Work • Contributions • Future Work & Conclusion SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Long-Term Aims • Develop a simulator capable of simulating generic system services on up to 1M nodes • Compare the scalability of different server topologies, with and without churn • Quantify the costs of different resiliency models (fail-over, replication) for each server topology under different failure rates • Quantify the costs of different consistency models (strong/weak consistency) for each server topology SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
This Work’s Contributions • Deconstruct services into their most basic components, and provide a general taxonomy for classifying existing system services in terms of server architecture • Investigate and compare existing peer-to-peer simulators • Simulate these service architectures at scale, with millions of clients served by thousands of servers • Estimate basic parameters such as memory consumption analytically, and complex parameters such as client-perceived throughput, server throughput, and overall system efficiency • Demonstrate how churn affects the performance and efficiency of the system under different distributed service architectures SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Outline • Introduction & Motivation • Long-Term Aims and Contributions • System Services Taxonomy • Peer-to-Peer System Simulators • Simulating System Services • Related Work • Contributions • Future Work & Conclusion SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
System Service Blocks • We deconstruct services into their basic blocks to understand the design tradeoffs for exascale system services • The proposed taxonomy is still being refined as we investigate more HPC services • Components (a rough encoding is sketched below) • Service Model: defines overall behavior and constraints • describes high-level functionality, the architecture, and the roles of entities • ACID properties, CAP properties • Data Model: defines the distribution of persistent data • Centralized, or distributed with different levels of replication • Replication: partitioned (no replication), mirrored (full replication), overlapped (partial replication) • Network Model: dictates how the components are connected • Structured overlays: rings, binomial, k-ary, radix trees, complete/binomial graphs • Unstructured overlays: random graphs • Complete membership list (fully connected) vs. partial membership list (binomial graphs) • Failure Model: how the servers handle failures • Complete mirroring, triple modular redundancy • Consistency Model: depends on the data model and level of replication • Strong, weak, or eventual consistency SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
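To make the taxonomy concrete, the design space could be encoded as a handful of Java enums, as in the hypothetical sketch below; these type names are purely illustrative and not part of any SimMatrix code:

```java
// Hypothetical encoding of the service taxonomy; names are illustrative.
enum DataModel        { CENTRALIZED, PARTITIONED, MIRRORED, OVERLAPPED }
enum NetworkModel     { RING, BINOMIAL_TREE, K_ARY_TREE, COMPLETE_GRAPH, RANDOM_GRAPH }
enum FailureModel     { FAIL_OVER, COMPLETE_MIRRORING, TRIPLE_MODULAR_REDUNDANCY }
enum ConsistencyModel { STRONG, WEAK, EVENTUAL }

/** One point in the service design space: a candidate configuration to simulate. */
class ServiceConfig {
    final DataModel data;
    final NetworkModel network;
    final FailureModel failure;
    final ConsistencyModel consistency;

    ServiceConfig(DataModel d, NetworkModel n, FailureModel f, ConsistencyModel c) {
        this.data = d; this.network = n; this.failure = f; this.consistency = c;
    }
}
```

Sweeping combinations of these values would enumerate the service architectures a simulator can compare.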
Service Architectures SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
SimMatrix Architecture Figure 1: Simulation architectures; the left is the centralized architecture, with a single dispatcher connected to all nodes; the right is the homogeneous distributed topology, with each node having the same number of cores and neighbors SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Simulations • Continuous-time simulation • We abandoned the idea of creating a separate thread per simulated node: on our 48-core system with 256 GB of memory, we were limited to 32K threads • Discrete-event simulation • The only viable approach (today) for exploring scheduling techniques at exascale (millions of nodes and billions of cores) • We created a unique object per simulated node, and converted all behavior into events SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Outline • Introduction & Motivation • Long-Term Aims and Contributions • SimMatrix Architecture • Implementation • Evaluation • Related Work • Contributions • Future Work & Conclusion SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
At the Heart of SimMatrix: Global Event Queue • All events are inserted into the queue, sorted by occurrence time in ascending order • Handle the first event, advance the simulation time, and update the event queue • Implemented as the red-black-tree-based TreeSet in Java, which ensures O(log n) time for insert and remove (see the sketch below) Figure 2: Event State Transition Diagram SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
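A minimal sketch of this event loop, assuming the TreeSet-based design described above (class names are illustrative, not the actual SimMatrix code). One detail worth noting: a TreeSet silently discards elements that compare as equal, so each event needs a unique tie-breaker in addition to its occurrence time:

```java
import java.util.TreeSet;

// Illustrative sketch of a TreeSet-based global event queue; not the
// actual SimMatrix classes.
abstract class Event implements Comparable<Event> {
    private static long nextId = 0;
    final double time;        // simulation time at which the event occurs
    final long id = nextId++; // unique tie-breaker so equal-time events both survive

    Event(double time) { this.time = time; }

    public int compareTo(Event other) {
        int byTime = Double.compare(time, other.time);
        return byTime != 0 ? byTime : Long.compare(id, other.id);
    }

    /** Handle the event; may insert follow-up events into the queue. */
    abstract void handle(TreeSet<Event> queue);
}

class Simulator {
    static double now = 0.0;

    static void run(TreeSet<Event> queue) {
        while (!queue.isEmpty()) {
            Event e = queue.pollFirst(); // earliest event, O(log n)
            now = e.time;                // advance the simulation clock
            e.handle(queue);             // process it; may schedule new events
        }
    }
}
```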
Simulator Features • Node load information • Nested hash maps provide extremely fast lookups at large scales • Dynamic task submission • Aims to reduce the memory footprint • Dynamic poll interval • Exponential backoff reduces the number of messages and increases simulation speed (sketched below) SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
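The dynamic poll interval could look like the sketch below; the bounds and backoff factor are assumed values for illustration, not the constants used in SimMatrix:

```java
// Hypothetical exponentially backed-off poll interval.
class PollTimer {
    static final double MIN_INTERVAL = 0.01; // seconds, assumed
    static final double MAX_INTERVAL = 8.0;  // seconds, assumed
    private double interval = MIN_INTERVAL;

    /** Next wait time: reset when work was found, double (capped) otherwise. */
    double next(boolean foundWork) {
        if (foundWork) {
            interval = MIN_INTERVAL;                         // work available: poll eagerly
        } else {
            interval = Math.min(interval * 2, MAX_INTERVAL); // idle: back off
        }
        return interval;
    }
}
```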
Implementation • SimMatrix is developed in Java • Sun 64-bit JDK version 1.6.0_22 • 1500 lines of code • Code accessible at: • http://datasys.cs.iit.edu/projects/SimMatrix/index.html • SimMatrix has no other dependencies SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Outline • Introduction & Motivation • Long-Term Aims and Contributions • SimMatrix Architecture • Implementation • Evaluation • Related Work • Contributions • Future Work & Conclusion SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Experiment Environment • Fusion system: • fusion.cs.iit.edu • 48 AMD Opteron cores at 1.93GHz • 256GB RAM • 64-bit Linux kernel 2.6.31.5 • Sun 64-bit JDK version 1.6.0_22 SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Metrics • Throughput • Number of tasks finished per second, calculated as total-number-of-tasks/simulation-time. • Efficiency • The ratio between the ideal simulation time for completing a given workload and the real simulation time; the ideal simulation time is the average task execution time multiplied by the number of tasks per core. • Load Balancing • We adopt the coefficient of variation of the number of tasks finished by each node as the measure of load balancing, calculated as standard-deviation/average; the smaller the coefficient of variation, the better the load balancing. • Scalability • Total number of tasks, number of nodes, and number of cores supported. (See the code sketch below.) SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
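In code, these metric definitions amount to the following (a direct transcription of the formulas above; the method names are our own):

```java
// Metric definitions transcribed into Java; illustrative helper class.
class Metrics {
    /** Throughput: tasks finished per second. */
    static double throughput(long numTasks, double simTime) {
        return numTasks / simTime;
    }

    /** Efficiency: ideal time / real time, ideal = avg task length * tasks per core. */
    static double efficiency(double avgTaskLength, double tasksPerCore, double simTime) {
        return (avgTaskLength * tasksPerCore) / simTime;
    }

    /** Load balancing: coefficient of variation of tasks finished per node (lower is better). */
    static double coefficientOfVariation(long[] tasksPerNode) {
        double mean = 0;
        for (long t : tasksPerNode) mean += t;
        mean /= tasksPerNode.length;
        double var = 0;
        for (long t : tasksPerNode) var += (t - mean) * (t - mean);
        var /= tasksPerNode.length;
        return Math.sqrt(var) / mean; // standard deviation / average
    }
}
```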
Workloads • Synthetic workloads: • Uniform distributions with different average task lengths, such as 10s (ave_10), 100s (ave_100), 1000s (ave_1000), 5000s (ave_5000), 10000s (ave_10000), and 100000s (ave_100000); also all tasks of 1 sec each (all_1); a generation sketch follows below • Realistic application workloads: • General MTC workload from a 2008-2009 trace of 173M tasks; average task length 64±486s (mtc_64), modeled with a Gamma distribution SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
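A sketch of how the uniform synthetic workloads could be generated. The range [0, 2 × average] is an assumption made so that the stated average holds; the exact bounds used in the experiments may differ:

```java
import java.util.Random;

// Illustrative synthetic workload generator; not the actual SimMatrix code.
class WorkloadGenerator {
    private final Random rng = new Random();

    /** Task lengths drawn uniformly from [0, 2*avgLength], giving mean avgLength. */
    double[] uniformWorkload(int numTasks, double avgLength) {
        double[] lengths = new double[numTasks];
        for (int i = 0; i < numTasks; i++) {
            lengths[i] = rng.nextDouble() * 2 * avgLength;
        }
        return lengths;
    }
}
```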
Validation Validate SimMatrix against state-of-the-art MTC systems (e.g., Falkon) to ensure that the simulator can accurately predict the performance of current petascale systems. SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Comparing Work Stealing to Falkon’s Naïve Distributed Scheduler Fine-grained workloads: • efficiency increases from 2% to 99.3% Coarse-grained workloads: • efficiency increases from 99% to 99.999% SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Scalability: 1M Nodes and 10B Tasks Memory consumption • <13 KB/task • <200 GB CPU time • <90 us/task • <260 hours SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Scalability: 1M Nodes and 10B Tasks Efficiency • 90%+ Coefficient of variation • <0.06 • Load imbalance of <600 tasks out of 10K tasks per node SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Work Stealing Parameters: Number of Tasks to Steal Stealing half of a neighbor's work is the best strategy!
Work Stealing Parameters: Number of Neighbors (Static) A linear number of neighbors is required for good performance!
Work Stealing Parameters: Number of Neighbors (Dynamic Random) An increasing number of neighbors is needed for 90%+ efficiency, with the largest scales requiring a square root number of neighbors (e.g., 1K neighbors at 1M nodes)!
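Putting these findings together, one work-stealing step could look like the sketch below: steal half of a neighbor's queued tasks, and re-pick a square root number of random neighbors on each attempt. This is illustrative code under those assumptions, not the actual SimMatrix implementation:

```java
import java.util.Deque;
import java.util.Random;

// Illustrative sketch of the optimal work-stealing parameters found above.
// Tasks are represented as plain ints here for brevity.
class WorkStealing {
    static final Random rng = new Random();

    /** Dynamic random selection: pick k = sqrt(n) neighbors anew per attempt. */
    static int[] randomNeighbors(int self, int numNodes) {
        int k = (int) Math.sqrt(numNodes);      // e.g., 1K neighbors at 1M nodes
        int[] neighbors = new int[k];
        for (int i = 0; i < k; i++) {
            int v;
            do { v = rng.nextInt(numNodes); } while (v == self);
            neighbors[i] = v;                   // duplicates tolerated in this sketch
        }
        return neighbors;
    }

    /** Steal half of the victim's queued tasks (the best strategy found). */
    static void stealHalf(Deque<Integer> victim, Deque<Integer> thief) {
        int toSteal = victim.size() / 2;
        for (int i = 0; i < toSteal; i++) {
            thief.addLast(victim.pollLast());   // take from the tail of the queue
        }
    }
}
```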
Work Stealing Parameters: Generality of Optimal Parameters The same optimal parameters achieve 90%+ efficiency across many different workloads!
Work Stealing: Throughput Centralized scheduling has a severe bottleneck, especially for fine-grained workloads. Distributed scheduling scales well; for coarse-grained workloads, there is no obvious upper bound. SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Load Balancing Visualization: 1024 Nodes and ave_5000 Workload Four panels compare 2 static neighbors, quarter static neighbors, square root static neighbors, and square root dynamic neighbors, marking which configurations suffer starvation and which achieve good load balancing. SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Summary Plot for Distributed Scheduling Steady-state utilization is ~100% at exascale SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Outline • Introduction & Motivation • Long-Term Aims and Contributions • SimMatrix Architecture • Implementation • Evaluation • Related Work • Contributions • Future Work & Conclusion SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Related Work • Real Job Scheduling Systems: • Condor (University of Wisconsin), Bradley et al., 2012 • PBS (NASA Ames), Corbatto et al., 2012 • LSF Batch (Platform Computing of Toronto), 2011 • Falkon (University of Chicago), Raicu et al., SC07 • Job Scheduling System Simulators: • simJava (University of Edinburgh), Wheeler et al., 2004 • GridSim (University of Melbourne, Australia), Buyya et al., 2010 • Load Balancing: • Neighborhood averaging scheme, Sinha et al., 1993 • Charm++ (UIUC), Zheng et al., 2011 • Scalable Work Stealing: • Dinan et al., SC09 • Blumofe et al., Scheduling multithreaded computations by work stealing, 1994 SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Outline • Introduction & Motivation • Long-Term Aims and Contributions • SimMatrix Architecture • Implementation • Evaluation • Related Work • Contributions • Future Work & Conclusion SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Contributions • Designed, analyzed, and implemented a discrete-event simulator (SimMatrix) enabling the study of MTC workloads at exascale • Identified work stealing as a viable technique to achieve load balancing at exascale • Provided evidence that work stealing is scalable by finding the optimal parameters affecting its performance • Steal half of a neighbor's tasks • A dynamic random neighbor selection strategy is required • A square root number of neighbors is needed SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Outline • Introduction & Motivation • Long-Term Aims and Contributions • SimMatrix Architecture • Implementation • Evaluation • Related Work • Contributions • Future Work & Conclusion SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Future Work • Explore work stealing for manycore processors with 1000 cores • Enhance the network topology model to allow complex networks • Insights from SimMatrix will be used to develop MATRIX, a distributed task execution fabric • MATRIX will employ work stealing for distributed load balancing • MATRIX will be integrated with other projects, such as Swift (a data-flow parallel programming system) and FusionFS (a distributed file system) SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
Conclusion • Exascale systems bring great opportunities for unraveling significant scientific mysteries • There are significant challenges on the way to exascale, such as concurrency, resilience, I/O and memory, heterogeneity, and energy • MTC requires a highly scalable and distributed task/job management system at large scales • Distributed scheduling is likely an efficient way to achieve load balancing, leading to high job throughput and system utilization • Work stealing is a scalable method to achieve load balancing at exascale, given the optimal parameters SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales
More Information • More information: • http://datasys.cs.iit.edu/~kewang/ • http://datasys.cs.iit.edu/projects/SimMatrix/ • Contact: • kwang22@hawk.iit.edu • Questions? SimMatrix: SIMulator for MAny-Task computing execution fabRIc at eXascales