Dynamic Load Balancing in Scientific Simulation Angen Zheng
Static Load Balancing: No Data Dependency
No communication among PUs.
(Figure: an initially balanced load distribution across PU 1, PU 2, and PU 3 stays unchanged throughout the computation.)
• Distribute the load evenly across processing units.
• Is this good enough? It depends!
• No data dependency!
• The load distribution remains unchanged!
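As a toy illustration of this static case (no dependencies, just even distribution), here is a minimal sketch in Python; the task weights and PU count are made-up values, not from the deck:

```python
# Minimal sketch: static load balancing with no data dependencies.
# Each task has a weight (its computation cost); we greedily assign every
# task to the currently least-loaded processing unit (PU).
import heapq

def static_balance(task_weights, num_pus):
    """Return a list mapping task index -> PU index."""
    heap = [(0.0, pu) for pu in range(num_pus)]   # (current load, PU id)
    heapq.heapify(heap)
    assignment = [None] * len(task_weights)
    # Assigning heavier tasks first gives a tighter balance.
    for task in sorted(range(len(task_weights)), key=lambda t: -task_weights[t]):
        load, pu = heapq.heappop(heap)
        assignment[task] = pu
        heapq.heappush(heap, (load + task_weights[task], pu))
    return assignment

# Example: 8 tasks of varying cost spread over 3 PUs.
print(static_balance([4, 2, 7, 1, 3, 5, 2, 6], 3))
```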
Static Load Balancing: Data Dependency
PUs need to communicate with each other to carry out the computation.
(Figure: an initially balanced load distribution across PU 1, PU 2, and PU 3 stays unchanged throughout the computation.)
• Distribute the load evenly across processing units.
• Minimize inter-processing-unit communication!
• By collocating the most heavily communicating data on a single PU.
Load Balancing in Scientific Simulation
PUs need to communicate with each other to carry out the computation.
(Figure: iterative computation steps turn the initially balanced load distribution into an imbalanced one; dynamic load balancing repartitions the load to restore balance.)
• Distribute the load evenly across processing units.
• Minimize inter-processing-unit communication!
• By collocating the most heavily communicating data on a single PU.
• Minimize data migration among processing units.
Dynamic Load Balancing: (Hyper)graph Partitioning
• Given a (hyper)graph G = (V, E).
• (Hyper)graph partitioning: partition V into k parts P1, P2, …, Pk such that all parts are
• Disjoint and covering: P1 ∪ P2 ∪ … ∪ Pk = V and Pi ∩ Pj = Ø for i ≠ j.
• Balanced: |Pi| ≤ (|V| / k) · (1 + ε).
• Edge-cut minimized: the total weight of edges crossing different parts is minimized.
(Figure: an example partitioning with communication cost B_comm = 3.)
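To make the objective concrete, here is a minimal sketch that computes the edge-cut and checks the balance constraint for a given partition; the graph representation (an edge list plus a vertex-to-part dict) is an assumption for illustration only:

```python
# Minimal sketch: edge-cut and balance check for a k-way graph partition.
# `edges` is a list of (u, v, weight); `part` maps vertex -> part id.

def edge_cut(edges, part):
    """Total weight of edges whose endpoints lie in different parts."""
    return sum(w for u, v, w in edges if part[u] != part[v])

def is_balanced(part, k, eps):
    """Check |Pi| <= (|V| / k) * (1 + eps) for every part Pi."""
    sizes = [0] * k
    for p in part.values():
        sizes[p] += 1
    limit = (len(part) / k) * (1 + eps)
    return all(s <= limit for s in sizes)

# Example: 6 vertices split into 2 parts.
part  = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 5, 1), (1, 4, 1)]
print(edge_cut(edges, part), is_balanced(part, k=2, eps=0.05))  # -> 2 True
```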
Dynamic Load Balancing: (Hyper)graph Repartitioning
• Given an already partitioned (hyper)graph G = (V, E).
• (Hyper)graph repartitioning: repartition V into k parts P1, P2, …, Pk such that all parts are
• Disjoint.
• Balanced.
• Edge-cut is minimal.
• Migration is minimal.
(Figure: repartitioning the initial partitioning yields communication cost B_comm = 4 and migration cost B_mig = 2.)
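Repartitioning adds a second objective: migration. A minimal sketch (illustrative only) of how the migration volume can be measured, given the old and new vertex-to-part assignments and per-vertex data sizes:

```python
# Minimal sketch: migration volume induced by a repartitioning.
# `old_part` / `new_part` map vertex -> part id; `size` maps vertex -> data size.

def migration_volume(old_part, new_part, size):
    """Total amount of data that must move because its part changed."""
    return sum(size[v] for v in old_part if new_part[v] != old_part[v])

old_part = {0: 0, 1: 0, 2: 1, 3: 1}
new_part = {0: 0, 1: 1, 2: 1, 3: 0}
size     = {0: 10, 1: 5, 2: 8, 3: 3}
print(migration_volume(old_part, new_part, size))  # -> 8 (vertices 1 and 3 move)
```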
Dynamic Load Balancing: (Hyper)graph Repartition-Based
• Building the (hyper)graph:
• Vertices represent data.
• Vertex object size reflects the amount of data per vertex.
• Vertex weight accounts for the computation per vertex.
• Edges reflect data dependencies.
• Edge weights represent the communication among vertices.
(Figure: build the initial (hyper)graph, partition it, run iterative computation steps, update the (hyper)graph, then repartition the updated (hyper)graph to obtain the load distribution after repartitioning.)
This reduces dynamic load balancing to a (hyper)graph repartitioning problem.
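A minimal sketch of how such a graph might be assembled, using networkx as one possible container; the mesh cells, weights, and dependency list below are made-up placeholders, not the deck's actual data:

```python
# Minimal sketch: building the load-balancing graph from simulation data.
# Vertices are data items (e.g., mesh cells); vertex weight = computation,
# vertex size = bytes to migrate; edge weight = communication volume.
import networkx as nx

G = nx.Graph()

# Hypothetical per-cell (computation cost, data size in bytes).
cells = {0: (1.0, 64), 1: (2.5, 64), 2: (1.2, 128), 3: (0.8, 64)}
for cell, (compute_cost, nbytes) in cells.items():
    G.add_node(cell, weight=compute_cost, size=nbytes)

# Hypothetical data dependencies (e.g., shared mesh faces) with comm volume.
dependencies = [(0, 1, 16), (1, 2, 32), (2, 3, 16), (0, 3, 8)]
for u, v, comm in dependencies:
    G.add_edge(u, v, weight=comm)

# This graph (or its hypergraph analogue) is what gets (re)partitioned.
print(G.number_of_nodes(), G.number_of_edges())
```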
(Hyper)graph Repartition-Based Dynamic Load Balancing: Cost Model
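The original slide presents this model as an equation that is not reproduced here; as a hedged reconstruction, following the repartitioning model of [4], with α the number of computation iterations executed between two rebalancing steps:

```latex
\[
  T \;\approx\; \alpha \left( t_{\text{comp}} + t_{\text{comm}} \right)
     \;+\; t_{\text{repart}} \;+\; t_{\text{mig}}
  \qquad\Longrightarrow\qquad
  \min \;\; \alpha \, B_{\text{comm}} + B_{\text{mig}}
\]
```

Since the computation term is kept balanced by construction and the repartitioning time is comparatively small, the quantity actually minimized is the combined α·B_comm + B_mig, which is why the earlier slides report both B_comm and B_mig.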
(Hyper)graph Repartition-Based Dynamic Load Balancing: Network Topology
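The slide body is graphical; as an illustrative sketch of the idea, a topology-aware cost weights each cut edge by the network distance (e.g., hop count) between the PUs that own its endpoints. The hop-distance matrix and helper name below are assumptions for illustration, not the deck's actual code:

```python
# Minimal sketch: topology-aware communication cost.
# Each cut edge is weighted by the network distance (hop count)
# between the PUs that own its endpoints.

# Hypothetical hop-distance matrix between 3 compute nodes.
hops = [
    [0, 1, 2],
    [1, 0, 1],
    [2, 1, 0],
]

def topo_comm_cost(edges, part, hops):
    """Sum of edge_weight * hop_distance over all cut edges."""
    return sum(w * hops[part[u]][part[v]]
               for u, v, w in edges if part[u] != part[v])

edges = [(0, 1, 4), (1, 2, 2), (2, 3, 5)]
part  = {0: 0, 1: 0, 2: 2, 3: 1}
print(topo_comm_cost(edges, part, hops))  # -> 2*2 + 5*1 = 9
```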
(Hyper)graph Repartition-Based Dynamic Load Balancing: Cache Hierarchy
(Figure: starting from the initial partitioning, iterative computation steps update the (hyper)graph; rebalancing repartitions it, and data is migrated once after repartitioning.)
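By analogy with the network case, cache sharing can be encoded as a distance between cores. A hedged sketch; the 4-core topology below is a made-up example (cores 0–1 share an L2, cores 2–3 share an L2, all four share the L3):

```python
# Minimal sketch: "cache distance" between cores of one node.
# Smaller distance = more cache levels shared, so cheaper core-to-core traffic.
cache_dist = [
    [0, 1, 2, 2],
    [1, 0, 2, 2],
    [2, 2, 0, 1],
    [2, 2, 1, 0],
]

def intra_node_comm_cost(edges, core_of, cache_dist):
    """Edge weight scaled by the cache distance between the owning cores."""
    return sum(w * cache_dist[core_of[u]][core_of[v]]
               for u, v, w in edges if core_of[u] != core_of[v])

edges   = [(0, 1, 3), (1, 2, 2)]
core_of = {0: 0, 1: 1, 2: 3}
print(intra_node_comm_cost(edges, core_of, cache_dist))  # -> 3*1 + 2*2 = 7
```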
Hierarchical Topology-Aware (Hyper)graph Repartition-Based Dynamic Load Balancing
• Inter-Node Repartitioning:
• Goal: group the most heavily communicating data onto compute nodes that are close to each other.
• Solution:
• Regrouping.
• Repartitioning.
• Refinement.
• Intra-Node Repartitioning:
• Goal: group the most heavily communicating data onto cores that share more levels of cache.
• Solution #1: hierarchical repartitioning.
• Solution #2: flat repartitioning.
Hierarchical Topology-Aware (Hyper)graph Repartition-Based Dynamic Load Balancing
• Inter-Node Repartitioning:
• Regrouping.
• Repartitioning.
• Refinement.
(Figure: the regrouping step.)
Hierarchical Topology-Aware (Hyper)graph Repartition-Based Dynamic Load Balancing
• Inter-Node (Hyper)graph Repartitioning:
• Regrouping.
• Repartitioning.
• Refinement.
(Figure: the repartitioning step. Migration cost: 2 (inter-node) + 2 (intra-node). Communication cost: 3 (inter-node).)
Topology-Aware Inter-Node (Hyper)graph Repartitioning
• Inter-Node (Hyper)graph Repartitioning:
• Regrouping.
• Repartitioning.
• Refinement.
(Figure: the refinement step. Migration cost: 2 (intra-node). Communication cost: 3 (inter-node).)
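Refinement can be viewed as relabeling the freshly computed parts so that each new part lands on the node that already holds most of its data, turning inter-node migration into cheaper intra-node migration where possible. A minimal greedy sketch under that reading; the data layout is illustrative, and real implementations typically solve a matching problem instead of this greedy pass:

```python
# Minimal sketch: greedy refinement that maps new parts onto nodes so as to
# maximize the data that stays on its current node (i.e., minimize migration).

def refine_mapping(old_node, new_part, size, num_nodes):
    """Return a dict new_part_id -> node_id (assumes one part per node)."""
    # overlap[p][n] = amount of part p's data already resident on node n.
    overlap = {}
    for v, p in new_part.items():
        overlap.setdefault(p, [0] * num_nodes)[old_node[v]] += size[v]

    mapping, used = {}, set()
    # Greedily give each part (largest first) its best still-unused node.
    for p in sorted(overlap, key=lambda p: -sum(overlap[p])):
        best = max((n for n in range(num_nodes) if n not in used),
                   key=lambda n: overlap[p][n])
        mapping[p] = best
        used.add(best)
    return mapping

old_node = {0: 0, 1: 0, 2: 1, 3: 1}
new_part = {0: 0, 1: 1, 2: 1, 3: 0}
size     = {0: 10, 1: 5, 2: 8, 3: 3}
print(refine_mapping(old_node, new_part, size, num_nodes=2))  # -> {0: 0, 1: 1}
```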
Hierarchical Topology-Aware Intra-Node (Hyper)graph Repartitioning
• Main idea: repartition the subgraph assigned to each node hierarchically, following the cache hierarchy (see the sketch below).
(Figure: an example mapping of parts onto cores 0–5 of a node.)
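A hedged sketch of the hierarchical idea: recursively split the node's subgraph following the cache tree (socket → shared L3 → shared L2 → core), so vertices that communicate most end up on cores sharing the deepest cache. The split below is a naive round-robin purely for illustration; a real implementation would call a graph partitioner at each level:

```python
# Minimal sketch: hierarchical intra-node repartitioning guided by a cache tree.
# The cache tree is nested lists of core ids; hypothetical node with two L2 groups:
cache_tree = [[0, 1], [2, 3]]      # cores 0-1 share an L2, cores 2-3 share an L2

def hierarchical_partition(vertices, tree, assignment=None):
    """Recursively split `vertices` across the children of `tree`."""
    if assignment is None:
        assignment = {}
    if isinstance(tree, int):               # reached a single core
        for v in vertices:
            assignment[v] = tree
        return assignment
    # Naive split: deal vertices round-robin to the children (a real
    # implementation would bisect the subgraph minimizing the level's edge-cut).
    buckets = [[] for _ in tree]
    for i, v in enumerate(vertices):
        buckets[i % len(tree)].append(v)
    for child, bucket in zip(tree, buckets):
        hierarchical_partition(bucket, child, assignment)
    return assignment

print(hierarchical_partition(list(range(8)), cache_tree))
```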
Flat Topology-Aware Intra-Node (Hyper)graph Repartitioning
(Figure: the old partition and its assignment to cores.)
Flat Topology-Aware Intra-Node (Hyper)graph Repartitioning
(Figure: the old partition and the new partition.)
Flat Topology-Aware Intra-Node (Hyper)graph Repartitioning
(Figure: the old partition assignment, the new partition, the partition migration matrix, and the partition communication matrix.)
Flat Topology-Aware Intra-Node (Hyper)graph Repartitioning
(Figure: the partition migration matrix and the partition communication matrix for the new partition, which drive the assignment of new parts to cores.)
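A hedged sketch of how those two matrices can be used: after the one-shot repartitioning into one part per core, pick a part-to-core mapping that trades off the migration matrix (data each new part already has on each core) against the communication matrix (traffic between new parts), weighted by the cache distance between cores. The brute-force search and the α weight below are illustrative assumptions:

```python
# Minimal sketch: mapping new parts to cores from the two matrices.
# mig[p][c]  = data of new part p already resident on core c (stays put if p -> c)
# comm[p][q] = communication volume between new parts p and q
# dist[c][d] = cache distance between cores c and d (0 = same core)
from itertools import permutations

def map_parts_to_cores(mig, comm, dist, alpha=1.0):
    """Exhaustively pick the part->core mapping with the lowest total cost."""
    k = len(mig)
    best, best_cost = None, float("inf")
    for perm in permutations(range(k)):          # perm[p] = core of part p
        migration = sum(sum(mig[p]) - mig[p][perm[p]] for p in range(k))
        communication = sum(comm[p][q] * dist[perm[p]][perm[q]]
                            for p in range(k) for q in range(p + 1, k))
        cost = alpha * communication + migration
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

mig  = [[9, 1, 0], [0, 6, 2], [1, 1, 5]]         # hypothetical matrices
comm = [[0, 4, 1], [4, 0, 2], [1, 2, 0]]
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(map_parts_to_cores(mig, comm, dist))
```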
Major References
• [1] K. Schloegel, G. Karypis, and V. Kumar, "Graph partitioning for high performance scientific simulations," Army High Performance Computing Research Center, 2000.
• [2] B. Hendrickson and T. G. Kolda, "Graph partitioning models for parallel computing," Parallel Computing, vol. 26, no. 12, pp. 1519–1534, 2000.
• [3] K. D. Devine, E. G. Boman, R. T. Heaphy, R. H. Bisseling, and U. V. Catalyurek, "Parallel hypergraph partitioning for scientific computing," in Parallel and Distributed Processing Symposium (IPDPS 2006), 20th International, IEEE, 2006.
• [4] U. V. Catalyurek, E. G. Boman, K. D. Devine, D. Bozdag, R. T. Heaphy, and L. A. Riesen, "A repartitioning hypergraph model for dynamic load balancing," Journal of Parallel and Distributed Computing, vol. 69, no. 8, pp. 711–724, 2009.
• [5] E. Jeannot, E. Meneses, G. Mercier, F. Tessier, G. Zheng, et al., "Communication and topology-aware load balancing in Charm++ with TreeMatch," in IEEE Cluster 2013.
• [6] L. L. Pilla, C. P. Ribeiro, D. Cordeiro, A. Bhatele, P. O. Navaux, J.-F. Mehaut, L. V. Kale, et al., "Improving parallel system performance with a NUMA-aware load balancer," INRIA-Illinois Joint Laboratory on Petascale Computing, Urbana, IL, Tech. Rep. TR-JLPC-11-02, 2011.