The MRNet Tree-based Overlay Network: Where We've Been, Where We're Going!
Dorian Arnold, Paradyn Project, University of Wisconsin
Philip C. Roth, Future Technologies Group, Oak Ridge National Laboratory
Abstract
• Large-scale systems are here
• Tree-based Overlay Networks (TBŌNs)
  • Intuitive, seemingly restrictive
  • Effective model for tool scalability
  • Prototype: www.paradyn.org/mrnet
• Where we've been
  • Tool scalability
  • Programming model for a large class of applications
• Where we're going
  • Topology studies
  • TBŌNs on high-performance networks
  • Filters on hardware accelerators
  • Reliability
HPC Trends
[Charts: November '05 processor-count distribution; growth in 1024-processor systems]
• Easier than ever to deploy thousands of processors (one BG/L rack!)
Hierarchical Distributed Systems
• Hierarchical topologies
  • Application control
  • Data collection
  • Data reduction/analysis
• As scale increases, the front-end becomes a bottleneck
[Diagram: front-end (FE) connected directly to many back-ends (BE)]
TBŌNs for Scalable Systems
TBŌNs for scalability:
• Scalable multicast
• Scalable gather
• Scalable data aggregation
[Diagram: front-end (FE) at the root, communication processes (CP) in the middle, back-ends (BE) at the leaves]
TBŌN Model
[Diagram: application front-end (FE) at the root, a tree of communication processes (CP) in the middle, and application back-ends (BE) at the leaves]
TBŌN Model
Reliable FIFO channels:
• Non-lossy
• Duplicate-suppressing
• Non-corrupting
[Diagram: FE, CPs, and BEs connected by reliable FIFO channels]
TBŌN Model
[Diagram: application-level packets flow up the tree; each CP applies a packet filter with its own filter state; packets in flight on each link form the channel state]
TBŌN Model
Filter function:
• Inputs a packet from each child
• Outputs a single packet
• Updates filter state
{output, new_state} ≡ f(inputs, cur_state)
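As a concrete reading of this definition, here is a minimal sketch (illustrative C++, not the actual MRNet filter API) of a sum filter in this model. Note that the output is a copy of the new state, a property the reliability discussion later exploits:

```cpp
#include <vector>

// Minimal sketch of the TBON filter model:
// {output, new_state} = f(inputs, cur_state).
// Here a "packet" is one integer and the filter keeps a running sum.
struct FilterResult {
    int output;     // single packet forwarded to the parent
    int new_state;  // updated filter state
};

FilterResult sum_filter(const std::vector<int>& inputs, int cur_state) {
    int sum = cur_state;
    for (int v : inputs) sum += v;  // fold in one packet per child
    return { sum, sum };            // output is a copy of the state
}
```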
TBŌNs at Work
• Multicast
  • ALMI [Pendarakis, Shi, Verma and Waldvogel '01]
  • End System Multicast [Chu, Rao, Seshan and Zhang '02]
  • Overcast [Jannotti, Gifford, Johnson, Kaashoek and O'Toole '00]
  • RMX [Chawathe, McCanne and Brewer '00]
• Multicast/gather (reduction)
  • Bistro (no reduction) [Bhattacharjee et al '00]
  • Gathercast [Badrinath and Sudame '00]
  • Lilith [Evensky, Gentile, Camp and Armstrong '97]
  • MRNet [Roth, Arnold and Miller '03]
  • Ygdrasil [Balle, Brett, Chen, LaFrance-Linden '02]
• Distributed monitoring/sensing
  • Ganglia [Sacerdoti, Katz, Massie, Culler '03]
  • Supermon (reduction) [Sottile and Minnich '02]
  • TAG (reduction) [Madden, Franklin, Hellerstein and Hong '02]
Example TBŌN Reductions
• Simple
  • Min, max, sum, count, average
  • Concatenate
• Complex
  • Clock synchronization [Roth, Arnold, Miller '03]
  • Time-aligned aggregation [Roth, Arnold, Miller '03]
  • Graph merging [Roth, Miller '05]
  • Equivalence relations [Roth, Arnold, Miller '03]
  • Mean-shift image segmentation [Arnold, Pack, Miller '06]
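Of the simple reductions, average is the one subtle case: a running average does not combine associatively across subtrees, so a filter would carry (sum, count) as its state and derive the average on demand. A hedged sketch (our illustration, not MRNet's built-in filter):

```cpp
#include <vector>

// Average reduction: the state is (sum, count) because averages of
// averages are wrong when child counts differ; partial results from
// any subtree combine correctly this way.
struct AvgState { double sum = 0.0; long count = 0; };

AvgState average_filter(const std::vector<AvgState>& inputs, AvgState cur) {
    for (const AvgState& in : inputs) {  // merge one partial result per child
        cur.sum   += in.sum;
        cur.count += in.count;
    }
    return cur;                          // output == new state (composable)
}

double average(const AvgState& s) { return s.count ? s.sum / s.count : 0.0; }
```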
TBŌNs for Tool Scalability
MRNet integrated into Paradyn:
• Efficient tool startup
• Performance data analysis
• Scalable visualization
TBŌNs for Scalable Applications
• Many algorithms are equivalence computations
  • Use equivalence/non-equivalence to summarize and analyze input data
• Streaming programming models
• Possibly even Bulk Synchronous Parallel programs
TBŌNs for Scalable Applications: Mean-Shift Algorithm
• Clusters points in feature spaces
• Useful for image segmentation
• Prohibitively expensive as feature-space complexity increases
[Figure: a window shifting toward the centroid of the points it covers]
TBŌNs for Scalable Applications: Mean-Shift Algorithm
• Partition data into windows and calculate window densities
• Keep windows above a chosen density threshold
• Run mean-shift on the remaining windows (see the sketch below)
• Keep local maxima as peaks
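To make the window/centroid iteration concrete, here is a minimal 1-D sketch of the core mean-shift step (our illustration, not the authors' MRNet filter code):

```cpp
#include <cmath>
#include <vector>

// One mean-shift run: repeatedly move the window to the centroid of the
// points it covers until the shift falls below a tolerance; the final
// position approximates a local density maximum (a "peak").
double mean_shift(const std::vector<double>& pts, double center,
                  double radius, double tol = 1e-6) {
    for (;;) {
        double sum = 0.0;
        int n = 0;
        for (double p : pts) {              // points inside the window
            if (std::fabs(p - center) <= radius) { sum += p; ++n; }
        }
        if (n == 0) return center;          // empty window: stop
        double centroid = sum / n;
        if (std::fabs(centroid - center) < tol) return centroid;
        center = centroid;                  // shift window to its centroid
    }
}
```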
TBŌNs for Scalable Applications: Mean-Shift Algorithm
• Uses MRNet as a general-purpose programming paradigm
• Implements mean-shift in custom MRNet filters
• ~6x speedup with only 6% more nodes
TBŌN Computational Model
At large scales, suitable for algorithms with:
• Complexity ≥ O(n), where n is the input size
• Output size ≤ total input size*
  (*sometimes the algorithm just runs faster on the output: better-behaved input)
• Output in the same form as the inputs
  • E.g., if the inputs are sets of elements, the output should be a set of elements
Research and Development Directions
• TBŌN topology studies
• TBŌNs and high-performance networks
• Use of emerging technologies for TBŌN filters
• TBŌN reliability
TBŌN Topology
We expect many factors to influence the “best” topology:
• Physical network topology and capabilities
• Expected traffic (type and volume)
• Desired reliability guarantees
• Cost of “extra” nodes
TBŌN Topology Investigation
• Previous studies used reasonable topologies
• How these factors influence performance remains an open question
• Beginning a rigorous effort to investigate this issue:
  • Performance modeling
  • Empirical studies on a variety of systems
High-end Network Support
• Current MRNet implementation uses TCP/IP sockets
• Many high-end networks provide TCP/IP support
  • E.g., IP over Quadrics QsNet
  • Flexible, but undesirable for performance reasons
• Effort underway to support alternative data transports
  • One-sided, OS/application bypass
  • Complements topology investigations
  • Initially targeting Portals on the Cray XT3
High-Performance Filters on Hardware Accelerators
• Multi-paradigm computing (MPC) systems are here
  • MPC systems include several types of processors: FPGAs, multi-core processors, GPUs, PPUs, MTA processors
  • E.g., the Cray Adaptive Supercomputing strategy, SRC Computers, Linux Networx, DRC FPGA co-processors
• Streaming approach expected to work well for some processor types
• Running filters on accelerators is a natural fit for some applications, e.g., the Sloan Digital Sky Survey and the Large Synoptic Survey Telescope
TBŌN Reliability
MTTF ∝ 1 / (system size)
Given the emergence of TBŌNs for scalable computing, low-cost reliability for TBŌN environments becomes critical!
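A standard back-of-the-envelope makes this concrete (our numbers, not from the talk; assumes independent, exponentially distributed node failures):

MTTF_system ≈ MTTF_node / N

E.g., a five-year node MTTF (≈ 43,800 hours) across 10,000 nodes gives a system MTTF of roughly 43,800 / 10,000 ≈ 4.4 hours.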
TBŌN Reliability
• Goal
  • Tolerate process failures
  • Avoid checkpoint overhead
• General concept: leverage TBŌN properties
  • Natural information redundancies
  • Computational semantics
    • Lost state may be replaced by non-identical state
    • Computational equivalence: a relaxed consistency model
• Zero-cost: no additional computation, storage, or network overhead during normal operation
  • Define operations that compensate for lost state
  • Maintain computational equivalence
TBŌN Information Redundancies
Fundamental to the TBŌN model:
• Input streams propagate toward the root
• Persistent state summarizes input history
• Therefore, the summary is replicated naturally as input propagates upstream
Recovery Strategy
if failure is detected then
  Reconstruct tree
  Regenerate compensatory state
  Reintegrate state into tree
  Resume normal operation
end if
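A sketch of these steps (our illustration; the types and function names are hypothetical, not MRNet's implementation), using set-valued filter state as in the example slides that follow:

```cpp
#include <set>
#include <vector>

// Filter state here is a set of distinct values seen, as in the
// state-composition example below.
struct CP {
    CP* parent = nullptr;
    std::vector<CP*> children;
    std::set<int> state;
};

CP* recover(CP* failed) {
    // 1. Reconstruct tree: stand up a replacement communication process
    CP* repl = new CP;
    repl->parent = failed->parent;
    repl->children = failed->children;
    for (CP* child : repl->children)
        child->parent = repl;
    // 2. Regenerate compensatory state by composing the children's
    //    states (for this filter, composition is set union)
    for (CP* child : repl->children)
        repl->state.insert(child->state.begin(), child->state.end());
    // 3. Reintegrate into the parent's child list, then resume normal
    //    operation (omitted in this sketch)
    return repl;
}
```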
State Regeneration: Composition
[Diagram: parent CPi over children CPj and CPk]
Parent's state is the composition of its children's states:
fs(CPi) ≡ fs(CPj) ⊕ fs(CPk)
State Regeneration: Composition
fs(CPi) ≡ fs(CPj) ⊕ fs(CPk)
(⊕: composition operator; fs(CPj), fs(CPk): children's states; fs(CPi): parent's state)
State composition:
• Input: filter state from the children
• Output: computationally-equivalent state for the parent
State Regeneration: Composition
Where does this mysterious composition operation come from? Recall the filter definition:
{output, new_state} ≡ f(inputs, cur_state)
When the filter's new_state is a copy of its output, then f itself becomes the composition operator.
State Regeneration: Composition Proof Outline
• State is the history of processed inputs
• Children's output becomes the parent's input
• Updated state is a copy of the output
  • So it can be used as input to the filter function
• Therefore, executing the filter on the children's states produces a computationally equivalent state for the parent
State Regeneration: Composition
Composition can also work when the output is not a copy of the state!
• Requires a mapping operation from filter state to output form
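A minimal end-to-end sketch (illustrative C++, not the MRNet API) using set union as the filter, with the same states as the walkthrough that follows: applying f to the surviving children's states regenerates the parent's state.

```cpp
#include <iostream>
#include <set>
#include <vector>

using State = std::set<int>;

// Set-union filter: the state is the set of distinct values seen, and
// the output is a copy of the state, so f doubles as the composition
// operator.
State f(const std::vector<State>& inputs, State cur_state) {
    for (const State& in : inputs)
        cur_state.insert(in.begin(), in.end());
    return cur_state;  // output == new_state
}

int main() {
    State cp1 = {1, 3, 4, 5};  // surviving children's states
    State cp2 = {1, 5, 8};
    // Regenerate the failed parent's state: fs(CP0) = f({fs(CP1), fs(CP2)}, {})
    State cp0 = f({cp1, cp2}, {});
    for (int v : cp0) std::cout << v << ' ';  // prints: 1 3 4 5 8
    return 0;
}
```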
State Composition Example
[Animated figure: a tree with root CP0, internal nodes CP1 and CP2, and leaves CP3–CP6. The leaves stream in integers (3, 1, 1, 1, 4, 5, 5, 8, 3, 3, 1, 9, 1, 4, 1, 5); each process's filter state is the set of distinct values it has seen so far. As inputs propagate upward, CP1's state grows to {1,3,4,5}, CP2's to {1,5,8,9}, and the root CP0 converges to {1,3,4,5,8,9}.]
State Composition Example
CP0 crashes! At that point its state is {1,3}, and its children's states are fs(CP1) = {1,3,4,5} and fs(CP2) = {1,5,8}.
Use f on the children's states to regenerate a computationally-consistent version of the lost state:
fs(CP0) ≡ fs(CP1) ⊕ fs(CP2) = {1,3,4,5,8}
Non-identical to the lost state, but computationally consistent!
State Composition Example
[Figure: processing resumes with the regenerated CP0. The remaining inputs propagate upward, and the root's state again converges to {1,3,4,5,8,9}: the same final result as the failure-free run.]
Reliability Highlights
• Zero-cost TBŌN reliability requirements:
  • Associative/commutative filter function
  • Filter state and output have the same representation, or
  • A known mapping from the filter state representation to the output form
  • Filter function used for regeneration
• Many computations meet these requirements
Other Issues
• Compensating for lost messages
  • Use computational state to compensate
  • Idempotent/non-idempotent computations
• Other state regeneration mechanisms
  • Decomposition
• Failure detection
• Tree reconstruction
• Evaluation of the recovery process
MRNet References
• Arnold, Pack, and Miller, “Tree-based Overlay Networks for Scalable Applications”, Workshop on High-Level Parallel Programming Models and Supportive Environments, April 2006.
• Roth and Miller, “The Distributed Performance Consultant and the Sub-Graph Folding Algorithm: On-line Automated Performance Diagnosis on Thousands of Processes”, Principles and Practice of Parallel Programming, March 2006.
• Schulz et al., “Scalable Dynamic Binary Instrumentation for Blue Gene/L”, Workshop on Binary Instrumentation and Applications, September 2005.
• Roth, Arnold, and Miller, “Benchmarking the MRNet Distributed Tool Infrastructure: Lessons Learned”, 2004 High-Performance Grid Computing Workshop, April 2004.
• Roth, Arnold, and Miller, “MRNet: A Software-Based Multicast/Reduction Network for Scalable Tools”, SC 2003, November 2003.
Summary
• The TBŌN model is suitable for many types of tools, applications, and algorithms
• Future work:
  • Evaluation of reliability mechanisms (coming real soon!)
  • Performance modeling to support topology decisions
  • TBŌNs on emerging HPC networks and technologies
  • Other application areas: GIS, bioinformatics, data mining, …
Funding Acknowledgements
• This research is sponsored in part by the National Science Foundation under Grant EIA-0320708.
• This research is also sponsored in part by the Office of Mathematical, Information, and Computational Sciences, Office of Science, U.S. Department of Energy, under Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.