On-line Automated Performance Diagnosis on Thousands of Processors

Philip C. Roth
Future Technologies Group, Computer Science and Mathematics Division, Oak Ridge National Laboratory
Paradyn Research Group, Computer Sciences Department, University of Wisconsin-Madison
High Performance Computing Today
• Large parallel computing resources
  • Tightly coupled systems (Earth Simulator, BlueGene/L, XT3)
  • Clusters (LANL Lightning, LLNL Thunder)
  • Grid
• Large, complex applications
  • ASCI Blue Mountain job sizes (2001)
    • 512 CPUs: 17.8%
    • 1024 CPUs: 34.9%
    • 2048 CPUs: 19.9%
• Achieving only a small fraction of peak performance is the rule
Achieving Good Performance
• Need to know what to tune and where to tune it
• Diagnosis and tuning tools are critical for realizing the potential of large-scale systems
• On-line automated tools are especially desirable
  • Manual tuning is difficult
    • Finding interesting data in a large data volume
    • Understanding application, OS, and hardware interactions
  • Automated tools require minimal user involvement; expertise is built into the tool
  • On-line automated tools can adapt dynamically
    • Dynamic control over data volume
    • Useful results from a single run
• But: tools that work well in small-scale environments often don't scale
Barriers to Large-Scale Performance Diagnosis
• Managing performance data volume
• Communicating efficiently between distributed tool components
• Making scalable presentation of data and analysis results
[figure: typical tool organization; a single tool front-end connected directly to tool daemons d0 … dP-1, which monitor application processes a0 … aP-1]
Our Approach for Addressing These Scalability Barriers
• MRNet: multicast/reduction infrastructure for scalable tools
• Distributed Performance Consultant: strategy for efficiently finding performance bottlenecks in large-scale applications
• Sub-Graph Folding Algorithm: algorithm for effectively presenting bottleneck diagnosis results for large-scale applications
Outline
• Performance Consultant
• MRNet
• Distributed Performance Consultant
• Sub-Graph Folding Algorithm
• Evaluation
• Summary
Performance Consultant
• Automated performance diagnosis
• Search for application performance problems
  • Start with global, general experiments (e.g., test CPUbound across all processes)
  • Collect performance data using dynamic instrumentation
    • Collect only the data desired
    • Remove the instrumentation when it is no longer needed
  • Make a decision about the truth of each experiment
  • Refine the search: create more specific experiments based on "true" experiments (those whose data is above a user-configurable threshold); see the sketch below
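The following is a minimal sketch of that search loop, with hypothetical names (Experiment, runExperiment, refine, and the threshold value are illustrative, not Paradyn's actual API):

```cpp
#include <iostream>
#include <queue>
#include <string>
#include <vector>

// One experiment: a hypothesis tested at a focus.
struct Experiment {
    std::string hypothesis;   // e.g., "CPUbound"
    std::string focus;        // e.g., whole app, one host, one function
};

// Stub: in Paradyn this inserts dynamic instrumentation, samples the
// metric for a while, removes the instrumentation, and returns the value.
double runExperiment(const Experiment&) { return 0.0; }

// Stub: in Paradyn this generates more specific experiments, e.g.,
// narrowing CPUbound from "all processes" to one host or one function.
std::vector<Experiment> refine(const Experiment&) { return {}; }

// The search loop: breadth-first refinement of experiments that test true.
void diagnose(double threshold) {
    std::queue<Experiment> pending;
    pending.push({"CPUbound", "all processes"});     // global, general start

    while (!pending.empty()) {
        Experiment e = pending.front();
        pending.pop();
        if (runExperiment(e) > threshold) {          // experiment is "true"
            std::cout << e.hypothesis << " at " << e.focus << '\n';
            for (const Experiment& child : refine(e))
                pending.push(child);                 // more specific foci
        }
    }
}

int main() { diagnose(0.2); }   // threshold value is illustrative
```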
Performance Consultant
[figure, built up over three slides: an example search across hosts c001.cs.wisc.edu through c128.cs.wisc.edu and processes myapp{367}, myapp{4287}, …, myapp{27549}. The search begins with a global CPUbound experiment, refines to per-host and per-process experiments, and descends each process's call graph from main to Do_row, Do_col, and Do_mult; the tool front-end runs on cham.cs.wisc.edu]
Outline
• Performance Consultant
• MRNet
• Distributed Performance Consultant
• Sub-Graph Folding Algorithm
• Evaluation
• Summary
MRNet: Multicast/Reduction Overlay Network
• Parallel tool infrastructure providing:
  • Scalable multicast
  • Scalable data synchronization and transformation
• Network of processes between tool front-end and back-ends
• Useful for parallelizing and distributing tool activities
  • Reduce latency
  • Reduce computation and communication load at tool front-end
• Joint work with Dorian Arnold (University of Wisconsin-Madison)
(a front-end usage sketch follows below)
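To make the MRNet layer concrete, here is a minimal front-end sketch loosely following MRNet's published C++ API (exact signatures vary across MRNet releases; the topology file and back-end binary names are hypothetical, and the back-ends are assumed to reply with a single integer). It broadcasts one request to all daemons and receives a single reply, summed inside the overlay by a built-in filter:

```cpp
#include "mrnet/MRNet.h"
using namespace MRN;

int main() {
    const char* topology = "topology.txt";   // hypothetical tree layout file
    const char* backend  = "./tool_daemon";  // hypothetical back-end binary

    Network* net = Network::CreateNetworkFE(topology, backend, nullptr);
    if (net == nullptr || net->has_Error())
        return 1;

    // One stream to all back-ends; integer replies are summed by a
    // built-in transformation filter as they flow up the tree.
    Communicator* comm = net->get_BroadcastCommunicator();
    Stream* stream = net->new_Stream(comm, TFILTER_SUM, SFILTER_WAITFORALL);

    int tag = FirstApplicationTag;           // first tag free for tool use
    stream->send(tag, "%d", 1);              // multicast the request down
    stream->flush();

    PacketPtr pkt;
    stream->recv(&tag, pkt);                 // one already-reduced packet
    int total = 0;
    pkt->unpack("%d", &total);               // e.g., a count across daemons

    delete net;                              // tears down the overlay
    return 0;
}
```

The key point of the design: the front-end sees one packet per request wave, not one per daemon, no matter how many back-ends the tree has.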
Typical Parallel Tool Organization
[figure: tool front-end connected directly to tool daemons d0 … dP-1, one per node, which monitor application processes a0 … aP-1]
MRNet-based Parallel Tool Organization
[figure: a multicast/reduction network of internal processes, each running filters, interposed between the tool front-end and the tool daemons d0 … dP-1, which monitor application processes a0 … aP-1]
Outline
• Performance Consultant
• MRNet
• Distributed Performance Consultant
• Sub-Graph Folding Algorithm
• Evaluation
• Summary
Performance Consultant: Scalability Barriers
• MRNet can alleviate the scalability problem for global performance data (e.g., CPU utilization across all processes)
• But the front-end still processes all local performance data (e.g., utilization of process 5247 on host mcr398.llnl.gov)
Performance Consultant
[figure: the example search again; with the centralized approach, the front-end on cham.cs.wisc.edu evaluates every host-level and process-level experiment itself]

Distributed Performance Consultant
[figure: the same search, with the per-process sub-searches (main → Do_row, Do_col, Do_mult) highlighted as candidates for moving out of the front-end and into the daemons]
Distributed Performance Consultant: Variants
• Natural steps from the traditional centralized approach (CA)
• Partially Distributed Approach (PDA)
  • Distributed local searches, centralized global search
  • Requires complex instrumentation management
• Truly Distributed Approach (TDA)
  • Distributed local searches only
  • Insight into global behavior comes from combining local search results (e.g., using the Sub-Graph Folding Algorithm); see the sketch after this list
  • Simpler tool design than PDA
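A minimal, self-contained sketch of the TDA control flow, under stated assumptions: all names are hypothetical, the "upstream link" is plain function calls rather than MRNet streams, and the merge step is a stand-in for the Sub-Graph Folding Algorithm described later:

```cpp
#include <iostream>
#include <string>
#include <vector>

// A daemon's local search result, reduced to its set of true experiments.
struct SubGraph { std::vector<std::string> trueExperiments; };

// TDA: each daemon searches only its own processes...
SubGraph runLocalSearch(int daemonId) {
    SubGraph g;
    g.trueExperiments.push_back("CPUbound/main/Do_mult@daemon" +
                                std::to_string(daemonId));
    return g;
}

// ...and global insight emerges only by combining the local results
// (here a trivial concatenation; SGFA does the real folding).
SubGraph merge(const std::vector<SubGraph>& parts) {
    SubGraph all;
    for (const SubGraph& p : parts)
        all.trueExperiments.insert(all.trueExperiments.end(),
                                   p.trueExperiments.begin(),
                                   p.trueExperiments.end());
    return all;
}

int main() {
    std::vector<SubGraph> parts;
    for (int d = 0; d < 4; ++d)            // pretend four daemons
        parts.push_back(runLocalSearch(d));
    for (const auto& e : merge(parts).trueExperiments)
        std::cout << e << '\n';
}
```

Under PDA, by contrast, the front-end would still be driving a global search in parallel with these local ones, which is what forces the complex instrumentation management.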
Distributed Performance Consultant: PDA
[figure: the example search; daemons run the local sub-searches for their own processes while the front-end on cham.cs.wisc.edu still drives the global search (e.g., CPUbound)]
Distributed Performance Consultant: TDA
[figure: no global search; each daemon independently searches its own processes' call graphs (main → Do_row, Do_col, Do_mult)]

Distributed Performance Consultant: TDA
[figure: the daemons' local result sub-graphs are combined by the Sub-Graph Folding Algorithm on their way up to the front-end on cham.cs.wisc.edu]
Outline
• Paradyn and the Performance Consultant
• MRNet
• Distributed Performance Consultant
• Sub-Graph Folding Algorithm
• Evaluation
• Summary
Search History Graph Example
[figure: a search history graph rooted at CPUbound, refining to hosts c33.cs.wisc.edu and c34.cs.wisc.edu and to processes myapp{7624}, myapp{7625}, myapp{1272}, and myapp{1273}; each process's sub-graph descends from main through refinement nodes A–E, with a slightly different shape per process]
Search History Graphs
• The Search History Graph is effective for presenting search-based performance diagnosis results…
• …but it does not scale to a large number of processes, because it shows one sub-graph per process
Sub-Graph Folding Algorithm
• Combines host-specific sub-graphs into composite sub-graphs
• Each composite sub-graph represents a behavioral category among application processes
• Dynamic clustering of processes by qualitative behavior (see the sketch below)
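The following is a minimal sketch of the folding idea, assuming a deliberately simplified representation: a process's result sub-graph is reduced to the set of refinements that tested true, and processes fold together exactly when those sets match. Names and data are illustrative, not Paradyn's implementation:

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

// A process's search result, reduced to its set of true refinements,
// e.g., {"main/Do_row", "main/Do_mult"}.
using Shape = std::set<std::string>;

int main() {
    std::map<std::string, Shape> perProcess = {
        {"c33.cs.wisc.edu/myapp{7624}", {"main/Do_row", "main/Do_mult"}},
        {"c33.cs.wisc.edu/myapp{7625}", {"main/Do_row", "main/Do_mult"}},
        {"c34.cs.wisc.edu/myapp{1272}", {"main/Do_col"}},
    };

    // Fold: group processes whose sub-graphs are qualitatively identical.
    std::map<Shape, std::set<std::string>> folded;
    for (const auto& [proc, shape] : perProcess)
        folded[shape].insert(proc);

    // Each composite sub-graph = one behavioral category of processes.
    for (const auto& [shape, procs] : folded) {
        std::cout << procs.size() << " process(es) share:";
        for (const auto& exp : shape) std::cout << ' ' << exp;
        std::cout << '\n';
    }
}
```

The output has one entry per behavioral category rather than one per process, which is what makes the display scale.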
SGFA: Example
[figure: the per-process sub-graphs from the Search History Graph example fold into a composite sub-graph labeled c*.cs.wisc.edu, myapp{*}; refinements such as D and E that are true in only some processes still appear once in the composite]
SGFA: Implementation
• Custom MRNet filter
• The filter in each MRNet process keeps a folded graph of search results from all reachable daemons
• Updates are periodically sent upstream
• By induction, the filter in the front-end holds the entire folded graph
• Optimization for unchanged graphs (see the filter sketch below)
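Here is a sketch of such a filter, following the general shape of MRNet's documented filter interface (the interface has changed across MRNet releases, the folding logic is reduced to a toy, and FoldedGraph and all names are hypothetical):

```cpp
#include <cstdlib>
#include <string>
#include <vector>
#include "mrnet/MRNet.h"
using namespace MRN;

// Toy stand-in for the real folded-graph state kept at each tree node.
struct FoldedGraph {
    std::string merged;   // illustrative representation only
    bool dirty = false;
    void foldIn(const char* sub) { merged += sub; merged += ';'; dirty = true; }
    bool changedSinceLastSend()  { bool d = dirty; dirty = false; return d; }
    const char* serialize() const { return merged.c_str(); }
};

// MRNet convention: a format string exported alongside the filter.
extern "C" const char* SGFA_filter_format_string = "%s";

extern "C" void SGFA_filter(std::vector<PacketPtr>& pktsIn,
                            std::vector<PacketPtr>& pktsOut,
                            std::vector<PacketPtr>& /*pktsOutReverse*/,
                            void** filterState,
                            PacketPtr& /*configParams*/,
                            const TopologyLocalInfo& /*topoInfo*/) {
    if (*filterState == nullptr)
        *filterState = new FoldedGraph;   // persists across packet waves
    FoldedGraph* graph = static_cast<FoldedGraph*>(*filterState);

    // Fold every child's serialized sub-graph into this node's graph.
    for (PacketPtr& pkt : pktsIn) {
        char* sub = nullptr;
        pkt->unpack("%s", &sub);          // MRNet allocates the string
        graph->foldIn(sub);
        free(sub);
    }

    // The "unchanged graphs" optimization: forward only if something changed.
    if (graph->changedSinceLastSend()) {
        PacketPtr out(new Packet(pktsIn[0]->get_StreamId(),
                                 pktsIn[0]->get_Tag(),
                                 "%s", graph->serialize()));
        pktsOut.push_back(out);
    }
}
```

Because every internal process folds before forwarding, the front-end's filter ends up holding the fold of all daemons below it, which is the induction the slide describes.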
Outline
• Performance Consultant
• MRNet
• Distributed Performance Consultant
• Sub-Graph Folding Algorithm
• Evaluation
• Summary
DPC + SGFA: Evaluation
• Modified Paradyn to perform bottleneck searches using the CA, PDA, or TDA approach
• Modified instrumentation cost tracking to support PDA
  • Track global and per-process instrumentation cost separately
  • Simple fixed-partition policy for scheduling global and local instrumentation (sketched below)
• Implemented the Sub-Graph Folding Algorithm as a custom MRNet filter to support TDA (used by all approaches)
• Instrumented front-end, daemons, and MRNet internal processes to collect CPU and I/O load information
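A minimal sketch of such a fixed-partition policy, with hypothetical names and numbers (the budget, the 50/50 split, and the admit functions are illustrative, not Paradyn's actual accounting):

```cpp
// Toy fixed-partition policy: the tool's instrumentation cost budget is
// split into a fixed global share and a fixed local share, and a new
// experiment is admitted only if its share still has room.
struct CostPolicy {
    double budget      = 0.05;  // hypothetical: allow 5% total perturbation
    double globalShare = 0.5;   // hypothetical: fixed 50/50 partition
    double globalCost  = 0.0;   // current cost of global experiments
    double localCost   = 0.0;   // current cost of local experiments

    bool admitGlobal(double cost) {
        if (globalCost + cost > budget * globalShare) return false;
        globalCost += cost;
        return true;
    }
    bool admitLocal(double cost) {
        if (localCost + cost > budget * (1.0 - globalShare)) return false;
        localCost += cost;
        return true;
    }
};
```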
DPC + SGFA: Evaluation
• su3_rmd
  • QCD pure lattice gauge theory code
  • C, MPI
• Weak scaling scalability study
• LLNL MCR cluster
  • 1152 nodes (1048 compute nodes)
  • Two 2.4 GHz Intel Xeons per node
  • 4 GB memory per node
  • Quadrics Elan3 interconnect (fat tree)
  • Lustre parallel file system
DPC + SGFA: Evaluation
• PDA and TDA: bottleneck searches with up to 1024 processes so far, limited by partition size
• CA: scalability limit at fewer than 64 processes
• Similar qualitative results from all approaches
Summary
• Tool scalability is critical for effective use of large-scale computing resources
• On-line automated performance tools are especially important at large scale
• Our approach:
  • MRNet
  • Distributed Performance Consultant (TDA) plus Sub-Graph Folding Algorithm
References
• P.C. Roth, D.C. Arnold, and B.P. Miller, "MRNet: A Software-Based Multicast/Reduction Network for Scalable Tools," SC 2003, Phoenix, Arizona, November 2003.
• P.C. Roth and B.P. Miller, "The Distributed Performance Consultant and the Sub-Graph Folding Algorithm: On-line Automated Performance Diagnosis on Thousands of Processes," in submission.
• Publications available from http://www.paradyn.org
• MRNet software available from http://www.paradyn.org/mrnet