
Performance Technology for Complex Parallel Systems Part 2 – Complexity Scenarios Sameer Shende


Presentation Transcript


  1. Performance Technology for Complex Parallel Systems Part 2 – Complexity Scenarios Sameer Shende

  2. Goals • Explore performance analysis issues in different parallel computing and programming contexts • Demonstrate TAU’s usage in different parallel application case studies • Explore ways to bridge the semantic gap between entities that tools present and the user’s abstractions • Highlight TAU performance mapping API

  3. Complexity Scenarios • Message passing computation • Observe message communication events • Associate messaging events with program events • SPMD applications with multiple processes • SIMPLE hydrodynamics application (C, MPI) • Multi-threaded computation • (Abstract) thread-based performance measurement • Multi-threaded parallel execution • Asynchronous runtime system scheduling • Multi-threading performance analysis in Java

  4. Complexity Scenarios (continued) • Mixed-mode parallel computation • Portable shared memory and message passing APIs • Integrate messaging events with multi-threading events • OpenMP + MPI, Java + MPI • Object-oriented programming and C++ • Object-based performance analysis • Performance measurement of template-derived code • Array classes and expression transformation • Hierarchical parallel software and module composition • Multi-level software framework and work scheduling • Module-specific performance mapping

  5. Strategies for Empirical Performance Evaluation • Empirical performance evaluation as a series of performance experiments • Experiment trials describing instrumentation and measurement requirements • Where/When/How axes of empirical performance space • where are performance measurements made in the program • when is performance instrumentation done • how are performance measurement/instrumentation techniques chosen • Strategies for achieving flexibility and portability goals • Limited performance methods restrict evaluation scope • Non-portable methods force use of different techniques • Integration and combination of strategies

  6. Multi-Level Instrumentation in TAU

  7. Multi-Level Instrumentation • Uses multiple instrumentation interfaces • Shares information: cooperation between interfaces • Taps information at multiple levels • Provides selective instrumentation at each level • Targets a common performance model • Presents a unified view of execution

  8. SIMPLE Performance Analysis • SIMPLE hydrodynamics benchmark • C code with MPI message communication • Multiple instrumentation methods • source-to-source translation (PDT) • MPI wrapper library level instrumentation (PMPI) • pre-execution binary instrumentation (DyninstAPI) • Alternative measurement strategies • statistical profiles of software actions • statistical profiles of hardware actions (PCL, PAPI) • program event tracing • choice of time source • gettimeofday, high-res physical, CPU, process virtual

  9. SIMPLE Source Instrumentation (Preprocessed)

     int compute_heat_conduction(
         double theta_hat[X][Y], double deltat,
         double new_r[X][Y], double new_z[X][Y],
         double new_alpha[X][Y], double new_rho[X][Y],
         double theta_l[X][Y], double Gamma_k[X][Y],
         double Gamma_l[X][Y])
     {
       TAU_PROFILE("int compute_heat_conduction(double (*)[259], double, "
                   "double (*)[259], double (*)[259], double (*)[259], "
                   "double (*)[259], double (*)[259], double (*)[259], "
                   "double (*)[259])", " ", TAU_USER);
       …
     }

  • Similarly, for all other routines in the SIMPLE program

  10. MPI Library Instrumentation (MPI_Send)

     int MPI_Send(…)
     {
       int returnVal, typesize;
       TAU_PROFILE_TIMER(tautimer, "MPI_Send()", " ", TAU_MESSAGE);
       TAU_PROFILE_START(tautimer);
       if (dest != MPI_PROC_NULL) {
         PMPI_Type_size(datatype, &typesize);
         TAU_TRACE_SENDMSG(tag, dest, typesize * count);
       }
       returnVal = PMPI_Send(buf, count, datatype, dest, tag, comm);
       TAU_PROFILE_STOP(tautimer);
       return returnVal;
     }

  11. MPI Library Instrumentation (MPI_Recv)

     int MPI_Recv(…)
     {
       int returnVal, size;
       TAU_PROFILE_TIMER(tautimer, "MPI_Recv()", " ", TAU_MESSAGE);
       TAU_PROFILE_START(tautimer);
       returnVal = PMPI_Recv(buf, count, datatype, src, tag, comm, status);
       if (src != MPI_PROC_NULL && returnVal == MPI_SUCCESS) {
         PMPI_Get_count(status, MPI_BYTE, &size);
         TAU_TRACE_RECVMSG(status->MPI_TAG, status->MPI_SOURCE, size);
       }
       TAU_PROFILE_STOP(tautimer);
       return returnVal;
     }

  12. Multi-Level Instrumentation (Profiling) • [Figure: per-process profiles for four processes and a global routine profile]

  13. Multi-Level Instrumentation (Tracing) • No modification of source instrumentation! • [Figure: trace view with TAU performance groups]

  14. Dynamic Instrumentation of SIMPLE • Uses DyninstAPI for runtime code patching • Mutator loads measurement library, instruments mutatee • one mutator (tau_run) per executable image • mpirun -np <n> tau.shell
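For reference, a rough sketch of what a tau_run-style mutator does with the DyninstAPI BPatch interface. The names follow Dyninst's documented API, but the library name, error handling, and overall shape are illustrative assumptions, not TAU's actual tau_run source:

     #include "BPatch.h"
     #include "BPatch_process.h"

     BPatch bpatch;                       // one BPatch object per mutator

     int main(int argc, char *argv[]) {
       // Create the mutatee (e.g., the SIMPLE executable) under control.
       BPatch_process *proc =
           bpatch.processCreate(argv[1], (const char **)(argv + 1));

       // Load the TAU measurement library into the mutatee's address space
       // (library name illustrative).
       proc->loadLibrary("libTAU.so");

       // ... walk the image and insert TAU timer start/stop snippets at
       //     function entry/exit points -- the step tau_run automates ...

       proc->continueExecution();         // run the instrumented program
       while (!proc->isTerminated())
         bpatch.waitForStatusChange();
       return 0;
     }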

  15. Multi-Threading Performance Measurement • General issues • Thread identity and per-thread data storage • Performance measurement support and synchronization • Fine-grained parallelism • different forms and levels of threading • greater need for efficient instrumentation • TAU general threading and measurement model • Common thread layer and measurement support • Interface to system specific libraries (reg, id, sync) • Target different thread systems with core functionality • Pthreads, Windows, Java, SMARTS, Tulip, OpenMP
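As a concrete illustration of the common thread layer, a minimal sketch of registering a Pthread with TAU (TAU_REGISTER_THREAD is TAU's thread-registration macro; the worker routine is illustrative):

     #include <pthread.h>
     #include <TAU.h>

     // Each new thread registers itself so TAU can allocate a thread id and
     // per-thread profile storage before any timers fire on that thread.
     void *worker(void *arg) {
       TAU_REGISTER_THREAD();
       TAU_PROFILE("worker()", " ", TAU_DEFAULT);
       /* ... thread work ... */
       return NULL;
     }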

  16. Java Multi-Threading Performance (Test Case) • Profile and trace Java (JDK 1.2+) applications • Observe user-level and system-level threads • Observe events for different Java packages • /lang, /io, /awt, … • Test application • SciVis, NPAC, Syracuse University

     % ./configure -jdk=<dir_where_jdk_is_installed>
     % setenv LD_LIBRARY_PATH $LD_LIBRARY_PATH\:<taudir>/<arch>/lib
     % java -XrunTAU svserver

  17. TAU Profiling of Java Application (SciVis) • 24 threads of execution! • [Figure: profile for each Java thread; captures events for different Java packages; global routine profile]

  18. TAU Tracing of Java Application (SciVis) • [Figure: performance groups; timeline display; parallelism view]

  19. Vampir Dynamic Call Tree View (SciVis) • [Figure: per-thread call tree; expanded call tree; annotated performance]

  20. Virtual Machine Performance Instrumentation • Integrate performance system with VM • Captures robust performance data (e.g., thread events) • Maintain features of environment • portability, concurrency, extensibility, interoperation • Allow use in optimization methods • JVM Profiling Interface (JVMPI) • Generation of JVM events and hooks into JVM • Profiler agent (TAU) loaded as shared object • registers events of interest and address of callback routine • Access to information on dynamically loaded classes • No need to modify Java source, bytecode, or JVM
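A minimal sketch of the JVMPI agent mechanism just described (the event-handler bodies are elided; TAU's actual agent does considerably more):

     #include <jvmpi.h>

     static JVMPI_Interface *jvmpi;

     // Callback the JVM invokes for each enabled event.
     static void NotifyEvent(JVMPI_Event *event) {
       switch (event->event_type) {
         case JVMPI_EVENT_METHOD_ENTRY:  /* start a TAU timer */   break;
         case JVMPI_EVENT_METHOD_EXIT:   /* stop the TAU timer */  break;
         case JVMPI_EVENT_THREAD_START:  /* register the thread */ break;
         default: break;
       }
     }

     // Called when the JVM loads the agent (java -XrunTAU ...).
     extern "C" JNIEXPORT jint JNICALL
     JVM_OnLoad(JavaVM *jvm, char *options, void *reserved) {
       if (jvm->GetEnv((void **)&jvmpi, JVMPI_VERSION_1) < 0)
         return JNI_ERR;
       jvmpi->NotifyEvent = NotifyEvent;                    // register callback
       jvmpi->EnableEvent(JVMPI_EVENT_METHOD_ENTRY, NULL);  // events of interest
       jvmpi->EnableEvent(JVMPI_EVENT_METHOD_EXIT, NULL);
       jvmpi->EnableEvent(JVMPI_EVENT_THREAD_START, NULL);
       return JNI_OK;
     }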

  21. JVMPI Events • Method transition events • Memory events • Heap arena events • Garbage collection events • Class events • Global reference events • Monitor events • Monitor wait events • Thread events • Dump events • Virtual machine events

  22. TAU Java JVM Instrumentation Architecture • [Figure: Java program → JNI / thread API → JVMPI event notification → TAU → profile DB] • Robust set of events • Portability • Access to thread info • Measurement options • Limitations • Overhead • Many events • Event control • No user-defined events

  23. TAU Java Source Instrumentation Architecture • Any code section can be measured • Portability • Measurement options • Profiling, tracing • Limitations • Source access only • Lack of thread information • Lack of node information • [Figure: Java program → TAU.Profile class (init, data, output) → TAU package → JNI C bindings → TAU as dynamic shared object → profile database stored in JVM heap]

  24. Java Source-Level Instrumentation • TAU Java package • User-defined events • TAU.Profile class for new “timers” • Start/Stop • Performance data output at end

  25. Mixed-mode Parallel Programs (OpenMP + MPI) • Portable mixed-mode parallel programming • Multi-threaded shared memory programming • Inter-node message passing • Performance measurement • Access to runtime system and communication events • Associate communication and application events • 2-Dimensional Stommel model of ocean circulation • OpenMP for shared memory parallel programming • MPI for cross-box message-based parallelism • Jacobi iteration, 5-point stencil • Timothy Kaiser (San Diego Supercomputer Center)

  26. Stommel Instrumentation • Use PMPI for message communication events • OpenMP directive instrumentation (see OPARI in Part 3) • a sketch of the instrumented code follows below
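The slide's code sample is not reproduced in the transcript; the following is a minimal sketch of what the instrumented Stommel kernel plausibly looks like. The routine and variable names (do_jacobi, psi, a1..a5, frc) are illustrative assumptions. Note that neither event source requires hand-edited application code: message events come from the PMPI wrapper library (slides 10-11), and the OpenMP directive is rewritten by OPARI to emit region events.

     #include <mpi.h>

     /* One Jacobi sweep (5-point stencil) over this rank's rows. OPARI
        rewrites the directive below to insert POMP measurement calls, so
        TAU sees parallel-region enter/exit events (see Part 3). */
     void do_jacobi(double **psi, double **new_psi,
                    double a1, double a2, double a3, double a4, double a5,
                    double **frc, int i1, int i2, int j1, int j2)
     {
       #pragma omp parallel for
       for (int i = i1; i <= i2; i++)
         for (int j = j1; j <= j2; j++)
           new_psi[i][j] = a1 * psi[i + 1][j] + a2 * psi[i - 1][j]
                         + a3 * psi[i][j + 1] + a4 * psi[i][j - 1]
                         - a5 * frc[i][j];
     }

     /* Ghost-row exchange: these calls are intercepted at link time by
        TAU's PMPI wrappers (slides 10-11), which record the send/receive
        events that pair up in the trace view. */
     void exchange_ghost_rows(double **psi, int i1, int i2, int ny,
                              int up, int down)
     {
       MPI_Status status;
       MPI_Send(&psi[i2][1], ny, MPI_DOUBLE, up, 1, MPI_COMM_WORLD);
       MPI_Recv(&psi[i1 - 1][1], ny, MPI_DOUBLE, down, 1,
                MPI_COMM_WORLD, &status);
     }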

  27. OpenMP + MPI Ocean Modeling (Trace) • [Figure: trace with thread message pairing and integrated OpenMP + MPI events]

  28. OpenMP + MPI Ocean Modeling (HW Profile) • [Figure: hardware-counter profile with integrated OpenMP + MPI events; FP instructions]

     % configure -papi=../packages/papi -openmp -c++=pgCC -cc=pgcc -mpiinc=../packages/mpich/include -mpilib=../packages/mpich/lib

  29. Mixed-mode Parallel Programs (Java + MPI) • Explicit message communication libraries for Java • MPI performance measurement • MPI profiling interface - link-time interposition library • TAU wrappers in native profiling interface library • Send/Receive events and communication statistics • mpiJava (Syracuse, JavaGrande, 1999) • Java wrapper package • JNI C bindings to MPI communication library • Dynamic shared object (libmpijava.so) loaded in JVM • prunjava calls mpirun to distribute program to nodes • Contrast to Java RMI-based schemes (MPJ, CCJ)

  30. TAU Java Instrumentation Architecture • [Figure: Java program → mpiJava package → JNI → TAU wrapper / MPI profiling interface → native MPI library; TAU → profile DB] • No source instrumentation • Portability • Measurement options • Limitations • MPI events only • No mpiJava events • Node info only • No thread info

  31. Java Multi-threading and Message Passing • Java threads and MPI communications • Shared-memory multi-threading events • Message communications events • Unified performance measurement and views • Integration of performance mechanisms • Integrated association of performance events • thread event and communication events • user-defined (source-level) performance events • JVM events • Requires instrumentation and measurement cooperation

  32. Instrumentation and Measurement Cooperation • Problem • JVMPI doesn’t see MPI events (e.g., rank (node)) • MPI profiling interface doesn’t see threads • Source instrumentation doesn’t see either! • Need cooperation between interfaces • MPI exposes rank and gets thread information • JVMPI exposes thread information and gets rank • Source instrumentation gets both • Post-mortem matching of sends and receives • Selective instrumentation • java -XrunTAU:exclude=java/io,sun

  33. TAU Java Instrumentation Architecture • [Figure: Java program → mpiJava package / TAU package → JNI → MPI profiling interface → TAU wrapper → native MPI library; JVMPI event notification and thread API → TAU → profile DB]

  34. Parallel Java Game of Life (Trace) • Integrated event tracing • [Figure: merged trace visualization in a Vampir display; node/process grouping, thread message pairing, multi-level event grouping]

  35. Integrated Performance View (Callgraph) • Source level • MPI level • Java packages level

  36. Object-Oriented Programming and C++ • Object-oriented programming is based on concepts of abstract data types, encapsulation, inheritance, … • Languages (such as C++) provide support for implementing domain-specific abstractions in the form of class libraries • Furthermore, generic programming mechanisms allow for efficient coding of abstractions and compile-time transformations • This creates a semantic gap between the transformed code and what the user expects (and describes in source code) • Need a mechanism to expose the nature of high-level abstract computation to the performance tools • Map low-level performance data to high-level semantics

  37. C++ Template Instrumentation (Blitz++, PETE) • High-level objects • Array classes • Templates (Blitz++) • Optimizations • Array processing • Expressions (PETE) • Relate performance data to high-level statement • Complexity of template evaluation • [Figure: array expressions]

  38. Standard Template Instrumentation Difficulties • Instantiated templates result in mangled identifiers • Standard profiling techniques/tools are deficient • Integrated with proprietary compilers • Specific system platforms and programming models • [Figure: profile listing uninterpretable mangled routine names]

  39. Blitz++ Library Instrumentation • Expression templates embed the form of the expression in a template name • In Blitz++, the library describes the structure of the expression template to the profiling toolkit • Allows for pretty printing the expression templates • Expression: B + C - 2.0 * D • Template name: BinOp<Add, B, BinOp<Subtract, C, BinOp<Multiply, Scalar<2.0>, D>>>

  40. Blitz++ Library Instrumentation (example) • a sketch of the TAU instrumentation inside the library follows below
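The transcript does not preserve the slide's code; below is a hedged sketch of how such library-level instrumentation can look inside a Blitz++-style array assignment operator. prettyPrint and prettyPrintFormat follow Blitz++'s interface; the surrounding Array class and evaluateExpr are illustrative, not the actual Blitz++/TAU sources:

     #include <string>
     #include <TAU.h>

     template <typename T_expr>
     Array& Array::operator=(const _bz_ArrayExpr<T_expr>& expr)
     {
       std::string name;
       prettyPrintFormat fmt;
       expr.prettyPrint(name, fmt);      // e.g., "B + C - 2.0 * D"

       // The static timer inside TAU_PROFILE_TIMER is created once per
       // template instantiation, i.e., once per expression type -- the
       // granularity the profile on slide 41 shows.
       TAU_PROFILE_TIMER(t, name.c_str(), " ", TAU_USER);
       TAU_PROFILE_START(t);
       evaluateExpr(*this, expr);        // normal Blitz++ evaluation (elided)
       TAU_PROFILE_STOP(t);
       return *this;
     }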

  41. TAU Instrumentation and Profiling for C++ • [Figure: Racy profile of expression types; performance data presented with respect to high-level array expression types; graphical pprof]

  42. Hierarchical Parallel Software (C-SAFE/Uintah) • Center for Simulation of Accidental Fires & Explosions • ASCI Level 1 center • PSE for multi-model simulation of high-energy explosions • Uintah parallel programming framework • Component-based and object-parallel • Multi-model task-graph scheduling and execution • Shared-memory (thread), distributed-memory (MPI), and mixed-model parallelization • Integrated with SCIRun framework • TAU integration in Uintah • Mapping: task object → grid object → patch object

  43. Task Execution in Uintah Parallel Scheduler • [Figure: profile; task execution time dominates (what task?); MPI communication overheads (where?)]

  44. Task Computation and Mapping • Task computations on individual particles generate work packets that are scheduled and executed • Work packets that “interpolate particles to grid” • Assign semantic name to a task abstraction • SerialMPM::interpolateParticleToGrid • Partition execution time among different tasks • Need to relate the performance of each particle computation (work packet) to the associated task • Map TAU timer object to task (abstract) computation • Further partition the performance data along different domain-specific axes (task, patches, …) • Helps bridge the semantic gap!

  45. Mapping Instrumentation (example) • a sketch of the mapping instrumentation follows below
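The transcript omits the slide's code; the sketch below is reconstructed along the lines of TAU's mapping API as applied in the Uintah scheduler. The TAU_MAPPING_* macros are TAU's mapping interface; the scheduler code around them is abbreviated and partly assumed:

     // Inside the scheduler's task-execution loop (abbreviated).
     void MPIScheduler::execute(Task* task)
     {
       // Create a mapping entry keyed by the task's name (first time only).
       TAU_MAPPING_CREATE(task->getName(), "[MPIScheduler::execute()]",
                          (TauGroup_t)(void*)task->getName().c_str(),
                          task->getName(), 0);

       // Look up the timer object associated with this task abstraction.
       TAU_MAPPING_OBJECT(tautimer)
       TAU_MAPPING_LINK(tautimer, (TauGroup_t)(void*)task->getName().c_str());

       // Charge the work packet's execution time to the mapped task.
       TAU_MAPPING_PROFILE_TIMER(doitprofiler, tautimer, 0)
       TAU_MAPPING_PROFILE_START(doitprofiler, 0)
       task->doit();                     // execute the work packet
       TAU_MAPPING_PROFILE_STOP(0)
     }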

  46. Work Packet-to-Task Mapping (Profile)

  47. Work Packet-to-Task Mapping (Trace) • Work packet computation events colored by task type • Distinct phases of computation can be identified based on task

  48. Statistics for Relative Task Contributions

  49. Comparing Uintah Traces for Scalability Analysis
