This talk discusses the motivation for PGAS performance tools, the GASP interface, and the Parallel Performance Wizard. It also explores future directions for improving PGAS performance tool support.
Performance Analysis Tools for Partitioned Global-Address-Space Programming Models
Adam Leko1, Hung-Hsun Su1, Dan Bonachea2, Max Billingsley III1, Alan D. George1
1 Electrical & Computer Engineering Dept., University of Florida
2 Computer Science Dept., UC Berkeley
Outline of Talk
• Motivation for PGAS performance tools and a tool interface
• High-level overview of the GAS tool interface, GASP
• Overview and demonstration of Parallel Performance Wizard
• Conclusions & Future Work
Performance Tools Motivation
• PGAS models
  • Offer a simple, convenient way to express parallelism
  • Require greater compiler & runtime sophistication
• printf()-style performance tuning doesn't cut it
  • Doesn't scale to large runs (in time or parallelism)
  • Not enough information available to the programmer
• Need for good PGAS performance tools
  • Good tools boost user productivity
  • Currently, poor support from existing tools
    • PGAS models are still relatively new, but…
    • Tool support requires tight interaction with PGAS implementations
    • Implementations of the same language can vary greatly
• Solution: a generalized performance tool interface (GASP)
GASP Overview
• Global Address Space Performance (GASP) interface
  • Event-based interface
    • GAS compiler/runtime communicates with performance tools using a standard interface
    • Performance tool is notified when particular actions happen at runtime
  • Implementation-agnostic
• Notification structure
  • Function "callback" into tool-developer code
  • Uses a single function name (gasp_event_notify)
  • Notifications can come from the compiler/runtime (system events) or from user code (user events)
  • Allows calls back into the source language/library to make model-specific queries
• Flexible instrumentation methods supported
• Formal specification available at http://www.hcs.ufl.edu/upc/gasp/
Parallel Performance Wizard
• New performance tool designed for PGAS languages
  • UPC and SHMEM in particular
• GASP interface developed to meet the needs of this work
• Beta version available: http://www.hcs.ufl.edu/ppw/
  • Java WebStart version of the GUI available for immediate testing
PPW + GASP Preliminary Overheads
UPC version of the NAS NPB benchmarks (class "B") on Berkeley UPC v2.3.16 (32-node Quadrics QsNetII 2.0 GHz Opteron cluster)
What Does This Mean for Users?
• Paula the PGAS Programmer just wrote an application in UPC
  • Performance issues; wants to know why
• Before: printf() & grep / perl
• Now: PPW
• Demo time!
Vendor Support
• UPC
  • Berkeley UPC
    • GASP implemented within the runtime library
    • Supported with Berkeley UPC 2.3.16
    • --enable-profile configure-time option
  • Other UPC implementations
    • GASP support pending
• Other PGAS model implementations
  • Titanium & SHMEM GASP support is in the pipeline
  • Spec definitions for other languages/libraries forthcoming
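As a sketch of what enabling GASP looks like in practice: the --enable-profile configure-time option comes from the slide above, while the upcc/upcrun invocations, the -profile compile flag, the application file names, and the node count are illustrative assumptions rather than confirmed commands.

```shell
# Build Berkeley UPC (2.3.16+) with GASP support enabled;
# --enable-profile is the configure-time option named on the slide.
./configure --enable-profile
make && make install

# Compile the application with GASP instrumentation turned on
# (-profile flag and file names are assumptions for illustration).
upcc -profile app.upc -o app

# Run as usual; GASP events are delivered to the attached tool
# (e.g. PPW), which writes its performance data files.
upcrun -n 32 ./app
```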
Conclusions & Future Directions
• Conclusions
  • GASP interface: initial implementation overhead results are promising (validate the approach)
  • PPW shows the usefulness of fine-grained performance data
  • Interface can be helpful for advanced end-users
• Future directions
  • Help add GASP support to other PGAS language implementations
  • Extend GASP to support other models (CAF, …)
  • Help other tools take advantage of GASP support
• For more information on PPW and GASP, please see
  • http://www.hcs.ufl.edu/upc/gasp/
  • http://www.hcs.ufl.edu/ppw/
• Beta testers encouraged for PPW!