
An Evaluation of Current High-Performance Networks


  1. An Evaluation of Current High-Performance Networks Christian Bell, Dan Bonachea, Yannick Cote, Jason Duell, Paul Hargrove, Parry Husbands, Costin Iancu, Michael Welcome, Kathy Yelick Lawrence Berkeley National Lab & U.C. Berkeley http://upc.lbl.gov

  2. Motivation
     • Benchmark a variety of current high-speed networks
     • Measure latency and software overhead, not just bandwidth
     • Does one-sided communication provide advantages over two-sided MPI?
     • Global Address Space (GAS) languages
       • UPC, Titanium (Java), Co-Array Fortran
       • Small message performance (8 bytes)
       • Support sparse/irregular/adaptive programs
       • Programming model: incremental optimization
       • Overlapping messages can hide the latency

  3. Systems Evaluated

  4. Modified LogGP Model
     [Diagram: two LogGP timelines between P0 and P1 (with and without overlap), showing o_send, transport latency L, and o_recv]
     • Observed: send and receive overheads can overlap, so L can be negative
     • EEL: end-to-end latency (used instead of transport latency L)
     • g: minimum time between small message sends
     • G: additional gap per byte for larger messages
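  Read as a cost model (my formulation from the slide's definitions; the talk shows a diagram rather than equations, and k, n, and T are notation introduced here), a single small message costs EEL end to end, while a flood of k messages of n bytes each is limited by the injection gap rather than the end-to-end path:

      \[
        T_{\text{single}} \approx \mathrm{EEL},
        \qquad
        T_{\text{flood}}(k, n) \approx \mathrm{EEL} + (k - 1)\,(g + nG)
      \]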

  5. Microbenchmarks
     • Ping-pong test: measures EEL (end-to-end latency)
     • Flood test: measures gap (g/G)
     • CPU overlap test: measures software overheads
     [Diagram: send timelines for CPU Test 1, CPU Test 2, and the Flood Test, showing o_send, gap, and available CPU time]
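  A minimal sketch of the ping-pong test, written against MPI for concreteness (an assumption; the talk benchmarked each network's native API): two processes bounce an 8-byte message back and forth, and EEL is half the average round-trip time.

      /* Hypothetical ping-pong sketch: EEL = average round-trip time / 2.
         Run with two ranks, e.g. "mpirun -np 2 ./pingpong". */
      #include <mpi.h>
      #include <stdio.h>

      #define ITERS     10000
      #define MSG_BYTES 8

      int main(int argc, char **argv) {
          char buf[MSG_BYTES] = {0};
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          MPI_Barrier(MPI_COMM_WORLD);
          double t0 = MPI_Wtime();
          for (int i = 0; i < ITERS; i++) {
              if (rank == 0) {
                  MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                  MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
              } else if (rank == 1) {
                  MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
                  MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
              }
          }
          double t1 = MPI_Wtime();
          if (rank == 0)  /* half the average round trip, in microseconds */
              printf("EEL ~ %.3f us\n", (t1 - t0) / ITERS / 2 * 1e6);
          MPI_Finalize();
          return 0;
      }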

  6. Latencies for 8-byte ‘puts’

  7. 8-byte ‘put’ Latencies with Software Overheads

  8. Gap varies with message clustering
     Clustering messages can both use idle cycles and reduce the number of idle cycles that need to be filled.
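  A sketch of how a flood test might vary clustering, again using MPI nonblocking calls as a stand-in for the native APIs measured in the talk: issue Q sends back to back, wait for the batch, and report the effective per-message gap as the cluster size Q grows.

      /* Hypothetical clustered-flood sketch: effective gap vs. cluster size Q.
         Run with two ranks; extra ranks idle at the barriers. */
      #include <mpi.h>
      #include <stdio.h>

      #define TOTAL     4096   /* messages per cluster-size setting */
      #define MAX_Q     64
      #define MSG_BYTES 8

      int main(int argc, char **argv) {
          char buf[MAX_Q][MSG_BYTES] = {{0}};
          MPI_Request reqs[MAX_Q];
          int rank;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          for (int q = 1; q <= MAX_Q; q *= 2) {
              MPI_Barrier(MPI_COMM_WORLD);
              double t0 = MPI_Wtime();
              for (int i = 0; i < TOTAL; i += q) {
                  for (int j = 0; j < q; j++) {  /* post a cluster of q ops */
                      if (rank == 0)
                          MPI_Isend(buf[j], MSG_BYTES, MPI_CHAR, 1, 0,
                                    MPI_COMM_WORLD, &reqs[j]);
                      else if (rank == 1)
                          MPI_Irecv(buf[j], MSG_BYTES, MPI_CHAR, 0, 0,
                                    MPI_COMM_WORLD, &reqs[j]);
                  }
                  if (rank <= 1)                 /* drain the whole cluster */
                      MPI_Waitall(q, reqs, MPI_STATUSES_IGNORE);
              }
              double t1 = MPI_Wtime();
              if (rank == 0)
                  printf("cluster=%2d  effective gap = %.3f us/msg\n",
                         q, (t1 - t0) / TOTAL * 1e6);
          }
          MPI_Finalize();
          return 0;
      }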

  9. Potential for CPU overlap during clustered message sends
     Hardware support for one-way communication provides more opportunity for computational overlap.
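  A rough illustration of the point, using MPI-2 one-sided operations as a generic stand-in (an assumption; the talk's measurements used each network's native put): the initiator starts a put, computes while the data may be in flight, and only then synchronizes. Whether real overlap occurs depends on the hardware support the slide refers to.

      /* Hypothetical one-sided overlap sketch: put, compute, synchronize.
         Run with at least two ranks. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          double src = 3.14, dst = 0.0, acc = 0.0;
          int rank;
          MPI_Win win;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* Expose one double on every rank. */
          MPI_Win_create(&dst, sizeof(double), sizeof(double),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &win);

          MPI_Win_fence(0, win);
          if (rank == 0)                      /* start a put to rank 1... */
              MPI_Put(&src, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
          for (int i = 0; i < 1000000; i++)   /* ...and compute meanwhile */
              acc += i * 1e-9;
          MPI_Win_fence(0, win);              /* data is at the target here */

          if (rank == 1)
              printf("got %.2f (acc = %.3f)\n", dst, acc);
          MPI_Win_free(&win);
          MPI_Finalize();
          return 0;
      }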

  10. Fixed message cost (g) vs. per-byte cost (G)

  11. “Large” Messages
     There is a factor-of-6 spread in the minimum size needed for a message to count as “large” (large = bandwidth dominates the fixed per-message cost).
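  With the per-message cost of roughly g + nG from the model above (again my derivation from the slide-4 definitions, not one shown in the talk), bandwidth dominates once the per-byte term outweighs the fixed term:

      \[
        n_{\text{large}} \approx \frac{g}{G}
      \]

  A factor-of-6 spread in this threshold then reflects how differently the networks balance their fixed cost g against their per-byte cost G.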

  12. Small message performance over time
     Software send overhead for 8-byte messages has not improved much over the years, even in absolute terms.

  13. Conclusion
     • Latency and software overhead of messages vary widely among today’s HPC networks
     • This affects the ability to effectively mask communication latency, with a large effect on GAS language viability
       • Software overhead especially, since latency can be hidden
     • These parameters have historically been overlooked in benchmarks and vendor evaluations
       • Hopefully this will change
       • Recent discussions with vendors are promising
       • Incorporation into standard benchmarks would be welcome
     http://upc.lbl.gov
