MPI Verification
Ganesh Gopalakrishnan and Robert M. Kirby
Students: Yu Yang, Sarvani Vakkalanka, Guodong Li, Subodh Sharma, Anh Vo, Michael DeLisi, Geof Sawaya
(http://www.cs.utah.edu/formal_verification)
School of Computing, University of Utah
Supported by: Microsoft HPC Institutes, NSF CNS 0509379
“MPI Verification”, or: how to exhaustively verify MPI programs without the pain of model building, considering only “relevant interleavings”
Computing is at an inflection point (photo courtesy of Intel)
Our work pertains to these: • MPI programs • MPI libraries • Shared Memory Threads based on Locks
Name of the Game: Progress Through Precision • Precision in Understanding • Precision in Modeling • Precision in Analysis • Doing Modeling and Analysis with Low Cost
1. Need for Precision in Understanding: The “crooked barrier” quiz

P0: MPI_Isend(P2); MPI_Barrier
P1: MPI_Barrier; MPI_Isend(P2)
P2: MPI_Irecv(ANY); MPI_Barrier

Will P1’s send match P2’s receive?
Need for Precision in Understanding: The “crooked barrier” quiz

P0: MPI_Isend(P2); MPI_Barrier
P1: MPI_Barrier; MPI_Isend(P2)
P2: MPI_Irecv(ANY); MPI_Barrier

It will! The non-blocking MPI_Irecv is merely posted before the barrier; it need not complete until its corresponding wait, so a send issued after the barrier can still match it. A code sketch follows.
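A minimal C/MPI sketch of the quiz, as we would render it (run with exactly 3 ranks; buffer contents and tags are illustrative assumptions, and the final draining receive is added only so the sketch terminates):

#include <mpi.h>

int main(int argc, char **argv) {
    int rank, out = 42, in = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {            /* P0: Isend(P2) before the barrier */
        MPI_Isend(&out, 1, MPI_INT, 2, 0, MPI_COMM_WORLD, &req);
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {     /* P1: Isend(P2) after the barrier */
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Isend(&out, 1, MPI_INT, 2, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else {                    /* P2: wildcard Irecv posted before the barrier */
        MPI_Irecv(&in, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &req);
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* may have matched P0 OR P1 */
        /* drain the other send so both Isends complete (not part of the quiz) */
        MPI_Recv(&in, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}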
Would you rather explain each conceivable situation in a large API with an elaborate “bee dance” and informal English, or would you rather specify it mathematically and let the user calculate the outcomes?
Executable formal specification can help validate our understanding of MPI…

[Toolchain diagram: within a Visual Studio 2005 verification environment, the Phoenix compiler produces MPIC IR; a TLA+ MPI library model and a TLA+ program model are checked by the TLC model checker, while an MPIC program model is checked by the MPIC model checker. Published at FMICS 07 and PADTAD 07.]
2. Precision in Modeling: The “Byte-range Locking Protocol” Challenge

We were asked to see if a new protocol using MPI one-sided operations was OK…

[Figure: the lock window holds one (flag, start, end) triple per process, each initialized to (0, -1, -1); P0 and P1 contend for byte-range locks. A sketch of this window setup follows.]
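A minimal sketch of the window layout described above, assuming one (flag, start, end) triple per process initialized to (0, -1, -1); names such as wdata and win are illustrative, and the protocol's lock phases are elided:

#include <mpi.h>

int main(int argc, char **argv) {
    int nprocs, i;
    int *wdata;                  /* 3 ints per process: flag, start, end */
    MPI_Win win;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Alloc_mem(3 * nprocs * sizeof(int), MPI_INFO_NULL, &wdata);
    for (i = 0; i < 3 * nprocs; i += 3) {
        wdata[i] = 0;            /* flag  */
        wdata[i + 1] = -1;       /* start */
        wdata[i + 2] = -1;       /* end   */
    }
    MPI_Win_create(wdata, 3 * nprocs * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    /* ... lock-acquire / lock-release phases of the protocol go here ... */
    MPI_Win_free(&win);
    MPI_Free_mem(wdata);
    MPI_Finalize();
    return 0;
}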
Precision in Modeling: The “Byte-range Locking Protocol” Challenge
• Studied the code
• Wrote a Promela verification model (a week)
• Applied the SPIN model checker
• Found two previously unknown deadlocks
• Wrote a paper (EuroPVM/MPI 2006) with Thakur and Gropp; it won one of the three best-paper awards
• With the new insight, designed a correct AND faster protocol!
• Still, we felt lucky… what if we had missed the error while hand-modeling?
• Also, hand-modeling was NO FUN. How about running the real MPI code “cleverly”?
4. Modeling and Analysis with Reduced Cost…

[Figure: two card decks, Deck 0 and Deck 1, with cards at positions 0–5 in each.]
• Only the interleavings of the red cards matter
• So don’t try all riffle-shuffles: 12! / (6! 6!) = 924 of them
• Instead, just try TWO shuffles of the decks!
What works for cards works for MPI (and for PThreads also)!

P0 (owner of window) and P1 (non-owner of window) each execute:
0: MPI_Init
1: MPI_Win_lock
2: MPI_Accumulate
3: MPI_Win_unlock
4: MPI_Barrier
5: MPI_Finalize

• The highlighted one-sided operations (Win_lock / Accumulate / Win_unlock) are the dependent operations
• 504 interleavings without POR in this example
• 2 interleavings with POR! (A sketch of this example follows.)
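A hedged C/MPI sketch of the two-process one-sided example, with the window creation and freeing (which the op list above elides) filled in; values and window layout are illustrative assumptions:

#include <mpi.h>

int main(int argc, char **argv) {
    int rank, acc = 0, one = 1;
    MPI_Win win;
    MPI_Init(&argc, &argv);                        /* 0: MPI_Init */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* P0 owns the window memory; P1 exposes none */
    MPI_Win_create(rank == 0 ? (void *)&acc : NULL,
                   rank == 0 ? (MPI_Aint)sizeof(int) : 0,
                   sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);   /* 1: dependent */
    MPI_Accumulate(&one, 1, MPI_INT, 0, 0, 1, MPI_INT,
                   MPI_SUM, win);                  /* 2: dependent */
    MPI_Win_unlock(0, win);                        /* 3: dependent */
    MPI_Barrier(MPI_COMM_WORLD);                   /* 4: MPI_Barrier */
    MPI_Win_free(&win);
    MPI_Finalize();                                /* 5: MPI_Finalize */
    return 0;
}

Both ranks contend for the exclusive lock on rank 0's window, which is exactly the dependence that POR must interleave; everything else commutes.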
4. Modeling and Analysis with Reduced Cost: The “Byte-range Locking Protocol” Challenge, revisited
• Studied code → DID NOT STUDY CODE
• Wrote Promela verification model (a week) → NO MODELING
• Applied the SPIN model checker → NEW ISP VERIFIER
• Found two previously unknown deadlocks → FOUND SAME!
• Wrote paper (EuroPVM/MPI 2007) with Thakur and Gropp → DID NOT WIN
• Still, we felt lucky… → NO NEED TO FEEL LUCKY (no lost interleaving, but we also did not foolishly try ALL interleavings)
• Hand-modeling was NO FUN… → DIRECT RUNNING WAS FUN
3. Precision in Analysis: The “crooked barrier” quiz again…

P0: MPI_Isend(P2); MPI_Barrier
P1: MPI_Barrier; MPI_Isend(P2)
P2: MPI_Irecv(ANY); MPI_Barrier

Our cluster NEVER gave us the P0-to-P2 match!!! Elusive interleavings bite you the hardest when you port to a new platform!
3. Precision in Analysis: The “crooked barrier” quiz again…

SOLVED!! Using the new POE algorithm: Partial Order reduction in the presence of Out-of-order operations and Elusive interleavings.
Precision in Analysis
• POE works great (all 41 Umpire test suites run)
• No need to “pad” delay statements to jiggle the schedule and force “the other” interleaving (a very brittle trick anyway!). Jitterbug uses this approach; we don’t need it
• Preliminary version under submission; detailed version for EuroPVM…
• Siegel (MPI_SPIN): modeling effort
• Marmot: different coverage guarantees…
1-4: Finally! Precision and low cost in modeling and analysis, taking advantage of MPI semantics (in our heads…)

P0: MPI_Isend(P2); MPI_Barrier
P1: MPI_Barrier; MPI_Isend(P2)
P2: MPI_Irecv(ANY); MPI_Barrier

This is how POE does it.
Discover ALL potential senders by collecting (but not issuing) operations at runtime…

P0: MPI_Isend(P2); MPI_Barrier
P1: MPI_Barrier; MPI_Isend(P2)
P2: MPI_Irecv(ANY); MPI_Barrier
Rewrite “ANY” to ALL POTENTIAL SENDERS…

P0: MPI_Isend(P2); MPI_Barrier
P1: MPI_Barrier; MPI_Isend(P2)
P2: MPI_Irecv(P0); MPI_Barrier
Rewrite “ANY” to ALL POTENTIAL SENDERS…

P0: MPI_Isend(P2); MPI_Barrier
P1: MPI_Barrier; MPI_Isend(P2)
P2: MPI_Irecv(P1); MPI_Barrier
Recurse over all such configurations! (A schematic sketch of this recursion follows.)

P0: MPI_Isend(P2); MPI_Barrier
P1: MPI_Barrier; MPI_Isend(P2)
P2: MPI_Irecv(P1); MPI_Barrier
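A schematic, runnable model of the recursion, assuming two wildcard receives with two potential senders each; it illustrates only the “rewrite and recurse” structure, not ISP's actual implementation:

#include <stdio.h>

#define NWILD 2                 /* wildcard receives found in the run    */
#define NSEND 2                 /* potential senders per wildcard (here) */
static int senders[NWILD][NSEND] = { {0, 1}, {3, 4} };
static int choice[NWILD];       /* current rewrite for each wildcard     */

static void explore(int w) {
    if (w == NWILD) {           /* one complete configuration chosen */
        printf("replay with Irecv#0 <- P%d, Irecv#1 <- P%d\n",
               choice[0], choice[1]);
        return;
    }
    for (int i = 0; i < NSEND; i++) {
        choice[w] = senders[w][i];   /* rewrite ANY to a specific sender */
        explore(w + 1);              /* recurse over remaining wildcards */
    }
}

int main(void) { explore(0); return 0; }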
If we now have P0–P2 doing this, and P3–P5 doing the same computation among themselves, there is no need to interleave the two groups…

P0: MPI_Isend(P2); MPI_Barrier        P3: MPI_Isend(P5); MPI_Barrier
P1: MPI_Barrier; MPI_Isend(P2)        P4: MPI_Barrier; MPI_Isend(P5)
P2: MPI_Irecv(*); MPI_Barrier         P5: MPI_Irecv(*); MPI_Barrier
MPI is the de-facto standard for programming cluster machines (image courtesy of Steve Parker, CSAFE, Utah; BlueGene/L image courtesy of IBM / LLNL)

Our focus: help eliminate concurrency bugs from HPC programs, and apply similar techniques to other APIs as well (e.g., PThreads, OpenMP)
The success of MPI (Courtesy of Al Geist, EuroPVM / MPI 2007)
The Need for Formal Semantics for MPI

An MPI program is an interesting (and legal) combination of elements from these spaces (an illustrative combination follows):
• Operations: Send; Receive; Send/Receive; Send/Receive/Replace; Broadcast; Barrier; Reduce
• Completion modes: rendezvous; blocking; non-blocking
• Buffering: reliance on system buffering; user-attached buffering
• Restarts/cancels of MPI operations
• Matching: non-wildcard receives; wildcard receives; tag matching; communication spaces
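For instance, one legal combination from these spaces pairs a user-buffered, buffered-mode (non-rendezvous) send with a non-blocking wildcard receive that still uses tag matching. A hedged sketch (tags, sizes, and values are illustrative; run with 2 or more ranks):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, msg = 7, in = 0, bufsize;
    char *buf;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* user-attached buffering + buffered-mode send */
        bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);
        MPI_Bsend(&msg, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
    } else if (rank == 1) {
        /* non-blocking wildcard receive, matched on tag 99 */
        MPI_Irecv(&in, 1, MPI_INT, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}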
MPI library implementations would also change. Multi-core: how it affects MPI (courtesy, Al Geist). We need formal semantics for MPI because we can’t imitate any existing implementation:
• The core count rises but the number of pins on a socket is fixed; this accelerates the decrease in the bytes/flops ratio per socket
• The bandwidth to memory (per core) decreases
• The bandwidth to the interconnect (per core) decreases
• The bandwidth to disk (per core) decreases
We are only after “low hanging” bugs… • Look for commonly committed mistakes automatically • Deadlocks • Communication Races • Resource Leaks
Deadlock patterns… (a concrete sketch of the first pattern follows)

Head-to-head blocking sends:
P0: s(P1); r(P1);      P1: s(P0); r(P0);

Mismatched collective orders:
P0: Bcast; Barrier;    P1: Barrier; Bcast;
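A hedged sketch of the head-to-head pattern (run with exactly 2 ranks); the message size is an illustrative assumption chosen to defeat system buffering, since small sends may be buffered and mask the deadlock:

#include <mpi.h>

#define N (1 << 20)               /* large enough to force rendezvous */
static double out[N], in[N];

int main(int argc, char **argv) {
    int rank, peer;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;
    MPI_Send(out, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);  /* s(peer) */
    /* never reached: both ranks block in the MPI_Send above */
    MPI_Recv(in, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);                            /* r(peer) */
    MPI_Finalize();
    return 0;
}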
Communication race pattern… (a sketch follows)

P0: r(*); r(P1);      P1: s(P0);      P2: s(P0);

OK:  the wildcard receive r(*) matches P2’s send, and r(P1) then matches P1’s send
NOK: r(*) matches P1’s send; r(P1) then blocks forever while P2’s send goes unmatched
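A hedged sketch of this race (run with exactly 3 ranks); whether it deadlocks depends on which send the wildcard receive happens to match:

#include <mpi.h>

int main(int argc, char **argv) {
    int rank, a = 0, b = 0, msg;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    msg = rank;
    if (rank == 0) {
        MPI_Recv(&a, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);          /* r(*)  : races P1 vs P2  */
        MPI_Recv(&b, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);          /* r(P1) : blocks forever
                                                 if r(*) took P1's send */
    } else {
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);  /* s(P0) */
    }
    MPI_Finalize();
    return 0;
}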
Resource leak pattern… (a sketch follows)

P0: some_allocation_op(&handle); FORGOTTEN DEALLOC!!
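A hedged sketch of the pattern; the slide's some_allocation_op is generic, and a duplicated communicator is our illustrative choice of leaked resource:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Comm dup;
    MPI_Init(&argc, &argv);
    MPI_Comm_dup(MPI_COMM_WORLD, &dup);   /* some_allocation_op(&handle) */
    /* FORGOTTEN DEALLOC: MPI_Comm_free(&dup) is never called */
    MPI_Finalize();
    return 0;
}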
Partial Order Reduction Illustrated…

With 3 processes (each having 3 local states), the interleaved state space has 3^3 = 27 states. Partial-order reduction explores representative sequences from each equivalence class, delaying the execution of independent transitions. In this example, it is possible to “get away” with 7 states (one interleaving).
A Deadlock Example… (off-by-one deadlock)

[Trace: p0 posts receives from sources 0, 1, 2, … while p1, p2, p3 each send to p0; the receive from source 0 can never be matched.]

// Add up the integrals calculated by each process
if (my_rank == 0) {
    total = integral;
    for (source = 0; source < p; source++) {  /* BUG: includes source 0,
                                                 but rank 0 never sends,
                                                 so this Recv deadlocks */
        MPI_Recv(&integral, 1, MPI_FLOAT, source, tag,
                 MPI_COMM_WORLD, &status);
        total = total + integral;
    }
} else {
    MPI_Send(&integral, 1, MPI_FLOAT, dest, tag, MPI_COMM_WORLD);
}
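A hedged, self-contained version with the fix: rank 0 must not receive from itself, so the loop starts at source = 1. The integral computation itself is stubbed out with an illustrative value:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int my_rank, p, source, tag = 0, dest = 0;
    float integral, total;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    integral = (float)my_rank;            /* stand-in for the real integral */
    if (my_rank == 0) {
        total = integral;
        for (source = 1; source < p; source++) {   /* FIX: start at 1 */
            MPI_Recv(&integral, 1, MPI_FLOAT, source, tag,
                     MPI_COMM_WORLD, &status);
            total = total + integral;
        }
        printf("total = %f\n", total);
    } else {
        MPI_Send(&integral, 1, MPI_FLOAT, dest, tag, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}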
Organization of ISP

[Diagram: the MPI program is simplified and compiled into an executable whose MPI operations are intercepted via PMPI calls; at runtime, each process (Proc 1 … Proc n) exchanges request/permit messages with the ISP scheduler, which issues the permitted operations into the actual MPI library and runtime.]
Summary (we have posters for each)
• Formal semantics for a large subset of MPI 2.0
• Executable semantics for about 150 MPI 2.0 functions
• User interaction through the Visual Studio API
• Direct execution of user MPI programs to find issues: downscale the code, remove data that does not affect control, etc.
• New partial-order reduction algorithm: explores only relevant interleavings
• User can insert barriers to contain complexity; a new vector-clock algorithm determines whether the barriers are safe
• Errors detected: deadlocks, communication races, resource leaks
• Direct execution of PThreads programs to find issues
• Adaptation of dynamic partial-order reduction reduces interleavings
• Parallel implementation scales linearly
We also built a POR explorer for C / PThreads programs, called “Inspect”

[Diagram: the multithreaded C/C++ program is instrumented and compiled into an instrumented executable; at runtime, each thread (thread 1 … thread n) exchanges request/permit messages with the Inspect scheduler through a thread-library wrapper.]