This study explores the use of iterative methods on many-core GPUs, focusing on weak execution ordering and optimizing inter-block communication for improved performance. The CUDA programming model is reviewed, and applications with iterative PDE solvers are discussed, along with optimizations and performance results. The study concludes with suggestions for reducing host synchronization overhead, improving inter-block communication, and optimizing block scheduling.
Weak Execution Ordering - Exploiting Iterative Methods on Many-Core GPUs
Jianmin Chen, Zhuo Huang, Feiqi Su, Jih-Kwon Peir and Jeff Ho, University of Florida; Lu Peng, Louisiana State University
Outline
• CUDA review and inter-block communication and synchronization
• Host synchronization overhead
• Applications with iterative PDE solvers
• Optimizations on inter-block communication
• Performance results
• Conclusion
CUDA Programming Model
• The host invokes kernels (grids) to execute on the GPU
• A kernel/grid consists of blocks; each block consists of threads
[Diagram: host execution alternates with kernel 0 and kernel 1; each kernel is a grid of blocks (Block 0 ... Block N) made up of threads]
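A minimal sketch of this execution model, assuming an illustrative kernel name and launch shape (none of these identifiers come from the slides):

```cuda
// Sketch of the CUDA model above: the host launches a grid of blocks,
// each block containing threads; names and sizes are illustrative.
__global__ void kernel0(float *data, int n)
{
    // Each thread handles one element of the array.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    dim3 threadsPerBlock(256);
    dim3 blocksPerGrid((n + 255) / 256);

    // Host execution alternates with kernel/grid execution on the GPU.
    kernel0<<<blocksPerGrid, threadsPerBlock>>>(d_data, n);
    cudaDeviceSynchronize();   // the host waits for the whole grid to finish

    cudaFree(d_data);
    return 0;
}
```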
CUDA GPU Architecture
• Blocks are assigned to Streaming Multiprocessors (SMs), each composed of 8 Streaming Processors (SPs) and a shared (local) memory
• The number of concurrently resident blocks is limited by SM resources; waiting blocks are held by the scheduler
• Blocks can communicate through global memory (GM), but data in shared memory is lost when the kernel returns to the host
• Block synchronization must go through the host; there is no synchronization among blocks!
[Diagram: GPU with SM 0 ... SM 29, each containing SPs and shared memory, connected through the interconnect network to global memory; the scheduler holds waiting blocks (Block 58, Block 59, ...)]
Example: Breadth-First Search (BFS)
• Given a graph G(V, E) and a source node S, compute the number of steps to reach all other nodes
• Each thread computes one node
• Initially all nodes are inactive except the source node
• When a node is activated, visit it and activate its unvisited neighbors
• Nodes visited in the nth iteration are n-1 steps from the source
• Keep iterating until no node is active
• Synchronization is needed after each iteration
[Diagram: example graph with nodes S, A, B, C, D, E showing inactive/active/visited states over three iterations]
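A minimal sketch of one BFS iteration with one thread per node, assuming a CSR adjacency layout (rowPtr/colIdx) and a level array initialized to -1 except level[source] = 0; all names are illustrative, not from the slides:

```cuda
// One BFS step: threads whose node is in the current frontier visit it
// and activate its unvisited neighbors for the next iteration.
__global__ void bfs_step(const int *rowPtr, const int *colIdx,
                         int *level, int *anyActive, int numNodes, int step)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= numNodes || level[v] != step - 1) return;   // not in the frontier

    for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e) {
        int u = colIdx[e];
        if (level[u] < 0) {        // unvisited: reached in 'step' iterations
            level[u] = step;       // concurrent writers store the same value
            *anyActive = 1;        // tell the host another iteration is needed
        }
    }
}
```

The host would launch bfs_step once per iteration, copy anyActive back after each launch, and stop once it stays zero; that per-iteration kernel launch plus device-to-host copy is the host-synchronization cost discussed next.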
No-Host vs. Host Synchronization
• Limit the number of nodes to fit in one block to avoid host synchronization
• Host synchronization can then be replaced by __syncthreads()
• Avoids the overhead of multiple kernel invocations
• Data can stay in shared memory, reducing global memory accesses for save/restore
• Reduces transfers of intermediate partial data and of the termination flag to the host during host synchronization
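A minimal sketch of the no-host variant under the same assumed CSR layout: when the whole graph fits in a single block, the per-iteration host round trip is replaced by __syncthreads() and the level array stays in shared memory. MAX_NODES and all other names are illustrative assumptions.

```cuda
#define MAX_NODES 3072   // assumed upper bound that fits in shared memory

__global__ void bfs_one_block(const int *rowPtr, const int *colIdx,
                              int *levelOut, int numNodes, int source)
{
    __shared__ int level[MAX_NODES];
    __shared__ int anyActive;

    // Each thread owns several nodes when numNodes > blockDim.x.
    for (int v = threadIdx.x; v < numNodes; v += blockDim.x)
        level[v] = (v == source) ? 0 : -1;

    int step = 1;
    while (true) {
        if (threadIdx.x == 0) anyActive = 0;
        __syncthreads();                        // also publishes the init above

        for (int v = threadIdx.x; v < numNodes; v += blockDim.x) {
            if (level[v] != step - 1) continue;          // not in the frontier
            for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e) {
                int u = colIdx[e];
                if (level[u] < 0) {              // activate unvisited neighbor
                    level[u] = step;
                    anyActive = 1;
                }
            }
        }
        __syncthreads();           // block-wide sync replaces the host sync
        if (!anyActive) break;     // no active nodes left: done
        ++step;
    }

    for (int v = threadIdx.x; v < numNodes; v += blockDim.x)
        levelOut[v] = level[v];    // single write-back instead of per-iteration saves
}
```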
No-Host/Host Result
• Graph generated by GTgraph with 3K nodes
• No-host uses __syncthreads() in each iteration
[Chart: no-host vs. host execution time, showing 67% host-synchronization overhead]
Applications with Iterative PDE Solvers
• Partial differential equation (PDE) solvers are widely used
• Weak execution ordering: chaotic PDE solvers using iterative methods
• The accuracy of the solver is NOT critical
• Examples: Poisson image editing, 3D shape from shading
Basic 3D-Shape in CUDA
• New(x,y) = f(Old(x-1,y), Old(x,y-1), Old(x,y+1), Old(x+1,y))
• Each block computes a sub-grid; nodes from neighboring blocks are needed to compute boundary nodes
• Host synchronization: go back to the host after each iteration
• But no exact ordering is needed!
[Diagram: grid in global memory partitioned into sub-grids for Block 0 ... Block 5, with the Block 2 and Block 5 sub-grids staged in shared memory]
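A minimal sketch of this baseline, assuming a row-major grid whose dimensions are multiples of an illustrative TILE size; f() here is a placeholder 4-point average, not the actual shape-from-shading update. Every node reads its four neighbors from global memory, so block-boundary nodes naturally pick up the neighboring blocks' values, and the host relaunches the kernel each iteration.

```cuda
#define TILE 16

__device__ float f(float left, float up, float down, float right)
{
    return 0.25f * (left + up + down + right);   // illustrative Jacobi-style update
}

__global__ void stencil_step(const float *oldGrid, float *newGrid,
                             int width, int height)
{
    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= width - 1 || y >= height - 1) return;

    newGrid[y * width + x] = f(oldGrid[y * width + (x - 1)],
                               oldGrid[(y - 1) * width + x],
                               oldGrid[(y + 1) * width + x],
                               oldGrid[y * width + (x + 1)]);
}

// Host side (sketch): one kernel launch per iteration forces a full host sync.
// for (int it = 0; it < maxIter; ++it) {
//     stencil_step<<<dim3(W / TILE, H / TILE), dim3(TILE, TILE)>>>(d_old, d_new, W, H);
//     cudaDeviceSynchronize();
//     std::swap(d_old, d_new);
// }
```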
Coarse Synchronization
• Host synchronization every n iterations
• Between host synchronizations, blocks communicate through global memory with neighboring blocks for updated boundary nodes
[Diagram: Block 2 and Block 5 sub-grids in shared memory exchanging boundary nodes through global memory]
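A minimal sketch of coarse synchronization, again assuming grid dimensions that are multiples of TILE: each block keeps its sub-grid in shared memory, runs INNER iterations per kernel launch, and between inner iterations exchanges only boundary nodes with its neighbors through global memory. Stale values from blocks that have not run yet are tolerated because the solver is weakly ordered. TILE, INNER, and the 4-point average are illustrative assumptions.

```cuda
#define TILE  16
#define INNER 8   // host synchronization only every INNER iterations

__global__ void stencil_coarse(float *gGrid, int width, int height)
{
    __shared__ float tile[TILE + 2][TILE + 2];           // sub-grid plus halo

    int lx = threadIdx.x + 1, ly = threadIdx.y + 1;
    int gx = blockIdx.x * TILE + threadIdx.x;
    int gy = blockIdx.y * TILE + threadIdx.y;
    bool interior = gx > 0 && gy > 0 && gx < width - 1 && gy < height - 1;
    bool onEdge   = threadIdx.x == 0 || threadIdx.x == TILE - 1 ||
                    threadIdx.y == 0 || threadIdx.y == TILE - 1;

    tile[ly][lx] = gGrid[gy * width + gx];               // load own nodes once

    for (int it = 0; it < INNER; ++it) {
        // Refresh the halo with whatever the neighbor blocks last published.
        if (threadIdx.y == 0        && gy > 0)          tile[0][lx]        = gGrid[(gy - 1) * width + gx];
        if (threadIdx.y == TILE - 1 && gy < height - 1) tile[TILE + 1][lx] = gGrid[(gy + 1) * width + gx];
        if (threadIdx.x == 0        && gx > 0)          tile[ly][0]        = gGrid[gy * width + gx - 1];
        if (threadIdx.x == TILE - 1 && gx < width - 1)  tile[ly][TILE + 1] = gGrid[gy * width + gx + 1];
        __syncthreads();

        float v = tile[ly][lx];
        if (interior)
            v = 0.25f * (tile[ly][lx - 1] + tile[ly - 1][lx] +
                         tile[ly + 1][lx] + tile[ly][lx + 1]);
        __syncthreads();                                 // finish all reads first
        tile[ly][lx] = v;

        // Publish this block's boundary nodes for its neighbors to read.
        if (onEdge) {
            gGrid[gy * width + gx] = v;
            __threadfence();          // make the write visible to other blocks
        }
        __syncthreads();
    }

    gGrid[gy * width + gx] = tile[ly][lx];   // write back before returning to host
}
```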
Coarse vs. Fine Host Synchronization
• Coarse synchronization: less synchronization overhead, but more iterations to converge due to imprecise boundary updates through inter-block communication
• Reduce inter-block communication overhead:
• Overlap communication with computation
• Neighbor communication: upper/lower only vs. all four neighbors
• Block scheduling strategy: square vs. stripe
Overlap Communication with Computation
• Separate communication threads overlap with computation
• No precise ordering is needed
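A simplified sketch of this idea, not the authors' exact implementation: the block is launched with TILE x (TILE + 1) threads, where rows 0..TILE-1 relax one node each and the extra row acts as communication threads that keep exchanging the upper/lower boundary rows with neighbor blocks through global memory. The two groups never synchronize with each other inside the loop; the weakly ordered solver tolerates the resulting stale halo values. All names, sizes, and the 4-point average are illustrative assumptions, and the grid dimensions are assumed to be multiples of TILE.

```cuda
#define TILE  16
#define INNER 8   // inner iterations between host synchronizations

__global__ void stencil_overlap(float *gGrid, int width, int height)
{
    // volatile: chaotic in-place updates must go through shared memory
    __shared__ volatile float tile[TILE + 2][TILE + 2];

    int tx = threadIdx.x, ty = threadIdx.y;
    int gx = blockIdx.x * TILE + tx;
    int top = blockIdx.y * TILE, bot = top + TILE - 1;

    if (ty < TILE) {                          // compute threads load own nodes
        int gy = top + ty;
        tile[ty + 1][tx + 1] = gGrid[gy * width + gx];
        // Left/right halo columns are loaded once and then left stale,
        // matching the "upper/lower neighbors only" communication scheme.
        if (tx == 0 && gx > 0)              tile[ty + 1][0]        = gGrid[gy * width + gx - 1];
        if (tx == TILE - 1 && gx < width - 1) tile[ty + 1][TILE + 1] = gGrid[gy * width + gx + 1];
    } else {                                  // communication threads pre-load halo rows
        if (top > 0)          tile[0][tx + 1]        = gGrid[(top - 1) * width + gx];
        if (bot < height - 1) tile[TILE + 1][tx + 1] = gGrid[(bot + 1) * width + gx];
    }
    __syncthreads();

    if (ty == TILE) {
        // Communication threads: publish this block's boundary rows and
        // refresh the halo rows, concurrently with the compute threads.
        for (int it = 0; it < INNER; ++it) {
            gGrid[top * width + gx] = tile[1][tx + 1];
            gGrid[bot * width + gx] = tile[TILE][tx + 1];
            __threadfence();                  // make the writes visible to other blocks
            if (top > 0)          tile[0][tx + 1]        = gGrid[(top - 1) * width + gx];
            if (bot < height - 1) tile[TILE + 1][tx + 1] = gGrid[(bot + 1) * width + gx];
        }
    } else {
        // Compute threads: chaotic in-place relaxation, one node per thread,
        // reading whatever halo values the communication threads last wrote.
        int gy = top + ty;
        bool interior = gx > 0 && gy > 0 && gx < width - 1 && gy < height - 1;
        for (int it = 0; it < INNER; ++it)
            if (interior)
                tile[ty + 1][tx + 1] =
                    0.25f * (tile[ty + 1][tx] + tile[ty][tx + 1] +
                             tile[ty + 2][tx + 1] + tile[ty + 1][tx + 2]);
    }

    __syncthreads();                          // reconverge before the write-back
    if (ty < TILE)
        gGrid[(top + ty) * width + gx] = tile[ty + 1][tx + 1];
}
```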
Overlap Communication with Computation (cont.)
• Communication frequency trade-off:
• Execution Time = Time per Iteration × Number of Iterations
Neighbor Communication
• Communicate with upper and lower neighbors only:
  - Less data communicated through global memory
  - Coalesced memory moves
  - Incomplete boundary data, so slower convergence
• Communicate with all four neighbors:
  - More data and uncoalesced memory moves
  - May converge faster
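A small sketch of why the two options differ in memory behavior for a row-major grid (helper names are illustrative): consecutive threads copying a boundary row touch consecutive global addresses (coalesced), while copying a boundary column strides by the grid width (uncoalesced).

```cuda
// Thread t writes gGrid[rowY][blockX + t]: addresses are contiguous (coalesced).
__device__ void exchange_row(float *gGrid, const float *rowShared,
                             int rowY, int blockX, int width)
{
    gGrid[rowY * width + blockX + threadIdx.x] = rowShared[threadIdx.x];
}

// Thread t writes gGrid[blockY + t][colX]: addresses are 'width' apart (uncoalesced).
__device__ void exchange_col(float *gGrid, const float *colShared,
                             int colX, int blockY, int width)
{
    gGrid[(blockY + threadIdx.x) * width + colX] = colShared[threadIdx.x];
}
```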
Block Scheduling
• Blocks are scheduled in groups because of limited resources; no updated data comes from inactive blocks
• Goal: minimize the boundary nodes of the whole active group
• Stripe scheduling vs. square scheduling
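A small sketch of how a scheduled block's linear index could be mapped to sub-grid coordinates under the two strategies; GROUP and the helper names are assumptions for illustration, not the paper's exact scheme. With stripe scheduling, consecutive blocks take consecutive sub-grids in row-major order; with square scheduling, consecutive blocks are packed into GROUP x GROUP squares, so the concurrently resident group shares fewer boundary nodes with inactive blocks.

```cuda
#define GROUP 4   // e.g. a 4 x 4 square of blocks resident per scheduling group

// Stripe: the i-th scheduled block takes the i-th sub-grid in row-major order.
__device__ int2 stripe_coords(int bid, int blocksPerRow)
{
    return make_int2(bid % blocksPerRow, bid / blocksPerRow);
}

// Square: consecutive blocks fill a GROUP x GROUP square before moving on
// (assumes blocksPerRow is a multiple of GROUP).
__device__ int2 square_coords(int bid, int blocksPerRow)
{
    int group        = bid / (GROUP * GROUP);   // which GROUP x GROUP square
    int within       = bid % (GROUP * GROUP);   // position inside the square
    int groupsPerRow = blocksPerRow / GROUP;
    int baseX = (group % groupsPerRow) * GROUP;
    int baseY = (group / groupsPerRow) * GROUP;
    return make_int2(baseX + within % GROUP, baseY + within / GROUP);
}
```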
Conclusion
• Inter-block synchronization is not supported on GPUs and has a significant impact on asynchronous PDE solvers
• Coarse synchronization and further optimizations improve overall performance:
  - Separate communication threads to overlap with computation
  - Block scheduling and inter-block communication strategies
• Speedup of 4-5x compared with fine-granularity host synchronization
Thank You!! Questions?