
Understanding the Tradeoffs between Software-Managed vs. Hardware-Managed Caches in GPUs


Presentation Transcript


  1. Understanding the Tradeoffs between Software-Managed vs. Hardware-Managed Caches in GPUs — Chao Li (North Carolina State University), Yi Yang (NEC Lab), Hongwen Dai (North Carolina State University), Shenggen Yan (Institute of Software, CAS), Frank Mueller (North Carolina State University), Huiyang Zhou (North Carolina State University)

  2. Introduction
  • Two ways to manage on-chip caches effectively:
  • Explicit software management: shared memory in GPUs; near memory in MIC KNL.
  • Implicit hardware management: the L1 data cache (L1 D-cache).
  • Accelerators like GPUs provide an ideal platform to compare the two:
  • Both share the same hardware resources.
  • The split between them is configured using a runtime API (see the sketch below).
  • Insight into two questions:
  • Is it worthwhile for application developers to explicitly manage shared memory, given the existence of the hardware-managed L1 D-caches in GPUs?
  • What are the main reasons for code utilizing shared memory to outperform code leveraging L1 D-caches (and vice versa)?
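  For reference, a minimal sketch of how the shared-memory/L1 split can be requested through the CUDA runtime API (the kernel name myKernel is illustrative, not from the paper; on Fermi/Kepler the two preferences correspond to 48KB/16KB splits of the 64KB on-chip storage):

      #include <cuda_runtime.h>

      __global__ void myKernel(float *out, const float *in);  // illustrative kernel

      void configureCaches() {
          // Prefer 48KB shared memory / 16KB L1 for this kernel
          // (the setting used by the shared-memory versions).
          cudaFuncSetCacheConfig(myKernel, cudaFuncCachePreferShared);

          // Or prefer 16KB shared memory / 48KB L1 device-wide
          // (the setting used by the D-cache versions).
          cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);
      }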

  3. Outline
  • Four detailed case studies: Matrix Multiplication, FFT, Marching Cubes, Pathfinder
  • Benchmark categorization and experimental results
  • Conclusions

  4. Matrix-Multiplication [Figure: tiled matrix multiplication C = A * B. One tile of A and one tile of B at a time move from DRAM through the L2 cache into on-chip storage (L1 D-cache or shared memory) feeding the processing elements (PEs).]

  5. Matrix-Multiplication [Same figure as above.] Shared memory version vs. D-cache version: 1) same tiling optimization; 2) similar cache performance (95% hit ratio); 3) same hardware, similar latency. Surprisingly, the D-cache version is 43.8% slower than the shared memory version.

  6. Matrix-Multiplication
  • Code
  Software-managed cache version (i.e., shared memory version):
      for (each thread block) {
          __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
          __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];
          AS(ty, tx) = A[a + WA * ty + tx];
          BS(ty, tx) = B[b + WB * ty + tx];
          __syncthreads();
          #pragma unroll
          for (int k = 0; k < BLOCK_SIZE; ++k)
              Csub += AS(ty, k) * BS(k, tx);
          __syncthreads();
      }
  Hardware-managed cache version (i.e., D-cache version):
      for (each thread block) {
          #pragma unroll
          for (int k = 0; k < BLOCK_SIZE; ++k) {
              Csub += A[a+WA*ty+k] * B[b+k*WB+tx];
          }
      }
  Slide callouts: the loads into the on-chip AS and BS arrays are the explicit data movement overhead of the shared memory version; both versions use the same tiling; performance annotations: "112.2% slower", "43.8% slower".
  • Configuration: capacity 48KB for both caches; matrix size 256*256; tile size 16*16; thread block (16,16); thread block (TB) number: 5 for both versions.

  7. Shared memory version:
      __global__ void Micro_sm(float *D1, float *D2, int iter, int initer, int stride) {
          int i, j;
          float temp = 0;
          __shared__ float sh[32];
          for (j = 0; j < iter; j++) {
              sh[threadIdx.x] = D1[threadIdx.x];
              if (threadIdx.x == 0) {
                  for (i = 0; i < initer; i++) { temp += sh[i]; }
              }
              D1 += stride;
          }
          D2[0] = temp;
      }
  D-cache version:
      __global__ void Micro_L1(float *D1, float *D2, int iter, int initer, int stride) {
          int i, j;
          float temp = 0;
          for (j = 0; j < iter; j++) {
              temp = D1[threadIdx.x];
              if (threadIdx.x == 0) {
                  for (i = 0; i < initer; i++) { temp += D1[i]; }
              }
              D1 += stride;
          }
          D2[0] = temp;
      }
  Dynamic instruction count: the shared-memory version has 12.7% more instructions than the D-cache version. Perfect L1 cache: the shared-memory version is still faster than the D-cache version (by 0.2%). Micro-benchmark: is it possible for the D-cache version to outperform the shared-memory version?
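  A possible host-side driver for this micro-benchmark (a sketch only, assuming the two kernels above are in the same compilation unit; the array sizes, grid/block dimensions, and parameter values are illustrative assumptions, not taken from the paper):

      #include <cuda_runtime.h>
      #include <cstdio>

      int main() {
          const int iter = 1024, initer = 32, stride = 32;   // assumed values
          float *D1, *D2;
          // D1 must cover iter strided steps of 32 floats; D2 holds one result.
          cudaMalloc(&D1, (iter * stride + 32) * sizeof(float));
          cudaMalloc(&D2, sizeof(float));
          cudaMemset(D1, 0, (iter * stride + 32) * sizeof(float));

          // One warp (32 threads) matches the __shared__ float sh[32] buffer.
          Micro_sm<<<1, 32>>>(D1, D2, iter, initer, stride);
          Micro_L1<<<1, 32>>>(D1, D2, iter, initer, stride);
          cudaDeviceSynchronize();

          float result;
          cudaMemcpy(&result, D2, sizeof(float), cudaMemcpyDeviceToHost);
          printf("result = %f\n", result);
          cudaFree(D1); cudaFree(D2);
          return 0;
      }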

  8. Dynamic instructions: the shared-memory version has 12.7% more instructions than the D-cache version. Perfect L1 cache: the shared-memory version is still faster than the D-cache version (by 0.2%). Micro-benchmark: the D-cache version is 13.0% faster than the shared-memory version.
  • Cache configuration:
  • 1) Cache associativity: going from 6-way set-associative to fully associative reduces the miss rate from 3.59 MPKI to 1.79 MPKI and improves performance by 12.8%, but the code is still 88.7% slower than the shared-memory version.
  • 2) Cache capacity: 16KB vs. 48KB has no impact.
  What happened? It is not the total number of misses that matters, but how cache misses overlap with each other, i.e., memory-level parallelism (MLP).

  9. The inner loop of the D-cache version, analyzed on the following slides:
      for (int k = 0; k < BLOCK_SIZE; ++k) {
          Csub += A[a+WA*ty+k] * B[b+k*WB+tx];
      }

  10. D-cache version, K=0: each warp (tx 0~15, ty spanning two rows) loads A[a+WA*ty+0]; the 16 distinct rows of A (A[a+WA*0+0] through A[a+WA*15+0]) produce 16 cache misses. All warps load the same line B[b+0*WB+0~15]: warp 0 misses and the rest hit, producing 1 cache miss. Total: 16 + 1 = 17 cache misses.

  11. D-cache version, K=1: the loads of A[a+WA*ty+1] hit in the lines brought in at K=0 (0 misses), while B[b+1*WB+0~15] is a new line (1 miss). Total: 0 + 1 = 1 cache miss, issued only after the 17 misses of K=0 have been serviced.

  12. D-cache version, K=2: again the A loads hit and the B load misses once, so each subsequent iteration adds 1 cache miss, one iteration after another.

  13. [Timeline: K=0 (LD A, LD B): 17 misses; K=1 (LD A, LD B): 1 miss; K=2 (LD A, LD B): 1 miss.]

  14. [Timeline for K=0..15: 17 misses at K=0, then 1 miss in each of K=1..15; the misses of different iterations are serviced one after another.]

  15. Shared memory version, loading the A tile (AS(ty, tx) = A[a + WA * ty + tx]): warps 0~7 (tx 0~15, ty 0~15) together touch the 16 rows A[a+WA*0+0] through A[a+WA*15+0], giving 16 cache misses that overlap across the warps. Total: 16 cache misses.

  16. Shared memory version, loading the B tile (BS(ty, tx) = B[b + WB * ty + tx]) after the sequence ld A, sts A, ld B: the 16 rows of B give another 16 cache misses, issued right after the 16 misses for the A tile. Total: 16 cache misses.

  17. [Timeline: ld A, sts A, ld B (16 misses + 16 misses), then __syncthreads(); for K=0..15 the loop body is lds a, lds b, fma with no further cache misses.]

  18. D-cache version: the 17 misses at K=0 and the single miss in each of K=1..15 are exposed one iteration at a time (low MLP). Shared-memory version: all 32 misses (16 for the A tile and 16 for the B tile) overlap before the compute loop of lds a, lds b, fma (high MLP), so the exposed stall cycles are reduced.
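  A rough back-of-the-envelope view of why this matters (a sketch only; the average miss latency L is an assumed symbol, not a number from the paper): both versions take about the same 32 misses per tile (17 + 15 x 1 = 32 for the D-cache version), but the D-cache version exposes them in 16 serialized rounds (one per K iteration) while the shared memory version exposes them in a single overlapped round before the compute loop:

  \[
  \text{stall}_{\text{D-cache}} \approx 16 \cdot L
  \qquad\text{vs.}\qquad
  \text{stall}_{\text{shared memory}} \approx 1 \cdot L
  \]

  per tile, before accounting for the latency hiding provided by other warps and thread blocks.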

  19. Short Summary
  • Matrix Multiplication (shared memory version is faster): higher memory-level parallelism for the shared memory version, despite its larger dynamic instruction count.
  • Fast Fourier Transform (shared memory version is faster): the D-cache version suffers from the write-evict policy and from uncoalesced memory accesses (see the sketch below).
  • Marching Cubes (D-cache version is faster): higher thread-level parallelism for the D-cache version.
  • PathFinder (D-cache version is faster): more opportunities to store data in registers for the D-cache version.
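  A minimal sketch of the coalescing issue mentioned above (illustrative code, not taken from the FFT benchmark; the kernel names, array name data, and the stride are assumptions):

      // Coalesced: consecutive threads of a warp touch consecutive words,
      // so a warp's 32 loads map to one or two cache lines / transactions.
      __global__ void coalesced(float *out, const float *data) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          out[i] = data[i];
      }

      // Uncoalesced: a stride spreads a warp's 32 loads over up to 32
      // different cache lines, multiplying the traffic seen by the L1 D-cache.
      __global__ void uncoalesced(float *out, const float *data, int stride) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          out[i] = data[i * stride];   // e.g., stride = 32
      }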

  20. Experimental Methodology
  • Experimental setup:
  • Real hardware: GTX480 (Fermi) and GTX680 (Kepler).
  • Simulator: GPGPU-Sim v3.2.1. [Table: simulator configuration.]

  21. Benchmarks • 16 GPGPU workloads • They cover the typical on-chip memory usage patterns.

  22. Performance Impact: Fermi [Chart: performance on GTX480; highlighted value: 55.7%.]

  23. Performance Impact: Kepler [Chart: performance on GTX680.]

  24. Performance Impact: GPGPU-Sim [Chart: performance on GPGPU-Sim; highlighted value: 27%.]

  25. Performance Impact: GPGPU-Sim [Chart: performance on GPGPU-Sim; highlighted value: 27%.]

  26. Energy Impact: GPUWattch • On average, the shared memory versions consume 48.5% of the energy of the D-cache versions with the write-evict (WE) policy, 53.7% of the D-cache versions with write-back write-allocate (WBWA), and 71.9% of the D-cache versions with WBWA and a fully associative (FA) cache.

  27. Conclusion
  • An in-depth study of the interesting tradeoffs between software-managed caches and hardware-managed caches.
  • Key reasons for shared memory versions to outperform D-cache versions:
  • memory-level parallelism;
  • memory coalescing (bank-conflict-free accesses).
  • D-cache versions mainly benefit from:
  • improved thread-level parallelism;
  • using registers to store variables.
  • Shared memory versions outperform D-cache versions and consume less energy for most of the benchmarks, justifying the software complexity of managing the shared memory.

  28. Thank You! Questions?

  29.
      for (int a = aBegin, b = bBegin; a <= aEnd; a += aStep, b += bStep) {
          prefetch(A[a + WA * ty + tx]);   // prefetch the tile of matrix A
          prefetch(B[b + WB * ty + tx]);   // prefetch the tile of matrix B
          __syncthreads();
          #pragma unroll
          for (int k = 0; k < BLOCK_SIZE; ++k) {
              Csub += A[a+WA*ty+k] * B[b+k*WB+tx];   // tx = threadIdx.x and ty = threadIdx.y
          }
      }
  Figure. The D-cache version of matrix multiplication with prefetching instructions to improve MLP.
  • Similar complexity compared to explicit data management.
  • The subtle MLP impact is not obvious, so engaging in such a prefetching optimization is non-intuitive.
  • Even with prefetching instructions, the code is still 8% slower than the shared memory version.
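  The prefetch() calls above are written as pseudo-code; one way they could be realized in CUDA is with an inline-PTX wrapper (a sketch, assuming the target GPU supports the prefetch.global.L1 PTX instruction, as Fermi- and Kepler-class devices do; the helper name is ours, not from the paper):

      // Hypothetical helper, not from the paper: issue a non-binding prefetch
      // of the cache line containing addr into the L1 D-cache.
      __device__ __forceinline__ void prefetch_l1(const void *addr) {
          asm volatile("prefetch.global.L1 [%0];" :: "l"(addr));
      }

      // Usage inside the tiled loop sketched above:
      //   prefetch_l1(&A[a + WA * ty + tx]);
      //   prefetch_l1(&B[b + WB * ty + tx]);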
