
Optimization on Kepler


Presentation Transcript


  1. Optimization on Kepler Zehuan Wang zehuan@nvidia.com

  2. Fundamental Optimization

  3. Optimization Overview • GPU architecture • Kernel optimization • Memory optimization • Latency optimization • Instruction optimization • CPU-GPU interaction optimization • Overlapped execution using streams

  4. Optimization Overview • GPU architecture • Kernel optimization • Memory optimization • Latency optimization • Instruction optimization • CPU-GPU interaction optimization • Overlapped execution using streams

  5. GPU High Level View
  • (diagram: the CPU and chipset connect over PCIe to the GPU; the GPU contains multiple streaming multiprocessors, each with its own shared memory (SMEM), all sharing global memory)

  6. GK110 SM
  • Control unit
    • 4 warp schedulers
    • 8 instruction dispatchers
  • Execution units
    • 192 single-precision CUDA cores
    • 64 double-precision units
    • 32 SFUs, 32 LD/ST units
  • Memory
    • Registers: 64K 32-bit
    • Caches: L1 + shared memory (64 KB), texture, constant

  7. GPU and Programming Model (software to hardware mapping)
  • Threads are executed by CUDA cores
  • Thread blocks are executed on multiprocessors
    • Thread blocks do not migrate
    • Several concurrent thread blocks can reside on one multiprocessor, limited by multiprocessor resources
  • A kernel is launched as a grid of thread blocks
    • Up to 16 kernels can execute on a device at one time

  8. Warp
  • A warp is 32 successive threads in a block (32 threads = 1 warp)
  • E.g. blockDim = 160: automatically divided into 5 warps by the GPU (threads 0~31, 32~63, 64~95, 96~127, 128~159)
  • E.g. blockDim = 161: if blockDim is not a multiple of 32, the remaining threads occupy one more warp (warp 5 holds only thread 160)
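The warp-division rule on this slide is just ceiling division by 32; a minimal host-side sketch in C++ (not part of the original deck):

```cpp
#include <cassert>

// Warps per block: a block of blockDim threads is split into
// ceil(blockDim / 32) warps; a partial warp still occupies a full warp slot.
int warpsPerBlock(int blockDim) {
    const int WARP_SIZE = 32;
    return (blockDim + WARP_SIZE - 1) / WARP_SIZE;
}
```

With blockDim = 160 this yields 5 warps, and 161 threads spill into a 6th warp, matching the slide's examples.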

  9. Warp
  • SIMD: Single Instruction, Multiple Data
  • The threads in the same warp always execute the same instruction
  • Instructions are issued to the execution units warp by warp
  • (diagram: two warp schedulers, each interleaving instructions from different warps, e.g. warp 8 instruction 11, warp 2 instruction 42, warp 14 instruction 95, ...)

  10. Warp
  • Latency is caused by dependencies between neighboring instructions in the same warp
  • During the waiting time, instructions from other warps can be executed
  • Context switching between warps is free
  • Many resident warps can hide memory latency
  • (diagram: two warp schedulers interleaving instructions from different warps)
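The latency-hiding idea can be roughly quantified with Little's law: the in-flight work must cover latency times issue throughput. The sketch below is an illustrative model only (the function and all numbers are assumptions, not deck content):

```cpp
#include <cassert>

// Rough Little's-law estimate: to keep an execution pipe busy, in-flight
// work must cover latencyCycles x instrPerCycle instructions. Each resident
// warp contributes independentInstrPerWarp independent instructions.
int warpsNeededToHideLatency(int latencyCycles, int instrPerCycle,
                             int independentInstrPerWarp) {
    int inFlightNeeded = latencyCycles * instrPerCycle;
    return (inFlightNeeded + independentInstrPerWarp - 1) / independentInstrPerWarp;
}
```

E.g. with a 200-cycle memory latency and one instruction issued per cycle, a kernel with no instruction-level parallelism needs roughly 200 resident warps, but only 50 if each warp exposes 4 independent instructions.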

  11. Kepler Memory Hierarchy
  • Registers (spill to local memory)
  • Caches
    • Shared memory
    • L1 cache
    • L2 cache
    • Constant cache
    • Texture cache
  • Global memory

  12. Kepler/Fermi Memory Hierarchy
  • (diagram: each SM (SM-0 ... SM-N) has its own registers, L1 & shared memory, constant cache, and texture units; all SMs share the L2 cache and global memory; the hierarchy runs from fast per-SM storage at the top to slow global memory at the bottom)

  13. The Wrong Approach to Optimization
  • Trying every optimization method in the book
  • Optimization is endless: low efficiency

  14. General Optimization Strategies: Measurement
  • Find the limiting factor of kernel performance
    • Memory bandwidth bound (memory optimization)
    • Instruction throughput bound (instruction optimization)
    • Latency bound (configuration optimization)
  • Measure effective memory/instruction throughput: NVIDIA Visual Profiler

  15. Find Limiter
  • Memory bound: compare measured GB/s to the effective value; if much lower (<<), apply memory optimization; if close (~), the limiter is resolved
  • Instruction bound: compare measured inst/s to the effective value; if much lower (<<), apply instruction optimization; if close (~), resolved
  • Latency bound: apply configuration optimization
  • Repeat until resolved: done!
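The decision flow of slides 14-15 can be sketched as a small classifier. The 60% threshold for "much lower than the effective peak" is an arbitrary illustrative choice, not a number from the deck:

```cpp
#include <cassert>
#include <string>

// Sketch of the limiter-finding flow: compare measured throughput against
// the effective peak; "much lower" here means below 60% of peak (assumed).
std::string findLimiter(double memGBs, double peakMemGBs,
                        double instGips, double peakInstGips) {
    bool memNearPeak  = memGBs  >= 0.6 * peakMemGBs;
    bool instNearPeak = instGips >= 0.6 * peakInstGips;
    if (memNearPeak)  return "memory bound: memory optimization";
    if (instNearPeak) return "instruction bound: instruction optimization";
    return "latency bound: configuration optimization";
}
```

A kernel far from both peaks is treated as latency bound, mirroring the flowchart.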

  16. Optimization Overview • GPU architecture • Kernel optimization • Memory optimization • Latency optimization • Instruction optimization • CPU-GPU interaction optimization • Overlapped execution using streams

  17. Memory Optimization
  • Apply when the code is memory bound and the effective memory throughput is much lower than the peak
  • Purpose: access only the data that are absolutely necessary
  • Major techniques
    • Improve the access pattern to reduce wasted transactions
    • Reduce redundant accesses: read-only cache, shared memory

  18. Reduce Wasted Transactions
  • Memory accesses are issued per warp
  • Memory is accessed in discrete chunks
    • In Kepler, L1 is reserved for register spills and stack data
    • Loads go directly to L2 (the L1 line is invalidated); on an L2 miss, to DRAM
  • Memory moves in 32 B segments (same as for writes)
  • If a warp cannot use all of the data in the segments it touches, the rest of the memory transaction is wasted

  19. Kepler/Fermi Memory Hierarchy
  • (diagram repeated from slide 12: per-SM registers, L1 & shared memory, constant and texture caches; shared L2 cache and global memory)

  20. Reduce Wasted Transactions
  • Scenario: warp requests 32 aligned, consecutive 4-byte words
  • Addresses fall within 4 segments
    • No replays
    • Bus utilization: 100%
    • Warp needs 128 bytes; 128 bytes move across the bus on a miss
  • (diagram: addresses from a warp mapped onto memory addresses 0 through 448)

  21. Reduce Wasted Transactions
  • Scenario: warp requests 32 aligned, permuted 4-byte words
  • Addresses fall within 4 segments
    • No replays
    • Bus utilization: 100%
    • Warp needs 128 bytes; 128 bytes move across the bus on a miss
  • (diagram: permuted addresses from a warp within memory addresses 0 through 448)

  22. Reduce Wasted Transactions
  • Scenario: warp requests 32 consecutive 4-byte words, offset from perfect alignment
  • Addresses fall within at most 5 segments
    • 1 replay (2 transactions)
    • Bus utilization: at least 80%
    • Warp needs 128 bytes; at most 160 bytes move across the bus
    • Some misaligned patterns still fall within 4 segments, giving 100% utilization
  • (diagram: offset addresses from a warp within memory addresses 0 through 448)

  23. Reduce Wasted Transactions
  • Scenario: all threads in a warp request the same 4-byte word
  • Addresses fall within a single segment
    • No replays
    • Bus utilization: 12.5%
    • Warp needs 4 bytes; 32 bytes move across the bus on a miss
  • (diagram: a single shared address from the warp within memory addresses 0 through 448)

  24. Reduce Wasted Transactions
  • Scenario: warp requests 32 scattered 4-byte words
  • Addresses fall within N segments
    • (N-1) replays (N transactions); could be fewer if some segments can be combined into a single transaction
    • Bus utilization: 128 / (N*32) (4x higher than caching loads)
    • Warp needs 128 bytes; N*32 bytes move across the bus on a miss
  • (diagram: scattered addresses from a warp within memory addresses 0 through 448)
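All the utilization figures in scenarios 20-24 follow from one rule: bytes the warp needs divided by the 32 B segments actually moved (segment size from slide 18). A host-side C++ sketch of that bookkeeping (illustrative, not deck code):

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <vector>

// Bus utilization for one warp-wide load: memory moves in 32 B segments,
// so utilization = bytes the warp actually needs / (segments touched x 32 B).
double busUtilization(const std::vector<uint64_t>& byteAddrs, int wordSize) {
    std::set<uint64_t> segments, words;
    for (uint64_t a : byteAddrs) {
        segments.insert(a / 32);        // 32 B segment index
        words.insert(a / wordSize);     // distinct words the warp needs
    }
    double moved  = segments.size() * 32.0;
    double needed = words.size() * (double)wordSize;
    return needed / moved;
}
```

Feeding it 32 aligned consecutive 4-byte addresses gives 1.0 (scenario 20), a 4-byte offset gives 0.8 (scenario 22), and a broadcast of one word gives 0.125 (scenario 23).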

  25. Read-Only Cache
  • An alternative to L1 when accessing DRAM
    • Also known as the texture cache: all texture accesses use this cache
    • CC 3.5 and higher can also route global memory reads through it
  • Should not be used if a kernel reads from and writes to the same addresses
  • Compared to L1:
    • Generally better for scattered reads
    • Caches at 32 B granularity (L1 caches at 128 B granularity)
    • Does not require replays for multiple transactions (L1 does)
    • Higher latency than L1 reads; also tends to increase register use

  26. Read-Only Cache
  • Annotate eligible kernel parameters with const ... __restrict__
  • The compiler will automatically map such loads to the read-only data cache path

  __global__ void saxpy(float x, float y,
                        const float * __restrict__ input,
                        float * output)
  {
      size_t offset = threadIdx.x + (blockIdx.x * blockDim.x);
      // Compiler will automatically use the read-only (texture)
      // path for "input"
      output[offset] = (input[offset] * x) + y;
  }

  27. Shared Memory
  • Low latency: a few cycles
  • High throughput
  • Main uses
    • Communication between threads of the same block
    • User-managed cache to reduce redundant global memory accesses
    • Avoiding non-coalesced access

  28. Shared Memory Example: Matrix Multiplication
  • C = A x B
  • Every thread corresponds to one entry in C (diagram: matrices A, B, and C)

  29. Naive Kernel

  __global__ void simpleMultiply(float* a, float* b, float* c, int N)
  {
      int row = threadIdx.x + blockIdx.x * blockDim.x;
      int col = threadIdx.y + blockIdx.y * blockDim.y;
      float sum = 0.0f;
      for (int i = 0; i < N; i++) {
          sum += a[row*N+i] * b[i*N+col];
      }
      c[row*N+col] = sum;
  }

  Every thread corresponds to one entry in C.

  30. Blocked Matrix Multiplication
  • C = A x B
  • Data reuse in the blocked version (diagram: tiles of A and B combine to produce a tile of C)

  31. Blocked and Cached Kernel

  __global__ void coalescedMultiply(float* a, float* b, float* c, int N)
  {
      __shared__ float aTile[TILE_DIM][TILE_DIM];
      __shared__ float bTile[TILE_DIM][TILE_DIM];
      int row = blockIdx.y * blockDim.y + threadIdx.y;
      int col = blockIdx.x * blockDim.x + threadIdx.x;
      float sum = 0.0f;
      for (int k = 0; k < N; k += TILE_DIM) {
          aTile[threadIdx.y][threadIdx.x] = a[row*N + threadIdx.x + k];
          bTile[threadIdx.y][threadIdx.x] = b[(threadIdx.y + k)*N + col];
          __syncthreads();
          for (int i = 0; i < TILE_DIM; i++) {
              sum += aTile[threadIdx.y][i] * bTile[i][threadIdx.x];
          }
          __syncthreads();  // don't overwrite tiles other threads still use
      }
      c[row*N+col] = sum;
  }
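The tiling idea can be sanity-checked on the host. This plain C++ analogue (illustrative, with a hypothetical TILE_DIM of 2) performs the same tile-strip accumulation as the kernel, reusing each TILE_DIM-wide strip of A and B:

```cpp
#include <cassert>
#include <vector>

const int TILE_DIM = 2;  // assumed tile width for illustration

// Host-side analogue of the tiled kernel: C = A x B for N x N matrices,
// accumulating one TILE_DIM-wide strip of A and B per outer step, so each
// loaded strip is reused across the inner loop (as the shared-memory
// tiles are on the GPU).
std::vector<float> tiledMultiply(const std::vector<float>& a,
                                 const std::vector<float>& b, int N) {
    std::vector<float> c(N * N, 0.0f);
    for (int row = 0; row < N; ++row)
        for (int col = 0; col < N; ++col)
            for (int k = 0; k < N; k += TILE_DIM)   // one "tile" per step
                for (int i = 0; i < TILE_DIM; ++i)
                    c[row * N + col] += a[row * N + k + i] * b[(k + i) * N + col];
    return c;
}
```

On the GPU, the same reuse happens across threads: each element of a tile is loaded from global memory once but read TILE_DIM times from shared memory.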

  32. Optimization Overview • GPU architecture • Kernel optimization • Memory optimization • Latency optimization • Instruction optimization • CPU-GPU interaction optimization • Overlapped execution using streams

  33. Latency Optimization
  • Apply when the code is latency bound
    • Both the memory and instruction throughputs are far from the peak
  • Latency hiding: switching between warps
    • A warp stalls when one of its operands isn't ready
  • Purpose: have enough warps to hide latency
  • Major technique: increase the number of active warps

  34. Enough Blocks and Block Size
  • # of blocks >> # of SMs: > 100 blocks to scale well to future devices
  • Block size: minimum 64 threads; 128 or 256 generally works well, but use whatever is best for your app
  • Depends on the problem, so do experiments!

  35. Occupancy & Active Warps
  • Occupancy: ratio of active warps per SM to the maximum number of allowed warps
    • Maximum number: 48 on Fermi, 64 on Kepler
  • Occupancy needs to be high enough to hide latency
  • Occupancy is limited by resource usage

  36. Dynamic Partitioning of SM Resources
  • Shared memory is partitioned among blocks
  • Registers are partitioned among threads: <= 255 per thread
  • Thread block slots: <= 16
  • Thread slots: <= 2048
  • Any of these can limit how many threads can be resident at the same time on an SM
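These partitioning limits combine into a back-of-the-envelope residency calculation. The GK110-like constants below (64K registers, 48 KB of shared memory per SM) are assumptions for illustration; the real Occupancy Calculator on the next slide accounts for more factors such as allocation granularity:

```cpp
#include <algorithm>
#include <cassert>

// Sketch of SM resource partitioning: resident blocks are bounded by
// registers (65536 per SM assumed), shared memory (49152 B assumed),
// thread slots (2048), and block slots (16). Returns resident threads.
int residentThreads(int blockSize, int regsPerThread, int smemPerBlock) {
    int byRegs    = 65536 / (regsPerThread * blockSize);
    int bySmem    = smemPerBlock ? 49152 / smemPerBlock : 16;
    int byThreads = 2048 / blockSize;
    int blocks = std::min(std::min(byRegs, bySmem), std::min(byThreads, 16));
    return blocks * blockSize;
}
```

E.g. 256-thread blocks at 32 registers per thread fill all 2048 thread slots, but doubling register use to 64 halves residency, and 24 KB of shared memory per block caps the SM at two blocks.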

  37. Occupancy Calculator http://developer.download.nvidia.com/compute/cuda/CUDA_Occupancy_calculator.xls

  38. Occupancy Optimizations
  • Know the current occupancy
    • Visual Profiler
    • --ptxas-options=-v: outputs resource usage info; input it to the Occupancy Calculator
  • Adjust resource usage to increase occupancy
    • Change the block size
    • Limit register usage
      • Compiler option -maxrregcount=n: per file
      • __launch_bounds__: per kernel
    • Allocate shared memory dynamically

  39. Optimization Overview • GPU architecture • Kernel optimization • Memory optimization • Latency optimization • Instruction optimization • CPU-GPU interaction optimization • Overlapped execution using streams

  40. Instruction Optimization
  • Apply when the code is instruction bound
    • A compute-intensive algorithm can easily become memory bound if one is not careful
    • Typically, worry about instruction optimization after memory and execution-configuration optimizations
  • Purpose: reduce the instruction count
    • Use fewer instructions to get the same job done
  • Major techniques
    • Use high-throughput instructions
    • Reduce wasted instructions: branch divergence, etc.

  41. Reduce Instruction Count
  • Use float if precision allows
    • Add "f" to floating-point literals (e.g. 1.0f), because the default is double
  • Fast math functions
    • Two types of runtime math library functions
      • func(): slower but higher accuracy (5 ulp or less)
      • __func(): faster but lower accuracy (see the programming guide for full details)
    • -use_fast_math: forces every func() to __func()
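The effect of a missing "f" suffix is visible even on the host: a double literal promotes the whole expression to double precision. A minimal C++ check (bytesOf is a hypothetical helper for this demonstration; on the GPU the double path additionally costs extra instructions and registers):

```cpp
#include <cassert>

// Report the size in bytes of an expression's deduced type.
// float * double-literal promotes to double (8 bytes);
// float * float-literal stays single precision (4 bytes).
template <typename T> int bytesOf(T) { return sizeof(T); }
```

So `x * 1.0` with a float `x` silently computes in double, while `x * 1.0f` stays in float.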

  42. Control Flow
  • Divergent branches: threads within a single warp take different paths
    • Example with divergence: if (threadIdx.x > 2) {...} else {...}
    • Branch granularity < warp size
    • Different execution paths within a warp are serialized
  • Different warps can execute different code with no impact on performance
  • Avoid diverging within a warp
    • Example without divergence: if (threadIdx.x / WARP_SIZE > 2) {...} else {...}
    • Branch granularity is a whole multiple of warp size
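The divergence rule can be checked by counting the distinct branch outcomes inside one 32-thread warp; more than one outcome means the warp serializes both sides. A host-side sketch (pathsInWarp is a hypothetical helper, not a CUDA API):

```cpp
#include <cassert>
#include <set>

// Count how many distinct outcomes a branch predicate produces across the
// 32 lanes of a given warp. 2 means the warp diverges and serializes.
template <typename Pred>
int pathsInWarp(int warpId, Pred p) {
    std::set<bool> outcomes;
    for (int lane = 0; lane < 32; ++lane)
        outcomes.insert(p(warpId * 32 + lane));
    return (int)outcomes.size();
}
```

The predicate `t > 2` splits warp 0 (lanes 0..2 vs. 3..31), while `t / 32 > 2` is uniform within every warp, matching the slide's two examples.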

  43. Kernel Optimization Workflow
  • Find the limiter
    • Memory bound: compare measured GB/s to peak; if much lower (<<), apply memory optimization; if close (~), done
    • Instruction bound: compare measured inst/s to peak; if much lower (<<), apply instruction optimization; if close (~), done
    • Latency bound: apply configuration optimization

  44. Optimization Overview • GPU architecture • Kernel optimization • Memory optimization • Latency optimization • Instruction optimization • CPU-GPU interaction optimization • Overlapped execution using streams

  45. Minimizing CPU-GPU Data Transfer
  • Host <-> device data transfer has much lower bandwidth than global memory access
    • 16 GB/s (PCIe x16 Gen3) vs. 250 GB/s & 3.95 Tinst/s (GK110)
  • Minimize transfers
    • Intermediate data can be allocated, operated on, and de-allocated directly on the GPU
    • Sometimes it is even better to recompute on the GPU
  • Group transfers
    • One large transfer is much better than many small ones
  • Overlap memory transfers with computation
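The "group transfers" advice follows from a simple cost model: every copy pays a fixed setup overhead plus size over bandwidth. The constants below (10 us per-transfer overhead, 16 GB/s PCIe) are illustrative assumptions, not measurements:

```cpp
#include <cassert>

// Model the total time (in microseconds) to move `bytes` as nTransfers
// equal copies: each copy pays a fixed setup cost plus size / bandwidth.
double transferMicros(double bytes, int nTransfers) {
    const double overheadUs = 10.0;        // assumed per-transfer setup cost
    const double bytesPerUs = 16e9 / 1e6;  // 16 GB/s in bytes per microsecond
    return nTransfers * (overheadUs + (bytes / nTransfers) / bytesPerUs);
}
```

Under this model, moving 4 MB as one copy is far cheaper than as 1024 copies of 4 KB, because the fixed overhead is paid 1024 times in the latter case.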

  46. Streams and Async API
  • Default API:
    • Kernel launches are asynchronous with the CPU
    • Memcopies (D2H, H2D) block the CPU thread
    • CUDA calls are serialized by the driver
  • Streams and async functions provide:
    • Memcopies (D2H, H2D) asynchronous with the CPU
    • Ability to concurrently execute a kernel and a memcopy
    • Concurrent kernels on Fermi and later
  • Stream = sequence of operations that execute in issue order on the GPU
    • Operations from different streams can be interleaved
    • A kernel and a memcopy from different streams can be overlapped

  47. Pinned (Non-Pageable) Memory
  • Pinned memory enables:
    • Memcopies asynchronous with both CPU & GPU
  • Usage
    • cudaHostAlloc / cudaFreeHost instead of malloc / free
  • Notes:
    • Pinned memory is essentially removed from the OS's pageable pool
    • cudaHostAlloc is typically very expensive

  48. Overlap Kernel and Memory Copy
  • Requirements:
    • D2H or H2D memcopy from pinned memory
    • Device with compute capability >= 1.1 (G84 and later)
    • Kernel and memcopy in different, non-0 streams
  • Code:

  cudaStream_t stream1, stream2;
  cudaStreamCreate(&stream1);
  cudaStreamCreate(&stream2);

  cudaMemcpyAsync(dst, src, size, dir, stream1);  // potentially
  kernel<<<grid, block, 0, stream2>>>(…);         // overlapped
