Learn about GPU architecture, software representation, optimizations, and programming models for harnessing massively parallel processors. Explore memory, computation, and data transfer optimizations to maximize GPU performance.
EECE571R -- Harnessing Massively Parallel Processors
http://www.ece.ubc.ca/~matei/EECE571/
Lecture 1: Introduction to GPU Programming
By Samer Al-Kiswany
Acknowledgement: some slides borrowed from presentations by Kayvon Fatahalian and Mark Harris
Outline Hardware Software Programming Model Optimizations
GPU Architecture
[Diagram: a host machine connected to a GPU containing multiprocessors 1..N; each multiprocessor holds processors 1..M with per-processor registers, a shared memory, and an instruction unit; the GPU also exposes constant, texture, and global memory.]
GPU Architecture
• SIMD architecture.
• Four memories:
• Device (a.k.a. global) memory: slow – 400-600 cycles access latency; large – 256MB - 1GB
• Shared memory: fast – 4 cycles access latency; small – 16KB
• Texture memory – read only
• Constant memory – read only
GPU Architecture – Program Flow
1. Preprocessing
2. Data transfer in (host to GPU)
3. GPU processing
4. Data transfer out (GPU to host)
5. Postprocessing
TTotal = TPreprocessing + TDataHtoG + TProcessing + TDataGtoH + TPostProc
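A minimal host-side sketch of this five-stage flow, assuming a hypothetical kernel `process` (error checking omitted for brevity):

```cuda
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void process(float *data) { /* ... stage 3: GPU processing ... */ }

int main() {
    const int N = 1 << 20;
    size_t bytes = N * sizeof(float);

    float *h_data = (float *)malloc(bytes);          // 1. preprocessing on host
    // ... fill h_data ...

    float *d_data;
    cudaMalloc((void **)&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes,
               cudaMemcpyHostToDevice);              // 2. data transfer in

    process<<<N / 256, 256>>>(d_data);               // 3. GPU processing

    cudaMemcpy(h_data, d_data, bytes,
               cudaMemcpyDeviceToHost);              // 4. data transfer out

    // ... 5. postprocessing on host ...
    cudaFree(d_data);
    free(h_data);
    return 0;
}
```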
Outline Hardware Software Programming Model Optimizations
GPU Programming Model Programming Model: Software representation of the Hardware
GPU Programming Model
[Diagram: a grid of blocks; each block is a group of threads.]
Kernel: a function executed on the grid.
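A minimal sketch of a kernel: the same function runs in every thread of the grid, and each thread uses its block and thread indices to pick its element (names are illustrative):

```cuda
// Each thread computes one element of the output.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the last block
        c[i] = a[i] + b[i];
}

// Launch over a grid of blocks, 256 threads per block:
// add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```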
GPU Programming Model
• In reality, the scheduling granularity is a warp (32 threads); a warp takes 4 cycles to complete a single instruction.
• Threads in a block can share state through shared memory.
• Threads in a block can synchronize.
• Global atomic operations.
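These three mechanisms (shared state, block-level synchronization, global atomics) can be sketched together in one illustrative kernel, assuming 256-thread blocks:

```cuda
// Block-level sum: threads share partial results in shared memory,
// synchronize between reduction steps, then one thread per block
// commits the block's result with a global atomic.
__global__ void blockSum(const int *in, int *total, int n) {
    __shared__ int partial[256];            // state shared within the block
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    partial[tid] = (i < n) ? in[i] : 0;
    __syncthreads();                        // all threads in the block wait here

    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            partial[tid] += partial[tid + s];
        __syncthreads();                    // sync before the next step
    }
    if (tid == 0)
        atomicAdd(total, partial[0]);       // global atomic operation
}
```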
Outline Hardware Software Programming Model Optimizations
Optimizations Can be roughly categorized into the following categories: Memory Related Computation Related Data Transfer Related
Optimizations - Memory
• Use shared memory.
• Use texture (1D, 2D, or 3D) and constant memory.
• Avoid shared memory bank conflicts.
• Coalesced memory access (one approach: padding).
Optimizations - Memory
[Diagram: shared memory organized as interleaved banks 0..15, each 4 bytes wide; consecutive 4-byte words map to consecutive banks.]
Shared Memory Complications
• Shared memory is organized into 16 banks of 1KB each.
• Complication I: concurrent accesses to the same bank are serialized (bank conflict), slowing the access down. Tip: assign different threads to different banks.
• Complication II: banks are interleaved.
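The padding tip above can be sketched with a shared-memory tile transpose (illustrative; assumes a 16x16 thread block):

```cuda
// Reading a column of a 16x16 tile has stride 16 = number of banks,
// so all 16 threads of a half-warp hit the same bank. Padding each
// row by one element shifts every row to a different bank.
__global__ void transposeTile(const float *in, float *out) {
    __shared__ float tile[16][17];   // 17, not 16: padding avoids bank conflicts
    int x = threadIdx.x, y = threadIdx.y;
    tile[y][x] = in[y * 16 + x];     // coalesced global read
    __syncthreads();
    out[y * 16 + x] = tile[x][y];    // column read, now conflict-free
}
```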
Optimizations - Memory Global Memory Coalesced Access
Optimizations - Memory Global Memory Non-Coalesced Access
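The two access patterns can be sketched as follows (illustrative): consecutive threads touching consecutive addresses coalesce into one memory transaction, while strided access does not.

```cuda
__global__ void copy(const float *in, float *out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    // Coalesced: thread i reads address i; one transaction per half-warp.
    if (i < n)
        out[i] = in[i];

    // Non-coalesced (for comparison): thread i reads address i * stride;
    // each thread's access falls in a different memory segment.
    // if (i * stride < n) out[i] = in[i * stride];
}
```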
Optimizations Can be roughly categorized into the following categories: Memory Related Computation Related Data Transfer Related
Optimizations - Computation
• Use 1000s of threads to best use the GPU hardware.
• Use full warps (32 threads): make block sizes a multiple of 32.
• Reduce code branch divergence.
• Avoid synchronization.
• Loop unrolling (fewer instructions, more room for compiler optimizations).
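The loop-unrolling point can be sketched as follows (illustrative; `#pragma unroll` is a hint to nvcc):

```cuda
// Each thread processes 4 elements; the unroll hint removes the loop
// branch and exposes the 4 iterations to the compiler's scheduler.
__global__ void scale(float *data, int n) {
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * 4;
    #pragma unroll 4
    for (int k = 0; k < 4; k++) {
        int idx = base + k;
        if (idx < n)
            data[idx] *= 2.0f;
    }
}
```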
Optimizations Can be roughly categorized into the following categories: Memory Related Computation Related Data Transfer Related
Optimizations – Data Transfer
• Reduce the amount of data transferred between the host and the GPU.
• Hide transfer overhead by overlapping transfer and computation (asynchronous transfer).
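An illustrative sketch of the overlap using two CUDA streams (assumes a hypothetical `process` kernel; asynchronous copies require page-locked host memory, e.g. from `cudaMallocHost`):

```cuda
#include <cuda_runtime.h>

#define CHUNK (1 << 20)

__global__ void process(float *d) { /* ... GPU processing ... */ }

// h_buf and d_buf each hold 2 * CHUNK floats; h_buf is page-locked.
void pipelined(float *h_buf, float *d_buf) {
    cudaStream_t s[2];
    for (int i = 0; i < 2; i++)
        cudaStreamCreate(&s[i]);

    for (int i = 0; i < 2; i++) {
        float *h = h_buf + i * CHUNK, *d = d_buf + i * CHUNK;
        cudaMemcpyAsync(d, h, CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, s[i]);  // copy in stream i
        process<<<CHUNK / 256, 256, 0, s[i]>>>(d);      // overlaps the other
                                                        // stream's transfer
    }
    cudaDeviceSynchronize();
    for (int i = 0; i < 2; i++)
        cudaStreamDestroy(s[i]);
}
```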
Summary
• GPUs are highly parallel devices.
• Easy to program for (functionality). Hard to optimize for (performance).
• Optimization: many optimizations exist, but you often do not need them all (iterate between profiling and optimizing).
• Optimizations may bring hard tradeoffs (more computation vs. less memory, more computation vs. better memory access, etc.).