NVIDIA Kepler Architecture
Paul Bissonnette, Rizwan Mohiuddin, Ajith Herga
Compute Unified Device Architecture
• Hybrid CPU/GPU code
• Low-latency code runs on the CPU
  • Result is immediately available
• High-latency, high-throughput code runs on the GPU
  • Result is returned over the bus
• The GPU has many more cores than the CPU
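A minimal sketch of this hybrid model in CUDA C (kernel name, sizes, and the scale factor are illustrative, not from the slides): the sequential setup stays on the CPU, while the data-parallel work is offloaded to the GPU.

    // Illustrative sketch; compile with: nvcc scale.cu -o scale
    #include <cuda_runtime.h>

    // High-throughput, data-parallel work runs on the GPU: one thread per element.
    __global__ void scaleArray(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float *d_data;
        cudaMalloc(&d_data, n * sizeof(float));
        cudaMemset(d_data, 0, n * sizeof(float));     // low-latency setup stays on the CPU
        scaleArray<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
        cudaDeviceSynchronize();                      // results come back over the bus
        cudaFree(d_data);
        return 0;
    }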
CPU/GPU Code
[Compilation flow: a CUDA program is split into GPU routines and CPU routines; NVCC compiles the GPU routines into a GPU object, GCC compiles the CPU routines into a CPU object, and the linker combines both into a single CUDA binary.]
Execution Model (Overview)
[Diagram: the CPU makes RPC-style calls into the GPU; each call returns an intermediate result to the CPU, which launches further GPU work until the final result is produced.]
Execution Model (GPU)
[Diagram: a thread grid contains multiple thread blocks, and each thread block contains multiple threads; a thread block runs on one streaming multiprocessor, while the grid spans the whole graphics card.]
Execution Model (GPU)
• Each procedure runs as a "kernel"
• Each instance of a kernel runs as a thread within a thread block
• A thread block executes on a single streaming multiprocessor (SM)
• All instances of a particular kernel form a thread grid
• A thread grid executes on a single graphics card across several streaming multiprocessors
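A sketch of how a launch configuration maps onto this hierarchy (kernel name and the grid/block sizes are arbitrary choices for illustration):

    #include <cuda_runtime.h>

    __global__ void identify(int *out) {
        // blockIdx locates this thread's block within the grid;
        // threadIdx locates the thread within its block.
        int id = blockIdx.x * blockDim.x + threadIdx.x;
        out[id] = id;
    }

    int main() {
        int *d_out;
        cudaMalloc(&d_out, 128 * 64 * sizeof(int));
        dim3 grid(128);   // 128 thread blocks, scheduled across the card's SMs
        dim3 block(64);   // 64 threads per block, all resident on one SM
        identify<<<grid, block>>>(d_out);
        cudaDeviceSynchronize();
        cudaFree(d_out);
        return 0;
    }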
Thread Cooperation
• Multiple levels of data sharing
• A thread block is similar to an MPI group
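A sketch of block-level cooperation (kernel name and the 256-thread block size are illustrative): the threads of a block stage data in shared memory and synchronize at a barrier, much as an MPI group would.

    // Per-block sum; launch as blockSum<<<numBlocks, 256>>>(d_in, d_out).
    __global__ void blockSum(const float *in, float *out) {
        __shared__ float tile[256];          // visible to every thread in the block
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        tile[threadIdx.x] = in[i];
        __syncthreads();                     // block-wide barrier

        // Tree reduction over the shared tile.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (threadIdx.x < stride)
                tile[threadIdx.x] += tile[threadIdx.x + stride];
            __syncthreads();
        }
        if (threadIdx.x == 0) out[blockIdx.x] = tile[0];
    }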
GPU Execution of Kernels
• In Kepler, threads can spawn new thread blocks/grids
• Less time spent on the CPU
• More natural recursion
• A parent kernel's completion depends on its child grids
CUDA Languages
• CUDA C/C++ and CUDA Fortran
• Scientific computing
• Highly parallel applications
• NVIDIA-specific (unlike OpenCL)
• Specialized for specific tasks:
  • Highly optimized single-precision floating point
  • Specialized data-sharing instructions within thread blocks
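One concrete example of those data-sharing instructions is the SHFL (warp shuffle) instruction Kepler introduced. A sketch of a warp-level sum built on it, using the modern CUDA spelling __shfl_down_sync (Kepler-era code wrote __shfl_down):

    // Warp-level sum: each step pulls a value from the lane `offset` positions
    // away, exchanging registers directly without shared or global memory.
    __device__ float warpSum(float val) {
        for (int offset = 16; offset > 0; offset >>= 1)
            val += __shfl_down_sync(0xffffffff, val, offset);
        return val;   // lane 0 ends up holding the sum of the full warp
    }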
Hyper-Q
Without Hyper-Q:
• Only one hardware work queue is available, so the GPU can receive work from only one source at a time.
• It is difficult for a single CPU core to keep the GPU busy.
With Hyper-Q:
• Allows connections from multiple CUDA streams, Message Passing Interface (MPI) processes, or multiple threads of the same process.
• 32 concurrent work queues: the GPU can receive work from 32 processes or cores at the same time.
• Up to a 3x performance increase over Fermi.
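A sketch of feeding the GPU from multiple CUDA streams so that Hyper-Q can map them onto separate hardware queues (kernel, stream count, and sizes are illustrative):

    #include <cuda_runtime.h>
    #include <math.h>

    __global__ void work(float *buf, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] = sqrtf((float)i);
    }

    int main() {
        const int kStreams = 8;   // Hyper-Q supports up to 32 hardware queues
        cudaStream_t streams[kStreams];
        float *bufs[kStreams];
        for (int i = 0; i < kStreams; ++i) {
            cudaStreamCreate(&streams[i]);
            cudaMalloc(&bufs[i], 1024 * sizeof(float));
            // Each launch enters its own stream; with Hyper-Q the streams no
            // longer serialize through a single work queue.
            work<<<4, 256, 0, streams[i]>>>(bufs[i], 1024);
        }
        cudaDeviceSynchronize();
        for (int i = 0; i < kStreams; ++i) {
            cudaFree(bufs[i]);
            cudaStreamDestroy(streams[i]);
        }
        return 0;
    }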
Dynamic Parallelism
Without dynamic parallelism:
• Data travels back and forth between the CPU and GPU many times.
• This is because the GPU cannot create more work for itself based on intermediate data.
With dynamic parallelism:
• The GPU can generate work for itself based on intermediate results, without CPU involvement.
• Permits dynamic run-time decisions.
• Leaves the CPU free to do other work and conserves power.
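A minimal dynamic-parallelism sketch (hypothetical kernel; requires compute capability 3.5+ and compiling with nvcc -arch=sm_35 -rdc=true -lcudadevrt): the kernel makes a run-time decision on the device about whether to launch more work, with no CPU round trip.

    #include <cstdio>

    __global__ void step(int depth) {
        printf("grid at depth %d\n", depth);
        // Run-time decision made on the GPU: spawn a child grid if more work
        // is needed. The parent grid does not complete until its children have.
        if (depth < 3 && threadIdx.x == 0)
            step<<<1, 1>>>(depth + 1);
    }

    int main() {
        step<<<1, 1>>>(0);        // the CPU only launches the first grid
        cudaDeviceSynchronize();
        return 0;
    }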
Application Example: Quicksort
[Diagram: the CPU launches the initial quicksort kernel; on the GPU, computation streams spawn further streams for each partition.]
CPU-GPU Stack Exchange
[Flow: a loop runs on the CPU, driven by intermediate results; the CPU checks whether the GPU has returned any more intermediate results and, for each one, spawns a stream to be computed on the GPU.]
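For contrast with the CPU-driven loop above, a sketch of quicksort with dynamic parallelism (partition scheme and launch shape are illustrative; same sm_35/-rdc=true build flags as the previous sketch): each kernel partitions its range on the device and spawns child grids for the two halves, so the stack exchange with the CPU disappears.

    // Lomuto-style partition, run by a single device thread.
    __device__ int partition(int *d, int lo, int hi) {
        int pivot = d[hi], i = lo - 1;
        for (int j = lo; j < hi; ++j)
            if (d[j] <= pivot) { ++i; int t = d[i]; d[i] = d[j]; d[j] = t; }
        int t = d[i + 1]; d[i + 1] = d[hi]; d[hi] = t;
        return i + 1;
    }

    // Launched once from the CPU as quicksort<<<1, 1>>>(data, 0, n - 1);
    // thereafter the GPU generates all further work itself.
    __global__ void quicksort(int *data, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(data, lo, hi);
        // Child grids go into separate non-blocking device streams so the
        // two halves can be sorted concurrently.
        cudaStream_t s1, s2;
        cudaStreamCreateWithFlags(&s1, cudaStreamNonBlocking);
        cudaStreamCreateWithFlags(&s2, cudaStreamNonBlocking);
        quicksort<<<1, 1, 0, s1>>>(data, lo, p - 1);
        quicksort<<<1, 1, 0, s2>>>(data, p + 1, hi);
        cudaStreamDestroy(s1);
        cudaStreamDestroy(s2);
    }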
References
• NVIDIA whitepapers:
  • http://www.geforce.com/Active/en_US/en_US/pdf/GeForce-GTX-680-Whitepaper-FINAL.pdf
  • http://developer.download.nvidia.com/assets/cuda/files/CUDADownloads/TechBrief_Dynamic_Parallelism_in_CUDA.pdf
• NVIDIA keynote presentation: http://www.youtube.com/watch?v=TxtZwW2Lf-w
• Georgia Tech presentation: http://www.cc.gatech.edu/~vetter/keeneland/tutorial-2011-04-14/02-cuda-overview.pdf
• AnandTech Tesla K20 review: http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last/4
• GPU Science quicksort example: http://gpuscience.com/code-examples/tesla-k20-gpu-quicksort-with-dynamic-parallelism