
ECE408 Fall 2013 Applied Parallel Programming Lecture 20 and 21: GPU System Architecture



Presentation Transcript


1. ECE408 Fall 2013, Applied Parallel Programming, Lecture 20 and 21: GPU System Architecture

2. Bandwidth – Gravity of Modern Computer Systems
• The bandwidth between key components ultimately dictates system performance
• This is especially true for massively parallel systems processing massive amounts of data
• Tricks such as buffering, reordering, and caching can temporarily defy the rules in some cases
• Ultimately, performance falls back to what the "speeds and feeds" dictate

3. PCIe PC Architecture
• PCIe forms the interconnect backbone
• The Northbridge and Southbridge are both PCIe switches
• Some Southbridge designs have a built-in PCI-PCIe bridge to support older PCI cards
• Some PCIe I/O cards are PCI cards with a PCI-PCIe bridge
• Source: Jon Stokes, PCI Express: An Overview, http://arstechnica.com/articles/paedia/hardware/pcie.ars

4. GeForce 7800 GTX Board Details
• 256MB / 256-bit DDR3 memory at 600 MHz, built from 8 pieces of 8Mx32
• 16x PCI-Express interface
• Board features: SLI connector, single-slot cooling, sVideo TV out, DVI x 2

5. PCIe Data Transfer Using DMA
• DMA (Direct Memory Access) is used to fully utilize the bandwidth of an I/O bus
• DMA uses physical addresses for source and destination
• Transfers the number of bytes requested by the OS
• Requires pinned memory
• [Figure: DMA transfer between main memory (DRAM) attached to the CPU and global memory on the GPU card (or other I/O cards)]

6. Pinned Memory
• DMA uses physical addresses
• The OS could accidentally page out the data that is being read or written by a DMA and page in another virtual page at the same location
• Pinned memory cannot be paged out
• If a source or destination of a cudaMemcpy() in host memory is not pinned, it must first be copied to pinned memory, adding extra overhead
• cudaMemcpy() is much faster when the host source or destination is pinned memory

7. Allocate/Free Pinned Memory (a.k.a. Page-Locked Memory)
• cudaHostAlloc() takes three parameters:
  – Address of the pointer to the allocated memory
  – Size of the allocated memory in bytes
  – Option flag – use cudaHostAllocDefault for now
• cudaFreeHost() takes one parameter:
  – Pointer to the memory to be freed
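A minimal sketch of these two calls (the buffer name and size below are arbitrary, not from the slides):

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        float *h_A = NULL;
        size_t bytes = 1024 * sizeof(float);

        // Allocate page-locked (pinned) host memory.
        cudaError_t err = cudaHostAlloc((void **)&h_A, bytes, cudaHostAllocDefault);
        if (err != cudaSuccess) {
            printf("cudaHostAlloc failed: %s\n", cudaGetErrorString(err));
            return 1;
        }

        // ... use h_A as the source or destination of cudaMemcpy / cudaMemcpyAsync ...

        // Pinned allocations are released with cudaFreeHost(), not free().
        cudaFreeHost(h_A);
        return 0;
    }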

8. Using Pinned Memory
• Use the allocated memory and its pointer the same way as memory returned by malloc()
• The only difference is that the allocated memory cannot be paged out by the OS
• cudaMemcpy() should be about 2X faster with pinned memory
• Pinned memory is a limited resource; over-subscribing it can have serious consequences
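One way to observe the roughly 2X difference claimed above is to time the same host-to-device copy from a malloc'd buffer and from a cudaHostAlloc'd buffer. The helper below is a sketch; its name and its use of CUDA events for timing are illustrative, not from the lecture:

    #include <cuda_runtime.h>

    // Returns the elapsed time in milliseconds for one host-to-device copy.
    float time_h2d_copy(const float *h_src, float *d_dst, size_t bytes) {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(d_dst, h_src, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return ms;
    }
    // Calling this once with a malloc'd h_src and once with a pinned h_src
    // typically shows the pinned copy completing in noticeably less time.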

9. Serialized Data Transfer and GPU Computation
• So far, the way we use cudaMemcpy serializes data transfer and GPU computation
• [Timeline: Transfer A, Transfer B, Vector Add, Transfer C. During each transfer only one PCIe direction is used and the GPU is idle; during the kernel the PCIe link is idle]

10. Overlapped (Pipelined) Timing
• Divide large vectors into segments
• Overlap the transfer and compute of adjacent segments
• [Timeline: while segment 1 is computed (Comp C.1 = A.1 + B.1), segment 2's inputs (Trans A.2, Trans B.2) are transferred; while C.1 is copied back (Trans C.1), segment 2 is computed and segment 3's inputs are transferred; and so on for later segments]

11. Using CUDA Streams and Asynchronous MemCpy
• CUDA supports parallel execution of kernels and cudaMemcpy with "streams"
• Each stream is a queue of operations (kernel launches and cudaMemcpy's)
• Operations (tasks) in different streams can proceed in parallel
• This is a form of "task parallelism"
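As a minimal, self-contained sketch of this API (the kernel, sizes, and names below are illustrative, not from the lecture; pinned host memory is used so the asynchronous copies can actually overlap with other work):

    #include <cuda_runtime.h>

    __global__ void scaleKernel(float *d_A, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d_A[i] *= 2.0f;            // placeholder computation
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h_A, *d_A;
        cudaHostAlloc((void **)&h_A, bytes, cudaHostAllocDefault); // pinned host buffer
        cudaMalloc((void **)&d_A, bytes);

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // All three operations are enqueued into the same stream and execute in order;
        // the fourth <<<>>> argument selects the stream for the kernel launch.
        cudaMemcpyAsync(d_A, h_A, bytes, cudaMemcpyHostToDevice, stream);
        scaleKernel<<<(n + 255) / 256, 256, 0, stream>>>(d_A, n);
        cudaMemcpyAsync(h_A, d_A, bytes, cudaMemcpyDeviceToHost, stream);

        cudaStreamSynchronize(stream);        // wait for this stream's queue to drain
        cudaStreamDestroy(stream);
        cudaFreeHost(h_A);
        cudaFree(d_A);
        return 0;
    }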

12. Streams
• Device requests made from the host code are put into a queue
• The queue is read and processed asynchronously by the driver and device
• The driver ensures that commands in the queue are processed in sequence: memory copies finish before the subsequent kernel launch, etc.
• [Figure: host thread issuing cudaMemcpy, kernel launch, and sync calls into a FIFO that is drained by the device driver]

13. Streams (cont.)
• To allow concurrent copying and kernel execution, you need to use multiple queues, called "streams"
• CUDA "events" allow the host thread to query and synchronize with the individual queues
• [Figure: host thread feeding two queues (Stream 1, Stream 2), with an event marker, processed by the device driver]
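A sketch of how such an event can be used for the device-side and host-side synchronization just described (the function name and parameters are illustrative; stream0 and stream1 stand for two previously created streams):

    #include <cuda_runtime.h>

    // Record an event after an async copy in stream0, make stream1 wait on it,
    // and let the host synchronize with it.
    void copy_and_synchronize(float *d_A, const float *h_A, size_t bytes,
                              cudaStream_t stream0, cudaStream_t stream1) {
        cudaEvent_t copyDone;
        cudaEventCreate(&copyDone);

        cudaMemcpyAsync(d_A, h_A, bytes, cudaMemcpyHostToDevice, stream0);
        cudaEventRecord(copyDone, stream0);     // marks the point after the copy in stream0

        // Device-side: work enqueued in stream1 after this call waits for copyDone.
        cudaStreamWaitEvent(stream1, copyDone, 0);

        // Host-side: poll with cudaEventQuery(copyDone), or block as done here.
        cudaEventSynchronize(copyDone);

        cudaEventDestroy(copyDone);
    }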

14. Conceptual View of Streams
• [Figure: two streams of operations (kernels, memcpys). Stream 0 holds MemCpy A.1, MemCpy B.1, Kernel 1, MemCpy C.1; Stream 1 holds MemCpy A.2, MemCpy B.2, Kernel 2, MemCpy C.2. The operations feed a Copy Engine (PCIe up/down) and a Kernel Engine]

15. A Simple Multi-Stream Host Code

    cudaStream_t stream0, stream1;
    cudaStreamCreate(&stream0);
    cudaStreamCreate(&stream1);

    float *d_A0, *d_B0, *d_C0; // device memory for stream 0
    float *d_A1, *d_B1, *d_C1; // device memory for stream 1
    // cudaMalloc calls for d_A0, d_B0, d_C0, d_A1, d_B1, d_C1 go here

    for (int i = 0; i < n; i += SegSize * 2) {
        cudaMemcpyAsync(d_A0, h_A+i, SegSize*sizeof(float), .., stream0);
        cudaMemcpyAsync(d_B0, h_B+i, SegSize*sizeof(float), .., stream0);
        vecAdd<<<SegSize/256, 256, 0, stream0>>>(d_A0, d_B0, …);
        cudaMemcpyAsync(h_C+i, d_C0, SegSize*sizeof(float), .., stream0);

16. A Simple Multi-Stream Host Code (Cont.)

    for (int i = 0; i < n; i += SegSize * 2) {
        cudaMemcpyAsync(d_A0, h_A+i, SegSize*sizeof(float), .., stream0);
        cudaMemcpyAsync(d_B0, h_B+i, SegSize*sizeof(float), .., stream0);
        vecAdd<<<SegSize/256, 256, 0, stream0>>>(d_A0, d_B0, …);
        cudaMemcpyAsync(h_C+i, d_C0, SegSize*sizeof(float), .., stream0);
        cudaMemcpyAsync(d_A1, h_A+i+SegSize, SegSize*sizeof(float), .., stream1);
        cudaMemcpyAsync(d_B1, h_B+i+SegSize, SegSize*sizeof(float), .., stream1);
        vecAdd<<<SegSize/256, 256, 0, stream1>>>(d_A1, d_B1, …);
        cudaMemcpyAsync(h_C+i+SegSize, d_C1, SegSize*sizeof(float), .., stream1);
    }

17. A View Closer to Reality
• [Figure: the Copy Engine queue holds MemCpy A.1, MemCpy B.1, MemCpy C.1, MemCpy A.2, MemCpy B.2, MemCpy C.2 in issue order (PCIe up/down), while the Kernel Engine queue holds Kernel 1 and Kernel 2; Stream 0 and Stream 1 operations are multiplexed into these two engine queues]

18. Not Quite the Overlap We Want
• C.1 blocks A.2 and B.2 in the copy engine queue
• [Timeline: Trans A.1, Trans B.1, then Comp C.1 = A.1 + B.1; Trans A.2 and Trans B.2 cannot start until Trans C.1 finishes, so Comp C.2 = A.2 + B.2 is delayed instead of overlapping with Trans C.1]

19. A Better Multi-Stream Host Code (Cont.)

    for (int i = 0; i < n; i += SegSize * 2) {
        cudaMemcpyAsync(d_A0, h_A+i, SegSize*sizeof(float), .., stream0);
        cudaMemcpyAsync(d_B0, h_B+i, SegSize*sizeof(float), .., stream0);
        cudaMemcpyAsync(d_A1, h_A+i+SegSize, SegSize*sizeof(float), .., stream1);
        cudaMemcpyAsync(d_B1, h_B+i+SegSize, SegSize*sizeof(float), .., stream1);
        vecAdd<<<SegSize/256, 256, 0, stream0>>>(d_A0, d_B0, …);
        vecAdd<<<SegSize/256, 256, 0, stream1>>>(d_A1, d_B1, …);
        cudaMemcpyAsync(h_C+i, d_C0, SegSize*sizeof(float), .., stream0);
        cudaMemcpyAsync(h_C+i+SegSize, d_C1, SegSize*sizeof(float), .., stream1);
    }
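The slides stop at the loop; in a complete host program an epilogue along these lines (a sketch, not from the lecture) would follow to wait for the queued work and release the streams:

    // Wait for every queued copy and kernel in both streams before using h_C.
    cudaStreamSynchronize(stream0);
    cudaStreamSynchronize(stream1);
    // (or simply cudaDeviceSynchronize();)

    cudaStreamDestroy(stream0);
    cudaStreamDestroy(stream1);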

20. A View Closer to Reality
• [Figure: with the reordered code, the Copy Engine queue holds MemCpy A.1, MemCpy B.1, MemCpy A.2, MemCpy B.2, MemCpy C.1, MemCpy C.2 and the Kernel Engine queue holds Kernel 1 and Kernel 2, so the copy of C.1 no longer blocks the next segment's input copies]

21. Overlapped (Pipelined) Timing
• Divide large vectors into segments
• Overlap the transfer and compute of adjacent segments
• [Timeline: the same pipelined schedule as on slide 10 (Trans A.k, Trans B.k, Comp C.k, Trans C.k overlapped across adjacent segments), now achievable with the reordered multi-stream code]

22. Hyper Queue (Hyper-Q)
• Provides multiple real hardware queues for each engine
• Allows much more concurrency by letting some streams make progress for an engine while others are blocked

23. Fermi (and Older) Concurrency
• Fermi allows 16-way concurrency: up to 16 grids can run at once
• But CUDA streams multiplex into a single hardware work queue
• Overlap is possible only at stream edges
• [Figure: Stream 1 (A--B--C), Stream 2 (P--Q--R), and Stream 3 (X--Y--Z) are serialized into one hardware work queue as A--B--C, P--Q--R, X--Y--Z]

24. Kepler Improved Concurrency
• Kepler allows 32-way concurrency
• One hardware work queue per stream
• Concurrency at the full-stream level
• No inter-stream dependencies
• [Figure: Stream 1 (A--B--C), Stream 2 (P--Q--R), and Stream 3 (X--Y--Z) each feed their own hardware work queue]
