Runtime Data Flow Graph Scheduling of Matrix Computations
Ernie Chan
Teaser
[Performance graph; annotations: "Better", "Theoretical Peak Performance"]
Goals
• Programmability
  • Use tools provided by FLAME
• Parallelism
  • Directed acyclic graph (DAG) scheduling
Outline
• Introduction
• SuperMatrix
• Scheduling
• Performance
• Conclusion
SuperMatrix
• Formal Linear Algebra Methods Environment (FLAME)
  • High-level abstractions for expressing linear algebra algorithms
• Cholesky Factorization
SuperMatrix
• Cholesky Factorization
• Iteration 1
  [DAG diagram: CHOL0 at the root, feeding TRSM1 and TRSM2, which feed SYRK3, GEMM4, and SYRK5]
  • CHOL0: A_{0,0} := Chol( A_{0,0} )
  • TRSM1: A_{1,0} := A_{1,0} A_{0,0}^{-T}
  • TRSM2: A_{2,0} := A_{2,0} A_{0,0}^{-T}
  • SYRK3: A_{1,1} := A_{1,1} - A_{1,0} A_{1,0}^{T}
  • GEMM4: A_{2,1} := A_{2,1} - A_{2,0} A_{1,0}^{T}
  • SYRK5: A_{2,2} := A_{2,2} - A_{2,0} A_{2,0}^{T}
SuperMatrix
• Cholesky Factorization
• Iteration 2
  • CHOL6: A_{1,1} := Chol( A_{1,1} )
  • TRSM7: A_{2,1} := A_{2,1} A_{1,1}^{-T}
  • SYRK8: A_{2,2} := A_{2,2} - A_{2,1} A_{2,1}^{T}
SuperMatrix
• Cholesky Factorization
• Iteration 3
  • CHOL9: A_{2,2} := Chol( A_{2,2} )
SuperMatrix
• Cholesky Factorization
  • The matrix is viewed as a 3 × 3 matrix of blocks, and each task reads and writes individual blocks A_{i,j}
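The task lists above fall out of the blocked right-looking algorithm. Below is a minimal C sketch of that loop nest, assuming a hypothetical enqueue_task() helper that records one task per call; SuperMatrix itself expresses this through FLAME's high-level API rather than explicit indexing.

    /* Sketch: task generation for blocked right-looking Cholesky.
     * enqueue_task() is a hypothetical analyzer entry point that
     * records the operation and the blocks of A it touches. */
    void enqueue_task(const char *op, int i, int j, int k);

    void chol_blocked(int nb)                  /* nb = blocks per dimension */
    {
        for (int k = 0; k < nb; k++) {
            enqueue_task("CHOL", k, k, k);         /* A[k][k] := Chol(A[k][k])      */
            for (int i = k + 1; i < nb; i++)
                enqueue_task("TRSM", i, k, k);     /* A[i][k] := A[i][k] A[k][k]^-T */
            for (int i = k + 1; i < nb; i++) {
                for (int j = k + 1; j < i; j++)
                    enqueue_task("GEMM", i, j, k); /* A[i][j] -= A[i][k] A[j][k]^T  */
                enqueue_task("SYRK", i, i, k);     /* A[i][i] -= A[i][k] A[i][k]^T  */
            }
        }
    }

With nb = 3 this emits exactly CHOL0 through CHOL9, in the order shown on the preceding slides.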
SuperMatrix
• Separation of Concerns
• Analyzer
  • Decomposes subproblems into component tasks
  • Stores tasks sequentially in a global task queue
  • Internally calculates all dependencies between tasks, which form a DAG, using only the input and output parameters of each task (see the sketch below)
• Dispatcher
  • Spawns threads
  • Schedules and dispatches tasks to threads in parallel
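A sketch of how such an analyzer can derive the DAG from input/output parameters alone: each task gains an edge from the most recent writer of every block it reads or writes (flow and output dependences; anti-dependences are omitted for brevity). All names here are illustrative, not SuperMatrix's.

    enum { NB = 16, MAX_SUCC = 64 };     /* illustrative limits             */

    typedef struct task {
        struct task *succ[MAX_SUCC];     /* outgoing DAG edges (dependents) */
        int nsucc;                       /* number of dependents            */
        int ndeps;                       /* unsatisfied incoming edges      */
    } task_t;

    static task_t *last_writer[NB][NB];  /* last task to write each block   */

    static void add_edge(task_t *from, task_t *to)
    {
        from->succ[from->nsucc++] = to;
        to->ndeps++;
    }

    /* Called once per task, in sequential program order. */
    void analyze(task_t *t, int out_i, int out_j, int in[][2], int nin)
    {
        for (int r = 0; r < nin; r++) {            /* flow (RAW) dependences  */
            task_t *w = last_writer[in[r][0]][in[r][1]];
            if (w) add_edge(w, t);
        }
        if (last_writer[out_i][out_j])             /* output (WAW) dependence */
            add_edge(last_writer[out_i][out_j], t);
        last_writer[out_i][out_j] = t;             /* t is now the writer     */
    }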
Outline
• Introduction
• SuperMatrix
• Scheduling
• Performance
• Conclusion
Scheduling
• Dispatcher
    foreach task in DAG do
        if task is ready then
            Enqueue task
        end
    end
    while tasks are available do
        Dequeue task
        Execute task
        foreach dependent task do
            Update dependent task
            if dependent task is ready then
                Enqueue dependent task
            end
        end
    end
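Translated into C, each dispatcher thread runs a loop like the following sketch (reusing task_t from the analyzer sketch above; execute(), enqueue(), and dequeue() are assumed helpers around the ready queue):

    #include <pthread.h>

    extern task_t *dequeue(void);        /* NULL once all tasks have run */
    extern void    enqueue(task_t *t);
    extern void    execute(task_t *t);   /* invokes the BLAS kernel      */

    void *worker(void *arg)
    {
        task_t *t;
        while ((t = dequeue()) != NULL) {
            execute(t);
            for (int s = 0; s < t->nsucc; s++) {   /* update dependents  */
                task_t *d = t->succ[s];
                /* satisfying the last dependence makes the task ready   */
                if (__sync_sub_and_fetch(&d->ndeps, 1) == 0)
                    enqueue(d);
            }
        }
        return NULL;
    }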
Scheduling
• Supermarket
  • One line per cashier
  • Efficient enqueue and dequeue
  • Schedule depends on the task-to-thread assignment
• Bank
  • One line for all tellers
  • Enqueue and dequeue become bottlenecks
  • Dynamic dispatching of tasks to threads
Scheduling
• Single Queue
  • Set of all ready and available tasks
  • FIFO, priority
  [Diagram: one shared queue; threads PE0, PE1, …, PEp-1 enqueue and dequeue tasks]
Scheduling
• Multiple Queues
  • Work stealing, data affinity
  [Diagram: one queue per thread PE0, PE1, …, PEp-1]
Scheduling
• Work Stealing
    foreach task in DAG do
        if task is ready then
            Enqueue task
        end
    end
    while tasks are available do
        Dequeue task
        if task ≠ Ø then
            Execute task
            Update dependent tasks …
        else
            Steal task
        end
    end
• Enqueue
  • Place all dependent tasks on the queue of the same thread that executes the task
• Steal
  • Select a random thread and remove a task from the tail of its queue
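A sketch of the dequeue-or-steal step under these rules, again reusing task_t; the per-thread deques and their pop helpers are assumptions. The owner takes from the head of its own deque, and a thief removes from the tail of a random victim's.

    #include <stdlib.h>

    typedef struct deque deque_t;                 /* opaque deque type     */
    extern deque_t *queues[];                     /* one deque per thread  */
    extern task_t  *deque_pop_head(deque_t *q);
    extern task_t  *deque_pop_tail(deque_t *q);

    task_t *get_task(int me, int nthreads)
    {
        task_t *t = deque_pop_head(queues[me]);   /* own queue first       */
        if (t == NULL) {                          /* empty: try stealing   */
            int victim = rand() % nthreads;       /* pick a random victim  */
            if (victim != me)
                t = deque_pop_tail(queues[victim]);
        }
        return t;                     /* NULL: nothing found, retry/idle   */
    }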
Scheduling
• Data Affinity
  • Assign all tasks that write to a particular block to the same thread
  • Owner-computes rule
  • 2D block cyclic distribution (see the sketch below)
  [Diagram: blocks mapped cyclically onto a 2D grid of four threads]
• Execution Trace
  • Cholesky factorization
  • Total time: 2D data affinity ≈ FIFO queue
  • Idle threads: 2D ≈ 27% and FIFO ≈ 17%
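Under the owner-computes rule with a 2D block cyclic distribution, the thread for each task follows directly from its output block's indices. A tiny sketch, assuming an r × c thread grid (parameters mine):

    /* Thread that owns block A[i][j], and hence runs every task
     * writing it, on an r x c grid of threads. */
    int owner(int i, int j, int r, int c)
    {
        return (i % r) * c + (j % c);
    }
    /* e.g. 4 threads on a 2 x 2 grid: owner(i, j, 2, 2) */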
Scheduling
• Data Granularity
  • Cost of a task >> cost of enqueue and dequeue
• Single vs. Multiple Queues
  • FIFO queue improves load balance
  • 2D data affinity reduces data communication
  • Combine the best aspects of both!
Scheduling
• Cache Affinity
  • Single priority queue sorted by task height
  • Software cache
    • LRU
    • Line = block
    • Fully associative
  [Diagram: one shared queue; each thread PE0, PE1, …, PEp-1 has its own software cache $0, $1, …, $p-1]
Scheduling
• Cache Affinity
• Dequeue (sketched below)
  • Search the queue for a task whose output block is in the thread's software cache
  • If found, return that task
  • Otherwise return the head task
• Enqueue
  • Insert task
  • Sort the queue by task heights
• Dispatcher
  • Updates the software caches via a cache coherency protocol with write invalidation
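A sketch of that dequeue policy; the queue and software-cache helpers, and the out_block field, are assumptions for illustration:

    #include <pthread.h>

    extern pthread_mutex_t queue_lock;        /* protects the shared queue  */
    extern task_t *queue_head(void);          /* first (tallest) task       */
    extern task_t *queue_next(task_t *t);     /* next task in queue order   */
    extern void    queue_remove(task_t *t);
    extern task_t *queue_pop_head(void);
    extern void   *caches[];                  /* per-thread software caches */
    extern int     cache_contains(void *cache, void *block);

    /* out_block: hypothetical task_t field naming the block the task writes */
    task_t *dequeue_affine(int me)
    {
        task_t *t;
        pthread_mutex_lock(&queue_lock);
        for (t = queue_head(); t != NULL; t = queue_next(t))
            if (cache_contains(caches[me], t->out_block))
                break;                        /* output block cached here   */
        if (t != NULL)
            queue_remove(t);                  /* take the matching task     */
        else
            t = queue_pop_head();             /* else take the head task    */
        pthread_mutex_unlock(&queue_lock);
        return t;
    }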
Scheduling
• Multiple Graphics Processing Units
  • View a GPU as a single accelerator rather than as hundreds of streaming processors
  • Data must be explicitly transferred from main memory to the GPU
  • No hardware cache coherency is provided
• Hybrid Execution Model
  • Execute tasks on both the CPU and GPUs
Scheduling
• Software-Managed Cache Coherency
  • Use the software caches developed for cache affinity to handle data transfers!
  • Allow blocks to remain dirty on a GPU until they are requested by another GPU
  • Apply any scheduling algorithm when utilizing GPUs, particularly cache affinity
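A sketch of the block-level bookkeeping this implies, using standard CUDA runtime copies; the block_t record and its fields are illustrative. A dirty block is flushed through host memory only when another processor needs it.

    #include <cuda_runtime.h>

    typedef struct {
        void  *host;        /* block's copy in main memory              */
        void  *dev[4];      /* one device copy per GPU                  */
        int    dirty_on;    /* GPU holding the only current copy, or -1 */
        size_t bytes;
    } block_t;

    /* Ensure the block is current on `gpu` before a task reads it. */
    void make_resident(block_t *b, int gpu)
    {
        if (b->dirty_on == gpu)
            return;                          /* already current here     */
        if (b->dirty_on >= 0) {              /* flush the dirty copy     */
            cudaSetDevice(b->dirty_on);
            cudaMemcpy(b->host, b->dev[b->dirty_on], b->bytes,
                       cudaMemcpyDeviceToHost);
            b->dirty_on = -1;                /* host copy is now current */
        }
        cudaSetDevice(gpu);
        cudaMemcpy(b->dev[gpu], b->host, b->bytes,
                   cudaMemcpyHostToDevice);  /* copy host -> this GPU    */
    }
    /* A task that writes the block then sets b->dirty_on = gpu. */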
Outline
• Introduction
• SuperMatrix
• Scheduling
• Performance
• Conclusion
Performance
• CPU Target Architecture
  • 4-socket 2.66 GHz Intel Dunnington (24 cores)
  • 16 MB shared L3 cache per socket
  • Linux and Windows
• OpenMP
  • Intel compiler 11.1
• BLAS
  • Intel MKL 10.2
Performance
• Implementations
  • SuperMatrix + serial MKL
    • FIFO queue, cache affinity
  • FLAME + multithreaded MKL
  • Multithreaded MKL
  • PLASMA + serial MKL
• Double-precision real floating-point arithmetic
• Tuned block size
Performance
  [Performance graphs]
Performance
• Inversion of a Symmetric Positive Definite (SPD) Matrix
  • Cholesky factorization (CHOL)
  • Inversion of a triangular matrix (TRINV)
  • Triangular matrix multiplication by its transpose (TTMM)
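These three sweeps compose into the inverse through a standard identity; written out (with R for the inverted triangular factor, a symbol introduced here):

    A = L L^{T} \quad (\text{CHOL}), \qquad
    R = L^{-1} \quad (\text{TRINV}), \qquad
    A^{-1} = (L L^{T})^{-1} = L^{-T} L^{-1} = R^{T} R \quad (\text{TTMM})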
Performance
• Inversion of an SPD Matrix
  [Performance graph]
Performance
  [Performance graphs]
Performance
• Generalized Eigenproblem A x = λ B x, where A is symmetric and B is symmetric positive definite
• Cholesky Factorization B = L L^{T}, where L is a lower triangular matrix, so that A x = λ L L^{T} x
Performance
• Reduction from Symmetric Definite Generalized Eigenproblem to Standard Form
  • Then multiply the equation by L^{-1} on the left
• Standard Form C y = λ y, where C = L^{-1} A L^{-T} and y = L^{T} x
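The derivation behind these two slides, written out in LaTeX:

    \begin{align*}
    A x &= \lambda B x, \qquad B = L L^{T} \\
    L^{-1} A L^{-T} \left( L^{T} x \right) &= \lambda \left( L^{T} x \right) \\
    C y &= \lambda y, \qquad C = L^{-1} A L^{-T}, \quad y = L^{T} x
    \end{align*}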
Performance
• Reduction from Symmetric Definite Generalized Eigenproblem to Standard Form
  [Performance graphs]
Performance
• GPU Target Architecture
  • 2-socket 2.82 GHz Intel Harpertown with an NVIDIA Tesla S1070
  • 4 × 602 MHz Tesla C1060 GPUs
  • 4 GB DDR memory per GPU
  • Linux
• CUDA
  • CUBLAS 3.0
• Single-precision real floating-point arithmetic
Performance
  [Performance graphs]
Performance
• Results
  • Cache affinity vs. FIFO queue
  • SuperMatrix out-of-order vs. PLASMA in-order
  • High variability of work stealing vs. predictable cache-affinity performance
  • Strong scalability on both CPU and GPU
  • Representative of the performance of other dense linear algebra operations
Outline
• Introduction
• SuperMatrix
• Scheduling
• Performance
• Conclusion
Conclusion
• Separation of Concerns
  • Allows us to experiment with different scheduling algorithms
  • Made it possible to port the runtime system to multiple GPUs
• Locality, Locality, Locality
  • Data communication is as important as load balance when scheduling matrix computations
Current Work
• Intel Single-chip Cloud Computer (SCC)
  • 48 cores on a single die
  • Cores communicate via a message passing buffer
    • RCCE_send
    • RCCE_recv
  • Software-managed cache coherency for off-chip shared memory
    • RCCE_shmalloc
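A minimal sketch of this communication pattern on the SCC, assuming the RCCE calls behave as documented for that library (error handling and the runtime's actual task logic omitted):

    #include "RCCE.h"

    int main(int argc, char **argv)
    {
        RCCE_init(&argc, &argv);
        int me = RCCE_ue();               /* this core's id (unit of execution)  */

        double block[32 * 32];            /* one matrix block                    */
        if (me == 0)                      /* pass a block from core 0 to core 1  */
            RCCE_send((char *)block, sizeof(block), 1);   /* via the MPB */
        else if (me == 1)
            RCCE_recv((char *)block, sizeof(block), 0);

        /* off-chip shared memory, kept coherent in software */
        volatile char *shared = RCCE_shmalloc(sizeof(double) * 32 * 32);
        (void)shared;

        RCCE_finalize();
        return 0;
    }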