Matrix Multiply with Dryad
B649 Course Project Introduction
Matrix Multiplication
• Fundamental kernel algorithm used by many applications
• Examples: graph theory, physics, electronics
Scalability Issues
• Run on a single machine (see the serial sketch below):
  • Memory overhead grows as O(N^2)
  • CPU overhead grows as O(N^3)
• Run on multiple machines:
  • Communication overhead grows as O(N^2)
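To make these costs concrete, here is a minimal serial sketch in C# (an illustration, not part of the project code): the three N x N arrays account for the O(N^2) memory, and the triple loop for the O(N^3) compute.

```csharp
// Naive serial matrix multiply: C = A * B for square N x N matrices.
// Memory grows as O(N^2) (three N x N arrays); work grows as O(N^3)
// (the triple loop), matching the scalability limits listed above.
static double[,] Multiply(double[,] a, double[,] b)
{
    int n = a.GetLength(0);
    var c = new double[n, n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
        {
            double sum = 0.0;
            for (int k = 0; k < n; k++)
                sum += a[i, k] * b[k, j];
            c[i, j] = sum;
        }
    return c;
}
```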
Why DryadLINQ?
• Dryad is a general-purpose runtime that supports processing of data-intensive applications on Windows
• DryadLINQ is a high-level programming language and compiler for Dryad
• Applicability:
  • Dryad transparently deals with parallelism, scheduling, fault tolerance, messaging, and workload balancing
  • SQL-like interface on the .NET platform, so code is easy to write
• Performance:
  • Intelligent job execution engine with optimized execution plans
  • Scales out to thousands of machines
Parallel Algorithms for Matrix Multiplication
• MM algorithms can deal with matrices distributed on rectangular grids
• No single algorithm always achieves the best performance across different matrix and grid shapes
• MM algorithms can be classified into categories according to their communication primitives:
  • Row Partition
  • Row Column Partition
  • Fox Algorithm (BMR) – broadcast, multiply, roll up
Row Partition
• The full matrix B is copied to every node
• The matrix A row blocks are distributed, one per node
• Heavy communication overhead
• Large memory usage per node

Pseudocode (a single-machine C# simulation follows below):
  Partition matrix A by rows
  Broadcast matrix B to all nodes
  Distribute the matrix A row blocks
  Each node computes its matrix C row block
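As an illustration only (plain C#, not Dryad code), the row partition scheme can be simulated on one machine, with `Parallel.For` standing in for the nodes; the `nodes` parameter is a hypothetical stand-in for the node count.

```csharp
// Illustrative sketch: row partition simulated on a single machine.
// Each "node" p receives a contiguous block of A's rows plus a full
// copy of B (the broadcast), and computes the matching rows of C.
static double[,] RowPartitionMultiply(double[,] a, double[,] b, int nodes)
{
    int n = a.GetLength(0);
    var c = new double[n, n];
    System.Threading.Tasks.Parallel.For(0, nodes, p =>
    {
        int lo = p * n / nodes, hi = (p + 1) * n / nodes;
        for (int i = lo; i < hi; i++)          // this node's row block of A
            for (int j = 0; j < n; j++)
            {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += a[i, k] * b[k, j];  // B is fully replicated
                c[i, j] = sum;
            }
    });
    return c;
}
```

The per-node memory cost is visible here: every "node" touches all of B, which is why the scheme needs large memory per node.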
Row Column Partition
• Heavy communication overhead
• Scheduling overhead for each iteration
• Moderate memory usage

Pseudocode (a single-machine C# simulation follows below):
  Partition matrix A by rows
  Partition matrix B by columns
  For each iteration i:
    broadcast matrix A row block i
    distribute the matrix B column blocks
    compute matrix C row block i
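Again as an illustration only (plain C#, not Dryad code), the iteration structure can be simulated on one machine: each round "broadcasts" one row block of A, and each "node" p, holding column block p of B, computes tile (i, p) of C. The `nodes` parameter is hypothetical.

```csharp
// Illustrative sketch: row-column partition simulated on one machine.
// One scheduling round per row block of A; within a round, each "node"
// multiplies the broadcast row block by its own column block of B.
static void RowColumnMultiply(double[,] a, double[,] b, double[,] c, int nodes)
{
    int n = a.GetLength(0);
    for (int i = 0; i < nodes; i++)            // one round per A row block
    {
        int rLo = i * n / nodes, rHi = (i + 1) * n / nodes;
        System.Threading.Tasks.Parallel.For(0, nodes, p =>
        {
            int cLo = p * n / nodes, cHi = (p + 1) * n / nodes;
            for (int r = rLo; r < rHi; r++)
                for (int col = cLo; col < cHi; col++)
                {
                    double sum = 0.0;
                    for (int k = 0; k < n; k++)
                        sum += a[r, k] * b[k, col];
                    c[r, col] = sum;
                }
        });
    }
}
```

The outer loop is what the slide calls scheduling overhead: every row block of A requires a fresh broadcast-and-compute round.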
Fox Algorithm
[Diagram: block layout for Stage One and Stage Two of the Fox algorithm]
Fox Algorithm
• Less communication overhead than the other approaches
• Scales well for large matrix sizes

Pseudocode (on an N x N block grid; a single-machine C# simulation follows below):
  Partition matrices A and B into blocks
  For each iteration i:
    1) in each block row r, broadcast matrix A block (r, (r+i) mod N) along that row
    2) multiply the received A block with the local B block and add to the previous C result
    3) roll up the matrix B blocks (shift each column of blocks up by one)
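As an illustration (plain C#, not Dryad code), the block-level structure of Fox's algorithm can be simulated on a single machine; `q` is the block-grid dimension and `bs` the block size, both hypothetical parameters. The sketch rotates the B grid in place; after all q steps it is back in its original arrangement.

```csharp
// Illustrative sketch of Fox's algorithm on a q x q block grid.
// Grid element [r, c] is one dense bs x bs block. In step s, block
// A[r, (r+s) % q] is broadcast along block row r, multiplied into the
// local C block, and then all B blocks are rolled up one position.
static double[,][,] FoxMultiply(double[,][,] A, double[,][,] B, int q, int bs)
{
    var C = new double[q, q][,];
    for (int r = 0; r < q; r++)
        for (int c = 0; c < q; c++)
            C[r, c] = new double[bs, bs];

    for (int s = 0; s < q; s++)
    {
        for (int r = 0; r < q; r++)
        {
            var aBlock = A[r, (r + s) % q];        // broadcast along row r
            for (int c = 0; c < q; c++)
                MultiplyAdd(aBlock, B[r, c], C[r, c], bs);
        }
        // Roll-up: shift every column of B blocks up by one row (cyclic).
        var top = new double[q][,];
        for (int c = 0; c < q; c++) top[c] = B[0, c];
        for (int r = 0; r < q - 1; r++)
            for (int c = 0; c < q; c++)
                B[r, c] = B[r + 1, c];
        for (int c = 0; c < q; c++) B[q - 1, c] = top[c];
    }
    return C;
}

// Accumulate c += a * b for dense n x n blocks.
static void MultiplyAdd(double[,] a, double[,] b, double[,] c, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                c[i, j] += a[i, k] * b[k, j];
}
```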
Performance Analysis of the Fox Algorithm
[Chart: performance vs. problem size, with a turning point at the cache size]
• Cache issues
  • Cache misses (capacity), pollution, conflicts
  • Mitigation: tiled matrix multiply (see the sketch below)
• Memory issues
  • Size (memory paging)
  • Bandwidth, latency
• Absolute performance degrades as the problem size increases in both cases
• Single-node performance is worse than multi-node performance due to memory issues
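The tiled multiply mentioned above can be sketched as follows (a minimal single-threaded illustration; the tile size T is a tuning parameter, typically chosen so that three T x T tiles fit in cache):

```csharp
// Tiled (blocked) matrix multiply: looping over small tiles keeps the
// working set inside cache, which flattens the performance drop past
// the cache-size turning point. Assumes c is zero-initialized.
static void TiledMultiply(double[,] a, double[,] b, double[,] c, int n, int T)
{
    for (int ii = 0; ii < n; ii += T)
        for (int kk = 0; kk < n; kk += T)
            for (int jj = 0; jj < n; jj += T)
                // Multiply one T x T tile pair, accumulating into c.
                for (int i = ii; i < System.Math.Min(ii + T, n); i++)
                    for (int k = kk; k < System.Math.Min(kk + T, n); k++)
                    {
                        double aik = a[i, k];
                        for (int j = jj; j < System.Math.Min(jj + T, n); j++)
                            c[i, j] += aik * b[k, j];
                    }
}
```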
Multicore-Level Parallelism
• To use every core on a compute node for a Dryad job, the task must be programmed with a multicore technology (e.g., the Task Parallel Library (TPL), raw threads, or PLINQ)
• Each thread computes one row of matrix C, or several rows, depending on the implementation
• With TPL or PLINQ, thread management is implicit, which makes them easier to use (see the sketch below)
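A minimal sketch of the TPL approach described above: `Parallel.For` hands each thread one or more rows of C, and the library handles thread creation, chunking, and load balancing implicitly.

```csharp
using System.Threading.Tasks;

// Multithreaded matrix multiply with the Task Parallel Library.
// Each parallel iteration computes one full row of C, so iterations
// are independent and need no locking.
static void ParallelMultiply(double[,] a, double[,] b, double[,] c, int n)
{
    Parallel.For(0, n, i =>      // row i of C per iteration
    {
        for (int j = 0; j < n; j++)
        {
            double sum = 0.0;
            for (int k = 0; k < n; k++)
                sum += a[i, k] * b[k, j];
            c[i, j] = sum;
        }
    });
}
```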
Timeline for the Term-Long Project
• Stage One
  • Get familiar with the HPC cluster
  • Sequential MM in C#
  • Multithreaded MM in C#
  • Performance comparison of the two approaches
• Stage Two
  • Get familiar with the DryadLINQ interface
  • Implement the Row Partition algorithm with DryadLINQ
  • Performance study
• Stage Three
  • Refine the experimental results
  • Report and presentation
Dryad Job Submission
[Diagram: a .NET program on the client machine invokes a DryadLINQ query expression; DryadLINQ compiles it into a distributed query plan plus vertex code, the job manager (JM) drives the Dryad execution on the HPC cluster over the input tables, and the output DryadTable is returned to the client as .NET objects via ToTable and foreach]
• Input: C# and LINQ data objects become DryadLINQ distributed data objects
• DryadLINQ translates LINQ programs into distributed Dryad computations (a hedged sketch follows below):
  • C# methods become code running on the vertices of a Dryad job
• Output: DryadLINQ distributed data objects become .NET objects
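To make the flow above concrete, here is a hedged DryadLINQ-style sketch of submitting a row partition query. Every identifier in it (PartitionedTable.Get, ToTable, RowBlock, MultiplyRowBlock, the URIs) is an assumption modeled on the diagram and the DryadLINQ papers, not a verified API surface.

```csharp
// HEDGED SKETCH ONLY: names below are assumptions, not verified API.
// Input tables on the cluster become DryadLINQ distributed data objects.
var rowsA = PartitionedTable.Get<RowBlock>("tidyfs://cluster/matrixA"); // hypothetical URI
var rowsB = PartitionedTable.Get<RowBlock>("tidyfs://cluster/matrixB");

// An ordinary LINQ query over the distributed table. DryadLINQ compiles
// it into a distributed query plan; the lambda body becomes vertex code
// that runs on the cluster nodes. MultiplyRowBlock is a hypothetical
// C# helper computing one row block of C.
var rowsC = rowsA.Select(blockA => MultiplyRowBlock(blockA, rowsB));

// Materialize the output DryadTable; iterating it on the client
// turns the distributed results back into .NET objects.
rowsC.ToTable("tidyfs://cluster/matrixC");
```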
Performance for the Three Algorithms
• Test run on 16 nodes of Tempest, using one core per node
Performance for Multithreaded MM
• Test run on one node of Tempest, using all 24 cores