Parallel Computing in MATLAB
PCT • Parallel Computing Toolbox • Offload work from one MATLAB session (the client) to other MATLAB sessions (the workers). • Run as many as eight MATLAB workers (R2010b) on your local machine in addition to your MATLAB client session.
MDCS • MATLAB Distributed Computing Server • Run as many MATLAB workers on a remote cluster of computers as your licensing allows. • Run workers on your client machine if you want to run more than eight local workers (R2010b).
Typical Use Cases • Parallel for-Loops • Many iterations • Long iterations • Batch Jobs • Large Data Sets
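For example, a batch job can run a script on a worker while the client stays free (a minimal sketch; myScript is an illustrative script name):

    job = batch('myScript');   % run myScript on a worker in the background
    wait(job);                 % block until the job finishes
    load(job);                 % load the script's workspace variables into the client
    destroy(job);              % clean up the job object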
Parfor • Parallel for-loop • Has the same basic concept as "for". • The parfor body is executed on the MATLAB client and workers. • The necessary data on which parfor operates is sent from the client to the workers, and the results are sent back to the client and pieced together. • MATLAB workers evaluate iterations in no particular order, and independently of each other.
Parfor
Serial version:
    A = zeros(1024, 1);
    for i = 1:1024
        A(i) = sin(i*2*pi/1024);
    end
    plot(A)
Parallelized version:
    A = zeros(1024, 1);
    matlabpool open local 4
    parfor i = 1:1024
        A(i) = sin(i*2*pi/1024);
    end
    matlabpool close
    plot(A)
Timing
Parallel version:
    A = zeros(n, 1);
    matlabpool open local 8
    tic
    parfor i = 1:n
        A(i) = sin(i);
    end
    toc
Serial version:
    A = zeros(n, 1);
    tic
    for i = 1:n
        A(i) = sin(i);
    end
    toc
When to Use Parfor? • Each iteration must be independent of the others. • Many iterations of simple calculations, or • A small number of long-running iterations.
Classification of Variables • Temporary variable • Loop variable • Reduction variable • Sliced input variable • Sliced output variable • Broadcast variable
More Notes
With parfor:
    d = 0; i = 0;
    parfor i = 1:4
        b = i;        % temporary variable
        d = i*2;      % temporary variable
        A(i) = d;     % sliced output variable
    end
    % after the loop, d and i keep their pre-loop values (0); b is not defined on the client
With for:
    d = 0; i = 0;
    for i = 1:4
        b = i;
        d = i*2;
        A(i) = d;
    end
    % after the loop, i = 4, b = 4, d = 8
More Notes
    C = 0;
    for i = 1:m
        for j = i:n
            C = C + i * j;
        end
    end
How to parallelize? (one approach is sketched below)
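One possible approach (a minimal sketch, assuming m and n are defined and a matlabpool is open): make the outer loop a parfor and let C act as a reduction variable, so MATLAB combines the per-worker partial sums for you.

    C = 0;
    parfor i = 1:m
        for j = i:n
            % C is a reduction variable: it is only ever updated with the same "+" operation
            C = C + i * j;
        end
    end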
Parfor: Estimating an Integral
    function q = quad_fun( m, n, x1, x2, y1, y2 )
        q = 0.0;
        u = (x2 - x1)/m;
        v = (y2 - y1)/n;
        for i = 1:m
            x = x1 + u * i;
            for j = 1:n
                y = y1 + v * j;
                fx = x^2 + y^2;
                q = q + u * v * fx;
            end
        end
    end
Parfor: Estimating an Integral • Computational complexity: O(m*n) • Each iteration is independent of the other iterations. • We can replace "for" with "parfor" on either the outer loop (index i) or the inner loop (index j).
Parfor: Estimating an Integral
    function q = quad_fun( m, n, x1, x2, y1, y2 )
        q = 0.0;
        u = (x2 - x1)/m;
        v = (y2 - y1)/n;
        parfor i = 1:m
            x = x1 + u * i;
            for j = 1:n
                y = y1 + v * j;
                fx = x^2 + y^2;
                q = q + u * v * fx;
            end
        end
    end

    tic
    A = quad_fun(m,n,0,3,0,3);
    toc
Parfor: Estimating an Integral
    function q = quad_fun( m, n, x1, x2, y1, y2 )
        q = 0.0;
        u = (x2 - x1)/m;
        v = (y2 - y1)/n;
        for i = 1:m
            x = x1 + u * i;
            parfor j = 1:n
                y = y1 + v * j;
                fx = x^2 + y^2;
                q = q + u * v * fx;
            end
        end
    end

    tic
    A = quad_fun(m,n,0,3,0,3);
    toc
SPMD • SPMD: Single Program, Multiple Data. • The spmd command is like a very simplified version of MPI. • The spmd statement lets you define a block of code that runs simultaneously on multiple labs; each lab can have its own, unique data for that code. • Labs can communicate directly via messages, and they meet at synchronization points. • The client program can examine or modify data on any lab.
SPMD • MATLAB sets up the requested number of labs, each with a copy of the program. Each lab "knows" it is a lab, and has access to two special functions: • numlabs(), the number of labs; • labindex(), a unique identifier between 1 and numlabs().
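A minimal sketch of an spmd block (the pool size of 4 is an illustrative choice):

    matlabpool open local 4
    spmd
        % every lab runs this block with its own copy of the variables
        fprintf('I am lab %d of %d\n', labindex, numlabs);
        x = labindex * 10;       % a different value on each lab
    end
    % back on the client, x is a Composite: x{3} is the value held by lab 3
    disp(x{3})
    matlabpool close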
Distributed Arrays • distributed() • You can create a distributed array in the MATLAB client, and its data is stored on the labs of the open MATLAB pool. A distributed array is distributed in one dimension, along the last nonsingleton dimension, and as evenly as possible along that dimension among the labs. You cannot control the details of distribution when creating a distributed array.
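For example (a small sketch, assuming a matlabpool is already open):

    M = magic(8);            % an ordinary array in the client workspace
    D = distributed(M);      % its data now lives on the labs, split along the last nonsingleton dimension
    colSums = sum(D);        % many built-in functions operate directly on distributed arrays
    M2 = gather(D);          % copy the data back to the client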
Distributed Arrays • codistributed() • You can create a codistributed array by executing on the labs themselves, either inside an spmd statement, in pmode, or inside a parallel job. When creating a codistributed array, you can control all aspects of distribution, including dimensions and partitions.
Example: Trapezoid • To simplify things, we assume the interval is [0, 1], and we let each lab define a and b to be the endpoints of its subinterval. If we have 4 labs, then lab number 3 is assigned [½, ¾].
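A minimal sketch of this scheme inside spmd (the integrand f and the number of points per lab are illustrative assumptions):

    f  = @(x) 4 ./ (1 + x.^2);          % example integrand: its integral over [0,1] is pi
    nt = 100;                           % trapezoid points per lab (illustrative)
    spmd
        a = (labindex - 1) / numlabs;   % left endpoint of this lab's subinterval
        b = labindex / numlabs;         % right endpoint of this lab's subinterval
        x = linspace(a, b, nt);
        part  = trapz(x, f(x));         % each lab integrates its own piece
        total = gplus(part);            % sum the pieces across all labs
    end
    disp(total{1})                      % the same total is available on every lab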
Pmode • pmode lets you work interactively with a parallel job running simultaneously on several labs. • Commands you type at the pmode prompt in the Parallel Command Window are executed on all labs at the same time. • Each lab executes the commands in its own workspace on its own variables.
Pmode • labindex() and numlabs() still work. • Variables with the same name on different labs are independent of each other.
Pmode • Aggregate the array segments into a coherent array.
    codist = codistributor1d(2, [2 2 2 2], [3 8])
    whole = codistributed.build(segment, codist)
Pmode • Operate on the whole codistributed array, then look at each lab's local piece.
    whole = whole + 1000
    section = getLocalPart(whole)
Pmode • Aggregate the array segments into a coherent array.
    combined = gather(whole)
Pmode • How to change the distribution?
    distobj = codistributor1d()
    I = eye(6, distobj)
    getLocalPart(I)
    distobj = codistributor1d(1);
    I = redistribute(I, distobj)
    getLocalPart(I)
GPU Computing • Capabilities • Transferring data between the MATLAB workspace and the GPU • Evaluating built-in functions on the GPU • Running MATLAB code on the GPU • Creating kernels from PTX files for execution on the GPU • Choosing one of multiple GPU cards to use • Requirements • NVIDIA CUDA-enabled device with compute capability of 1.3 or greater • NVIDIA CUDA device driver 3.1 or greater • NVIDIA CUDA Toolkit 3.1 (recommended) for compiling PTX files
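As a small sketch of two of these capabilities, device selection and running a PTX kernel (myfun.ptx, myfun.cu, and the kernel's single vector argument are illustrative assumptions):

    gpuDevice(1)                                            % select the first CUDA device
    k = parallel.gpu.CUDAKernel('myfun.ptx', 'myfun.cu');   % build a kernel object from compiled PTX
    k.ThreadBlockSize = 256;                                % configure the launch
    out = feval(k, zeros(256, 1));                          % run the kernel; the output comes back as GPU data
    result = gather(out);                                   % copy the result to the client workspace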
GPU Computing • Transferring data between the workspace and the GPU • Creating GPU data
    N = 6;
    M = magic(N);
    G = gpuArray(M);    % copy M from the MATLAB workspace to the GPU
    M2 = gather(G);     % copy it back to the workspace
GPU Computing • Executing code on the GPU • You can transfer or create data on the GPU, and use the resulting GPUArray as input to enhanced built-in functions that support it. • You can also run your own MATLAB function file on the GPU: • result = arrayfun(@myFunction, arg1, arg2); • If either arg1 or arg2 is a GPUArray, the function executes on the GPU and returns a GPUArray. • If none of the input arguments is a GPUArray, then arrayfun executes on the CPU. • Only element-wise operations are supported.
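A small sketch of that pattern (myFunction and the array sizes are illustrative):

    % myFunction.m -- an element-wise operation
    function out = myFunction(a, b)
    out = a.^2 + b.^2;
    end

    % usage from the client:
    arg1 = gpuArray(rand(1000, 1));               % lives on the GPU
    arg2 = rand(1000, 1);                         % lives in the client workspace
    result = arrayfun(@myFunction, arg1, arg2);   % runs on the GPU because arg1 is a GPUArray
    resultCPU = gather(result);                   % bring the result back when needed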
Review • What are the typical use cases for parallel MATLAB? • When should you use parfor? • What is the difference between a worker (parfor) and a lab (spmd)? • What is the difference between spmd and pmode? • How do you build a distributed array? • How do you use the GPU for MATLAB parallel computing?