
Analysis of Broadcast and Reduction Algorithms in Network Architectures

Explore various broadcast and reduction algorithms in ring, mesh, and hypercube architectures. Understand one-to-all, all-to-one, all-to-all communication models with examples and challenges. Discover optimization potential in mesh networks.


Presentation Transcript


  1. CS 684

  2. Algorithm Analysis Assumptions • Consider ring, mesh, and hypercube topologies. • Each process can either send or receive a single message at a time. • No special communication hardware. • When discussing a mesh architecture we will consider a square toroidal mesh. • The message startup time (latency) is ts and the per-word transfer time is tw.

  3. Basic Algorithms • Broadcast Algorithms • one-to-all (scatter) • all-to-one (gather) • all-to-all • Reduction • all-to-one • all-to-all

  4. Broadcast (ring) • Distribute a message of size m to all nodes. [Figure: ring of p nodes with the source node marked.]

  5. Broadcast (ring) • Distribute a message of size m to all nodes. • Start the message both ways around the ring. [Figure: ring with the source node; the hops in each direction are labelled with step numbers 1 through 4.] T = (ts + twm)(p/2)
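
The step count behind this bound can be checked with a small simulation. The sketch below is illustrative only (not part of the slides); it assumes p nodes numbered 0..p-1 on a ring with the source at node 0, and counts how many steps the two message fronts need to cover the ring.

    # Minimal sketch: bidirectional ring broadcast step count (illustrative, not from the slides).
    def ring_broadcast_steps(p, source=0):
        reached = {source}
        right = left = source              # the two fronts started by the source
        steps = 0
        while len(reached) < p:
            right = (right + 1) % p        # clockwise front advances one hop
            left = (left - 1) % p          # counter-clockwise front advances one hop
            reached.update((right, left))
            steps += 1
        return steps

    # Every node is reached after floor(p/2) steps, which is the p/2 factor in
    # T = (ts + twm)(p/2) for even p.
    print([ring_broadcast_steps(p) for p in (4, 6, 8, 16)])    # [2, 3, 4, 8]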

  6. Broadcast (mesh)

  7. Broadcast (mesh) Broadcast to source row using ring algorithm

  8. Broadcast (mesh) Broadcast to source row using ring algorithm Broadcast to the rest using ring algorithm from the source row

  9. Broadcast (mesh) Broadcast to source row using ring algorithm Broadcast to the rest using ring algorithm from the source row T = 2(ts + twm)(√p/2)
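
A rough simulation of the two ring phases (illustrative only, assuming a √p × √p toroidal mesh with the source at (0, 0)) shows where the 2(√p/2) step count comes from:

    import math

    def mesh_broadcast_steps(p):
        q = int(math.isqrt(p))                     # mesh is assumed square, q = sqrt(p)
        reached = {(0, 0)}                         # source node
        steps = 0
        # Phase 1: bidirectional ring broadcast along the source row.
        for s in range(1, q // 2 + 1):
            reached.add((0, s % q))
            reached.add((0, -s % q))
            steps += 1
        # Phase 2: every node in the source row broadcasts down its column, all in parallel.
        for s in range(1, q // 2 + 1):
            for c in range(q):
                reached.add((s % q, c))
                reached.add((-s % q, c))
            steps += 1
        assert len(reached) == p
        return steps

    print([mesh_broadcast_steps(p) for p in (9, 16, 25)])      # [2, 4, 4]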

  10. Broadcast (hypercube)

  11. Broadcast (hypercube) [Figure: three-dimensional hypercube with each edge labelled by the step (1, 2, or 3) in which the message crosses it.] A message is sent along each dimension of the hypercube. Parallelism grows as a binary tree.

  12. Broadcast (hypercube) [Figure: the same step-labelled hypercube.] T = (ts + twm)log2 p A message is sent along each dimension of the hypercube. Parallelism grows as a binary tree.
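
A minimal sketch of the dimension-by-dimension doubling (illustrative only, assuming p = 2^d nodes labelled 0..p-1 with the source at node 0):

    def hypercube_broadcast_steps(d, source=0):
        have = {source}                            # nodes that currently hold the message
        for i in reversed(range(d)):               # one step per dimension, highest first
            have |= {node ^ (1 << i) for node in have}   # every holder sends across dimension i
        assert len(have) == 1 << d                 # all p = 2^d nodes reached
        return d                                   # log2(p) steps, as in T = (ts + twm)log2 p

    print(hypercube_broadcast_steps(3))            # 3 steps on an 8-node hypercube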

  13. Broadcast • Mesh algorithm was based on embedding rings in the mesh. • Can we do better on the mesh? • Can we embed a tree in a mesh? • Exercise for the reader. (-: hint, hint ;-)

  14. Other Broadcasts • Many algorithms for all-to-one and all-to-all communication are simply reversals and duals of the one-to-all broadcast. • Examples • All-to-one • Reverse the algorithm and concatenate • All-to-all • Butterfly and concatenate
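
For the all-to-all broadcast ("butterfly and concatenate"), here is a small sketch of the recursive-doubling exchange (illustrative only; the names are mine, and p is assumed to be a power of two):

    def butterfly_allgather(blocks):
        p = len(blocks)                            # assumed to be a power of two
        held = [{i: blocks[i]} for i in range(p)]  # node i starts with only its own block
        step = 1
        while step < p:
            # Each node exchanges everything it has with partner node XOR step and concatenates.
            held = [{**held[node], **held[node ^ step]} for node in range(p)]
            step <<= 1
        return held                                # after log2(p) steps every node has all blocks

    out = butterfly_allgather(["a", "b", "c", "d"])
    assert all(len(h) == 4 for h in out)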

  15. Scatter Operation • Often called one-to-all personalized communication. • Send a different message to each node. [Figure: scatter of messages 1–8 on an eight-node hypercube; the source starts with 1,2,3,4,5,6,7,8, sends 5,6,7,8 across the first dimension, then 3,4 and 7,8 across the second, until every node holds exactly one message.]
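
A sketch of the splitting logic on a hypercube (illustrative only, assuming p = 2^d nodes with the source at node 0): at every step a holder keeps the messages destined for its own half of the current dimension and ships the rest to its partner.

    def hypercube_scatter(d, payload):
        held = {0: dict(payload)}                  # source holds one message per destination
        for i in reversed(range(d)):               # one step per dimension, highest first
            bit = 1 << i
            new_held = {}
            for node, msgs in held.items():
                partner = node ^ bit
                keep = {dst: m for dst, m in msgs.items() if (dst & bit) == (node & bit)}
                send = {dst: m for dst, m in msgs.items() if (dst & bit) != (node & bit)}
                new_held[node] = keep              # half of the messages stay here
                new_held[partner] = send           # the other half move across dimension i
            held = new_held
        return held

    result = hypercube_scatter(3, {i: "msg%d" % i for i in range(8)})
    assert all(list(msgs) == [node] for node, msgs in result.items())   # one message per node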

  16. Reduction Algorithms • Reduce or combine a set of values on each processor into a single set. • Summation • Max/Min • Many reduction algorithms simply use the all-to-one (reversed broadcast) algorithm. • The reduction operation is performed at each node along the way.

  17. Reduction • If the goal is to have only one processor with the answer, use the reversed broadcast algorithms. • If all must know, use a butterfly. • This reduces the algorithm from 2 log p steps to log p steps.
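
A minimal sketch of the butterfly (illustrative only; p is assumed to be a power of two and the reduction operation is addition): every node exchanges its partial result with node XOR 2^i at step i, so all nodes hold the final answer after log p steps.

    def butterfly_allreduce(values):
        p = len(values)                            # assumed to be a power of two
        vals = list(values)
        step = 1
        while step < p:
            # Each node exchanges with node XOR step and both combine the two partial sums.
            vals = [vals[node] + vals[node ^ step] for node in range(p)]
            step <<= 1
        return vals                                # every entry now holds the full reduction

    print(butterfly_allreduce([1, 2, 3, 4, 5, 6, 7, 8]))   # [36, 36, ..., 36] after log2(8) = 3 steps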

  18. How'd they do that? [Figure: three-dimensional hypercube with nodes numbered 0–7 and labelled in binary, 000 through 111.] • Broadcast and Reduction algorithms are based on Gray code numbering of nodes. • Consider a hypercube. Neighboring nodes differ by only one bit location.

  19. How'd they do that? • Start with the most significant bit. • Flip the bit and send to that processor. • Proceed with the next most significant bit. • Continue until all bits have been used.

  20. Procedure SingleNodeAccum(d, my_id, m, X, sum)
        for j = 0 to m-1
            sum[j] = X[j]
        mask = 0
        for i = 0 to d-1
            if ((my_id AND mask) == 0)
                if ((my_id AND 2^i) != 0)
                    msg_dest = my_id XOR 2^i
                    send(sum, msg_dest)
                else
                    msg_src = my_id XOR 2^i
                    recv(X, msg_src)            /* receive the partner's partial sum into X */
                    for j = 0 to m-1
                        sum[j] += X[j]
                endif
            endif
            mask = mask XOR 2^i
        endfor
      end
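
The same procedure can be sanity-checked with a sequential simulation (illustrative only; the function and variable names are mine). Node 0 ends up with the element-wise sum of all the X vectors:

    def single_node_accum(d, X):
        p = 1 << d
        partial = [list(x) for x in X]             # partial[node] is that node's running sum
        mask = 0
        for i in range(d):
            bit = 1 << i
            for node in range(p):
                # Only nodes whose low i bits are zero take part; the one with bit i clear
                # receives from its partner (bit i set) and adds the partner's partial sum.
                if (node & mask) == 0 and (node & bit) == 0:
                    sender = node ^ bit
                    partial[node] = [a + b for a, b in zip(partial[node], partial[sender])]
            mask |= bit
        return partial[0]

    print(single_node_accum(3, [[node, 1] for node in range(8)]))   # [28, 8]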

  21. All-to-all personalized Comm. • What about when everybody needs to communicate something different to everybody else? • matrix transpose with row-wise partitioning • Issues • Everybody Scatter? • Bottlenecks?

  22. All-to-All Personalized Communication [Figure: all-to-all personalized communication; every node sends a distinct message to every other node.]

  23. All-to-All Personalized Communication: Example • Consider the problem of transposing a matrix. • Each processor contains one full row of the matrix. • The transpose operation in this case is identical to an all-to-all personalized communication operation.

  24. All-to-All Personalized Communication: Example All-to-all personalized communication in transposing a 4 x 4 matrix using four processes.

  25. All-to-All Personalized Communication on a Ring • Each node sends all pieces of data as one consolidated message of size m(p – 1) to one of its neighbors. • Each node extracts the information meant for it from the data received, and forwards the remaining (p – 2) pieces of size m each to the next node. • The algorithm terminates in p – 1 steps. • The size of the message reduces by m at each step.
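
A sketch of the forwarding schedule (illustrative only, assuming the bundles travel clockwise around nodes labelled 0..p-1): each node delivers the piece addressed to its neighbour and forwards the rest, so the bundle shrinks by one piece per step and the exchange finishes after p – 1 steps.

    def ring_all_to_all_personalized(p):
        delivered = {i: {} for i in range(p)}
        # bundles[node] is the consolidated message the node forwards next; entries are (src, dst).
        bundles = {i: [(i, j) for j in range(p) if j != i] for i in range(p)}
        for step in range(p - 1):
            next_bundles = {i: [] for i in range(p)}
            for node in range(p):
                neighbour = (node + 1) % p
                for src, dst in bundles[node]:
                    if dst == neighbour:
                        delivered[neighbour][src] = True             # piece reached its destination
                    else:
                        next_bundles[neighbour].append((src, dst))   # forwarded onwards
            bundles = next_bundles
        return delivered

    out = ring_all_to_all_personalized(6)
    assert all(len(v) == 5 for v in out.values())   # every node got a piece from the other five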

  26. All-to-All Personalized Communication on a Ring All-to-all personalized communication on a six-node ring. The label of each message is of the form {x,y}, where x is the label of the node that originally owned the message, and y is the label of the node that is the final destination of the message. The label ({x1,y1}, {x2,y2},…, {xn,yn}, indicates a message that is formed by concatenating n individual messages.

  27. All-to-All Personalized Communication on a Ring: Cost • We have p – 1 steps in all. • In step i, the message size is m(p – i). • The total time is the sum over the p – 1 steps: T = (ts + twm(p – 1)) + (ts + twm(p – 2)) + … + (ts + twm) = (ts + twmp/2)(p – 1). • The tw term in this equation can be reduced by a factor of 2 by communicating messages in both directions.
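
A quick numeric check of that closed form with arbitrary sample values for ts, tw, and m (illustrative only):

    ts, tw, m, p = 10.0, 1.0, 4.0, 8
    stepwise = sum(ts + tw * m * (p - i) for i in range(1, p))   # p - 1 steps, step i sends m(p - i) words
    closed = (ts + tw * m * p / 2) * (p - 1)                     # (ts + twmp/2)(p - 1)
    assert abs(stepwise - closed) < 1e-9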

  28. All-to-All Personalized Communication on a Mesh • Each node first groups its p messages according to the columns of their destination nodes. • All-to-all personalized communication is performed independently in each row with clustered messages of size m√p. • Messages in each node are sorted again, this time according to the rows of their destination nodes. • All-to-all personalized communication is performed independently in each column with clustered messages of size m√p.
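
The two grouping phases can be sketched as follows (illustrative only, ignoring the per-step ring traffic and just tracking where each piece travels; a piece is a pair of its source and destination coordinates on a q × q mesh with q = √p):

    def mesh_all_to_all_personalized(q):
        nodes = [(r, c) for r in range(q) for c in range(q)]
        held = {n: [(n, d) for d in nodes] for n in nodes}       # every node has a piece for every node
        # Phase 1: within each row, move every piece to the node sitting in its destination column.
        after_rows = {n: [] for n in nodes}
        for (r, _), pieces in held.items():
            for src, (dr, dc) in pieces:
                after_rows[(r, dc)].append((src, (dr, dc)))
        # Phase 2: within each column, move every piece to its destination row.
        after_cols = {n: [] for n in nodes}
        for _, pieces in after_rows.items():
            for src, (dr, dc) in pieces:
                after_cols[(dr, dc)].append((src, (dr, dc)))
        return after_cols

    out = mesh_all_to_all_personalized(3)
    assert all(len(ps) == 9 and all(d == n for _, d in ps) for n, ps in out.items())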

  29. All-to-All Personalized Communication on a Mesh The distribution of messages at the beginning of each phase of all-to-all personalized communication on a 3 x 3 mesh. At the end of the second phase, node i has messages ({0,i},…,{8,i}), where 0 ≤ i ≤ 8. The groups of nodes communicating together in each phase are enclosed in dotted boundaries.

  30. All-to-All Personalized Communication on a Mesh: Cost • Time for the first phase is identical to that in a ring with √p processors, i.e., (ts + twmp/2)(√p – 1). • Time in the second phase is identical to the first phase. Therefore, the total time is twice this time, i.e., T = (2ts + twmp)(√p – 1). • It can be shown that the time for the local rearrangement of messages is much less than this communication time.

  31. All-to-All Personalized Communication on a Hypercube • Generalize the mesh algorithm to log p steps. • At any stage in all-to-all personalized communication, every node holds p packets of size m each. • While communicating in a particular dimension, every node sends p/2 of these packets (consolidated as one message). • A node must rearrange its messages locally before each of the log p communication steps.
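
A sketch of the log p-step routing (illustrative only; pieces are tracked as (source, destination) pairs): in step i a piece crosses dimension i exactly when its destination differs from its current node in bit i, so every node forwards p/2 pieces per step.

    def hypercube_all_to_all_personalized(d):
        p = 1 << d
        held = {node: [(node, dst) for dst in range(p)] for node in range(p)}
        for i in range(d):                         # one communication step per dimension
            bit = 1 << i
            new_held = {node: [] for node in range(p)}
            for node, pieces in held.items():
                for src, dst in pieces:
                    if (dst & bit) != (node & bit):
                        new_held[node ^ bit].append((src, dst))   # ships across dimension i
                    else:
                        new_held[node].append((src, dst))         # stays on this side
            held = new_held
        return held

    out = hypercube_all_to_all_personalized(3)
    assert all(all(dst == node for _, dst in pieces) for node, pieces in out.items())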

  32. All-to-All Personalized Communication on a Hypercube An all-to-all personalized communication algorithm on a three-dimensional hypercube.

  33. All-to-All Personalized Communication on a Hypercube • We have log p iterations and mp/2 words are communicated in each iteration. Therefore, the cost is: T = (ts + twmp/2) log p. • This is not optimal!

  34. All-to-All Personalized Communication on a Hypercube: Optimal Algorithm • Each node performs p – 1 communication steps, exchanging m words of data with a different node in every step. • A node must choose its communication partner in each step so that the hypercube links do not suffer congestion. • In the jth communication step, node i exchanges data with node (i XOR j). • In this schedule, all paths in every communication step are congestion-free, and none of the bidirectional links carry more than one message in the same direction.
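
The pairing property of this schedule is easy to verify (illustrative sketch; congestion-freedom on the actual links is argued on the slides, here only the pairing and coverage are checked):

    def xor_schedule(p):
        met = {i: set() for i in range(p)}
        for j in range(1, p):                      # the p - 1 communication steps
            partner = {i: i ^ j for i in range(p)}
            # XOR with a fixed j is an involution, so each step is a perfect pairing of the nodes.
            assert all(partner[partner[i]] == i for i in range(p))
            for i in range(p):
                met[i].add(partner[i])
        return met

    met = xor_schedule(8)
    assert all(met[i] == set(range(8)) - {i} for i in range(8))   # every pair meets exactly once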

  35. Seven steps in all-to-all personalized communication on an eight-node hypercube.

  36. All-to-All Personalized Communication on a Hypercube: Optimal Algorithm A procedure to perform all-to-all personalized communication on a d-dimensional hypercube. The message Mi,j initially resides on node i and is destined for node j.

  37. All-to-all personalized communication on a hypercube [Table: the communication partner of each node at each step; in step j, node i exchanges data with node i XOR j.]

  38. Basic Communication Algorithms • Many different ways • Goal: Perform communication in least time • Depends on architecture • Hypercube algorithms are often provably optimal • Even on fully connected architecture
