
Partial Redundancy Elimination



Presentation Transcript


  1. Partial Redundancy Elimination

  2. Partial-Redundancy Elimination
  • Minimize the number of expression evaluations
  • By moving the points where an expression is evaluated and keeping the result in a temporary variable when necessary, we can often reduce the number of evaluations of the expression along many execution paths without increasing that number along any path

  3. The Sources of Redundancy
  • Common subexpressions
  • Loop-invariant expressions
  • Partially redundant expressions

  4. Common Subexpressions
  Before:                After:
  a = b + c              t = b + c; a = t
  b = 7                  b = 7
  d = b + c              t = b + c; d = t
  e = b + c              e = t

  5. Loop-Invariant Expressions
  [Diagram: a loop whose body computes a = b + c on every iteration; after hoisting, t = b + c is evaluated once before the loop and the body becomes a = t]

  6. Partially Redundant Expressions
  [Diagram: a diamond CFG in which one branch computes a = b + c and the join computes d = b + c; inserting t = b + c on the other branch makes the join's computation fully redundant, so it becomes d = t]

  7. Can All Redundancy Be Eliminated?
  • It is not possible to eliminate all redundant computations along every path unless we are allowed to change the control-flow graph by creating new blocks and duplicating blocks
  • A critical edge is an edge leading from a node with more than one successor to a node with more than one predecessor
  • Block duplication can be used to isolate the path where redundancy is found
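The critical-edge condition above can be checked mechanically. A minimal sketch in Python, with a made-up successor-map CFG (all block names are illustrative):

```python
# Detect critical edges in a CFG given as a successor map.
# The block names and edges below are made up for illustration.

succ = {
    "B1": ["B2", "B4"],   # B1 has two successors
    "B2": ["B3", "B4"],
    "B3": ["B4"],
    "B4": [],             # B4 has three predecessors
}

# Count the predecessors of each block.
npred = {b: 0 for b in succ}
for ss in succ.values():
    for s in ss:
        npred[s] += 1

# An edge u -> v is critical iff u has more than one successor
# and v has more than one predecessor.
critical = [(u, v) for u, ss in succ.items() for v in ss
            if len(ss) > 1 and npred[v] > 1]
```

No computation can be inserted on such an edge without first creating a new block on it, which is why the preprocessing step later splits them.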

  8. Block Creation
  [Diagram: before, a = b + c and a later d = b + c; after, a new block created on the critical edge computes t = b + c, so the computations become a = t and d = t]

  9. Block Duplication
  [Diagram: blocks B4 and B6 are duplicated as B'4 and B'6 to isolate the path along which d = b + c is redundant; on that path t = a captures the value of a = b + c and the duplicate uses d = t, while the other copy keeps d = b + c]

  10. The Lazy-Code-Motion Problem
  • Three properties are desirable in a partial-redundancy-elimination algorithm:
  • All redundant computations of expressions that can be eliminated without block duplication are eliminated
  • The optimized program does not perform any computation that is not in the original program execution
  • Expressions are computed at the latest possible time

  11. Lazy Code Motion
  • The values of expressions found to be redundant are usually held in registers until they are used
  • Computing a value as late as possible minimizes its lifetime, i.e., the duration between the time the value is defined and the time it is last used
  • Minimizing the lifetime of a value in turn minimizes the usage of a register

  12. Full Redundancy
  • An expression e in block B is redundant if, along all paths reaching B, e has been evaluated and the operands of e have not been redefined subsequently
  • Let S be the set of blocks, each containing expression e, that renders e in B redundant
  • The set of edges leaving the blocks in S must necessarily form a cutset which, if removed, disconnects B from the entry of the program
  • No operands of e are redefined along the paths that lead from the blocks in S to B

  13. Partial Redundancy
  • If an expression e in block B is only partially redundant, the lazy-code-motion algorithm attempts to render e fully redundant in B by placing additional copies of the expression in the flow graph
  • If the attempt is successful, the optimized flow graph will also have a set of blocks S, each containing e, whose outgoing edges are a cutset between the entry and B

  14. Anticipation of Expressions
  • To ensure that no extra operations are executed, copies of an expression must be placed only at program points where the expression is anticipated (very busy)
  • An expression e is anticipated at point p if all paths leading from p eventually compute the value of e using the values of operands that are available at p

  15. Earliest Places of Expressions
  • Consider an acyclic path B1 → B2 → … → Bn
  • Suppose e is evaluated only in blocks B1 and Bn, and the operands of e are not redefined in blocks along the path. There are incoming edges that join the path and outgoing edges that exit the path
  • Expression e is not anticipated at the entry of Bi iff there exists an outgoing edge leaving Bj, i ≤ j < n, that leads to an execution path that does not use the value of e

  16. Earliest Placement of Expressions
  • Anticipation limits how early an expression can be placed
  [Diagram: the path B1 → … → Bj → … → Bn with incoming and outgoing edges]

  17. Earliest Placement of Expressions
  • We can create a cutset that includes the edge Bi-1 → Bi and that renders e redundant in Bn if e is either available or anticipated at the entry of Bi
  • If e is anticipated but not available at the entry of Bi, we must place a copy of e along the incoming edge

  18. Earliest Placement of Expressions
  • The introduced expressions may themselves be partially redundant with other instances of the same expression
  • Such partial redundancy may be eliminated by moving these computations further up

  19. An Example
  [Diagram: an example CFG in which b + c is computed in B2 (a = b + c), B5 (d = b + c) and B6 (e = b + c); * marks introduced copies that are themselves partially redundant]

  20. Latest Placement of Expressions
  • We have a choice of where to place the copies of e, since there are usually several cutsets in the flow graph that satisfy all the requirements
  • Computation is introduced along the incoming edges to the path so the expression is computed as close to the use as possible without introducing redundancy

  21. Summary
  • Anticipation limits how early an expression can be placed
  • The earlier an expression is placed, the more redundancy can be removed
  • Among all solutions that eliminate the same redundancies, the one that computes the expressions the latest minimizes the lifetime of the register holding the values of the expressions involved

  22. The Lazy-Code-Motion Algorithm
  • Find all the expressions anticipated at each program point using a backward analysis
  • Find all the expressions available at each program point using a forward analysis
  • Find all the expressions postponable at each program point using a forward analysis
  • Find all the expressions used at each program point using a backward analysis

  23. Preprocessing
  • Assume that initially every statement is in a basic block of its own, and that we only introduce new computations of expressions at the beginning of blocks
  • To ensure that this simplification does not reduce the effectiveness of the technique, we insert a new block between the source and the destination of an edge if the destination has more than one predecessor
  • This also takes care of all critical edges
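The preprocessing step above can be sketched as follows, assuming a CFG encoded as a successor map; the synthetic block names S1, S2 are made up for illustration:

```python
# Insert a fresh block on every edge whose destination has more than one
# predecessor. This also splits every critical edge. The CFG encoding and
# all block names are illustrative.

succ = {"B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}

# Count the predecessors of each block.
npred = {b: 0 for b in succ}
for ss in succ.values():
    for s in ss:
        npred[s] += 1

new_succ, fresh = {}, 0
for u, ss in succ.items():
    out = []
    for v in ss:
        if npred[v] > 1:          # destination has several predecessors
            fresh += 1
            mid = f"S{fresh}"     # synthetic block, a legal insertion point
            new_succ[mid] = [v]
            out.append(mid)
        else:
            out.append(v)
    new_succ[u] = out
```

After the pass, every edge into a join block passes through a synthetic block where new computations can safely be placed.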

  24. e_use and e_kill Sets
  • e_useB is the set of expressions computed in B
  • e_killB is the set of expressions any of whose operands are defined in B
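A hypothetical per-block computation of these sets, assuming three-address statements encoded as (dest, op, arg1, arg2) tuples; for e_use only upward-exposed computations (those whose operands are not redefined earlier in the block) are kept, which is the refinement the anticipation pass needs:

```python
# Compute e_use and e_kill per block from three-address statements.
# The statements, block names and expression universe are made up.

blocks = {
    "B2": [("a", "+", "b", "c")],          # a = b + c
    "B4": [("b", "+", "b", "1"),           # b = b + 1
           ("d", "+", "b", "c")],          # d = b + c
}
exprs = {("+", "b", "c"), ("+", "b", "1")}  # expression universe

e_use, e_kill = {}, {}
for name, stmts in blocks.items():
    use, kill = set(), set()
    for dest, op, a1, a2 in stmts:
        e = (op, a1, a2)
        if e not in kill:      # upward-exposed: operands not yet redefined
            use.add(e)
        # The assignment to dest kills every expression using dest.
        kill |= {x for x in exprs if dest in (x[1], x[2])}
    e_use[name], e_kill[name] = use, kill
```

In B4, b + c is computed but b has been redefined first, so b + c lands in e_kill and not in e_use.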

  25. The Running Example
  [Diagram: the running example CFG with blocks B1–B11; c = 2, a = b + c, e = b + c and d = b + c appear in separate blocks]

  26. Anticipated Expressions
  Direction: backward
  Transfer function: fB(x) = e_useB ∪ (x − e_killB)
  Boundary: IN[EXIT] = ∅
  Meet (∧): ∩
  Equations: IN[B] = fB(OUT[B]); OUT[B] = ∩S ∈ succ(B) IN[S]
  Initialization: IN[B] = U
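A minimal iterative solver for this scheme, tracking the single expression "b+c" on a small made-up diamond CFG (block names, e_use and e_kill values are illustrative):

```python
# Backward "anticipated expressions" pass over a tiny diamond CFG.
# All names and sets below are made up for illustration.

succ = {"B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}
e_use = {"B1": set(), "B2": {"b+c"}, "B3": set(), "B4": {"b+c"}}
e_kill = {b: set() for b in succ}             # no operand is redefined
U = {"b+c"}                                   # universe of expressions

IN = {b: set(U) for b in succ}                # initialize to top = U
OUT = {b: set(U) for b in succ}
changed = True
while changed:
    changed = False
    for b in succ:
        out = set(U)
        for s in succ[b]:
            out &= IN[s]                      # meet: ∩ over successors
        if not succ[b]:
            out = set()                       # boundary: IN[EXIT] = ∅
        new_in = e_use[b] | (out - e_kill[b]) # transfer function
        if (out, new_in) != (OUT[b], IN[b]):
            OUT[b], IN[b], changed = out, new_in, True
```

With b + c computed in B2 and B4 and never killed, the expression is anticipated at the entry of every block, including B3 where it is not computed.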

  27. Anticipated Expressions
  [Diagram: the running example annotated with the anticipated-expressions solution]

  28. Available Expressions
  Direction: forward
  Transfer function: fB(x) = (anticipated[B].in ∪ x) − e_killB
  Boundary: OUT[ENTRY] = ∅
  Meet (∧): ∩
  Equations: OUT[B] = fB(IN[B]); IN[B] = ∩P ∈ pred(B) OUT[P]
  Initialization: OUT[B] = U
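The corresponding forward pass, again tracking "b+c" on the made-up diamond CFG; the anticipated[B].in values are taken as given from the previous pass (all values illustrative):

```python
# Forward "available expressions" pass (the modified availability used by
# lazy code motion). CFG, anticipated.in and e_kill values are made up.

pred = {"B1": [], "B2": ["B1"], "B3": ["B1"], "B4": ["B2", "B3"]}
anticipated_in = {b: {"b+c"} for b in pred}   # result of the backward pass
e_kill = {b: set() for b in pred}             # no operand is redefined
U = {"b+c"}

IN = {b: set(U) for b in pred}
OUT = {b: set(U) for b in pred}
changed = True
while changed:
    changed = False
    for b in pred:
        inn = set(U)
        for p in pred[b]:
            inn &= OUT[p]                     # meet: ∩ over predecessors
        if not pred[b]:
            inn = set()                       # boundary: OUT[ENTRY] = ∅
        new_out = (anticipated_in[b] | inn) - e_kill[b]   # transfer function
        if (inn, new_out) != (IN[b], OUT[b]):
            IN[b], OUT[b], changed = inn, new_out, True
```

Because the transfer function adds anticipated expressions, b + c counts as "available" everywhere after the entry, leaving the entry block as the only place where it is anticipated but not yet available.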

  29. Available Expressions
  [Diagram: the running example annotated with the available-expressions solution]

  30. Available Expressions
  [Diagram: the diamond example; a = b + c in one branch and d = b + c at the join]

  31. Earliest Placement
  • With the earliest placement strategy, the set of expressions placed at block B, i.e., earliest[B], is defined as the set of anticipated expressions that are not yet available:
  earliest[B] = anticipated[B].in − available[B].in
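On the diamond example, assuming the anticipated and available results from the previous passes (all values illustrative), the definition is a direct set difference:

```python
# earliest[B] = anticipated[B].in − available[B].in
# The per-block values below are made up to match the diamond example.

anticipated_in = {"B1": {"b+c"}, "B2": {"b+c"}, "B3": {"b+c"}, "B4": {"b+c"}}
available_in = {"B1": set(), "B2": {"b+c"}, "B3": {"b+c"}, "B4": {"b+c"}}

earliest = {b: anticipated_in[b] - available_in[b] for b in anticipated_in}
```

Here the earliest legal placement of b + c is the entry block, where it is anticipated but not yet available.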

  32. Earliest Placement
  [Diagram: the running example annotated with the earliest sets]

  33. Postponable Expressions
  [Diagram: part of the running example motivating postponement of b + c]

  34. Postponable Expressions
  Direction: forward
  Transfer function: fB(x) = (earliest[B] ∪ x) − e_useB
  Boundary: OUT[ENTRY] = ∅
  Meet (∧): ∩
  Equations: OUT[B] = fB(IN[B]); IN[B] = ∩P ∈ pred(B) OUT[P]
  Initialization: OUT[B] = U
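The same forward solver shape works here, with earliest[B] and e_use taken from the earlier passes on the made-up diamond CFG (all values illustrative):

```python
# Forward "postponable expressions" pass over the diamond example.
# earliest and e_use values are made up to match the earlier passes.

pred = {"B1": [], "B2": ["B1"], "B3": ["B1"], "B4": ["B2", "B3"]}
earliest = {"B1": {"b+c"}, "B2": set(), "B3": set(), "B4": set()}
e_use = {"B1": set(), "B2": {"b+c"}, "B3": set(), "B4": {"b+c"}}
U = {"b+c"}

IN = {b: set(U) for b in pred}
OUT = {b: set(U) for b in pred}
changed = True
while changed:
    changed = False
    for b in pred:
        inn = set(U)
        for p in pred[b]:
            inn &= OUT[p]                     # meet: ∩ over predecessors
        if not pred[b]:
            inn = set()                       # boundary: OUT[ENTRY] = ∅
        new_out = (earliest[b] | inn) - e_use[b]    # transfer function
        if (inn, new_out) != (IN[b], OUT[b]):
            IN[b], OUT[b], changed = inn, new_out, True
```

Postponement is blocked by the use in B2, so b + c can be pushed down the B3 branch but not past B2, and it is not postponable at the entry of B4.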

  35. Postponable Expressions
  [Diagram: the running example annotated with the postponable-expressions solution]

  36. Latest Placement
  • Expression e may be placed at the beginning of a block B only if e is in B’s earliest or postponable set upon entry
  • In addition, B is in the postponement frontier of e if one of the following holds:
  • e is in e_useB
  • e cannot be postponed to one of its successors, i.e., e is not in the earliest or postponable set upon entry to that successor

  37. Latest Placement
  latest[B] = (earliest[B] ∪ postponable[B].in) ∩ (e_useB ∪ ¬(∩S ∈ succ(B) (earliest[S] ∪ postponable[S].in)))
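The formula can be evaluated directly, assuming the earliest and postponable.in results from the previous passes on the made-up diamond example (all values illustrative):

```python
# Compute latest[B] on the diamond example. earliest, postponable.in and
# e_use values are made up to match the earlier passes.

succ = {"B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}
U = {"b+c"}
earliest = {"B1": {"b+c"}, "B2": set(), "B3": set(), "B4": set()}
postponable_in = {"B1": set(), "B2": {"b+c"}, "B3": {"b+c"}, "B4": set()}
e_use = {"B1": set(), "B2": {"b+c"}, "B3": set(), "B4": {"b+c"}}

latest = {}
for b in succ:
    ok = earliest[b] | postponable_in[b]   # e may be placed at entry of B
    # Intersection over successors of (earliest ∪ postponable.in).
    frontier = set(U)
    for s in succ[b]:
        frontier &= earliest[s] | postponable_in[s]
    # latest[B] = ok ∩ (e_use_B ∪ complement of frontier)
    latest[b] = ok & (e_use[b] | (U - frontier))
```

The result places b + c at B2 (where it is used) and at B3 (beyond which it cannot be postponed), matching the insertion points the slides' diamond example calls for.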

  38. Latest Placement
  [Diagram: the running example annotated with the latest sets]

  39. Used Expressions
  Direction: backward
  Transfer function: fB(x) = (e_useB ∪ x) − latest[B]
  Boundary: IN[EXIT] = ∅
  Meet (∧): ∪
  Equations: IN[B] = fB(OUT[B]); OUT[B] = ∪S ∈ succ(B) IN[S]
  Initialization: IN[B] = ∅
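A minimal sketch of this final backward pass on the diamond example; note the meet is union and initialization is ∅, unlike the earlier intersection analyses (e_use and latest values are taken from the earlier passes and are illustrative):

```python
# Backward "used expressions" pass over the diamond example.
# e_use and latest values are made up to match the earlier passes.

succ = {"B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}
e_use = {"B1": set(), "B2": {"b+c"}, "B3": set(), "B4": {"b+c"}}
latest = {"B1": set(), "B2": {"b+c"}, "B3": {"b+c"}, "B4": set()}

IN = {b: set() for b in succ}                 # initialize to bottom = ∅
OUT = {b: set() for b in succ}
changed = True
while changed:
    changed = False
    for b in succ:
        out = set()
        for s in succ[b]:
            out |= IN[s]                      # meet: ∪ over successors
        new_in = (e_use[b] | out) - latest[b] # transfer function
        if (out, new_in) != (OUT[b], IN[b]):
            OUT[b], IN[b], changed = out, new_in, True
```

used[B].out says whether the value of b + c is still needed past the exit of B; it is true at B2 and B3, which is exactly where latest proposes insertions.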

  40. Used Expressions
  [Diagram: the running example annotated with the used-expressions solution]

  41. Transformation
  • For each expression, say x + y, computed by the program, do the following:
  • Create a new temporary, say t, for x + y
  • For all blocks B such that x + y is in latest[B] ∩ used[B].out, add t = x + y at the beginning of B
  • For all blocks B such that x + y is in e_useB ∩ (¬latest[B] ∪ used[B].out), replace every original x + y by t
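The rewrite rules above can be sketched on the diamond example, with latest, used.out and e_use taken from the earlier passes; the statements, block names and the temporary name t are all illustrative:

```python
# Final rewrite step of lazy code motion on the diamond example.
# All per-block sets and statements below are made up for illustration.

latest = {"B1": set(), "B2": {"b+c"}, "B3": {"b+c"}, "B4": set()}
used_out = {"B1": set(), "B2": {"b+c"}, "B3": {"b+c"}, "B4": set()}
e_use = {"B1": set(), "B2": {"b+c"}, "B3": set(), "B4": {"b+c"}}
code = {"B1": [], "B2": ["a = b + c"], "B3": [], "B4": ["d = b + c"]}

for b in code:
    # Insert t = b + c at the start of B if b+c ∈ latest[B] ∩ used[B].out.
    if "b+c" in (latest[b] & used_out[b]):
        code[b].insert(0, "t = b + c")
    # Replace original uses if b+c ∈ e_use_B ∩ (¬latest[B] ∪ used[B].out).
    if "b+c" in e_use[b] and ("b+c" not in latest[b] or "b+c" in used_out[b]):
        code[b] = [s.replace("b + c", "t") if not s.startswith("t =") else s
                   for s in code[b]]
```

The result inserts t = b + c on both branches and rewrites the original computations to a = t and d = t, mirroring the final transformed flow graph on the last slide.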

  42. Transformation
  [Diagram: a straight-line example; a = b + c in B1, b redefined in B2, d = b + c in B3, e = b + c in B4]

  43. Transformation
  [Diagram: the running example after inserting t = b + c at the blocks chosen by latest[B] ∩ used[B].out]

  44. Transformation
  [Diagram: the final running example; the original computations are replaced, giving a = t, e = t, and d = t]
