
Dependence Analysis and Dependence Graph (Chapter 9)


pchang




Presentation Transcript


  1. Dependence Analysis and Dependence Graph (Chapter 9)

  2. Objectives • To perform loop optimizations, parallelization, instruction scheduling, and data-cache optimization through dependence analysis. • To identify regular and repetitive control-flow patterns.

  3. Dependence Relations • Dependence analysis can be used in instruction scheduling and in data-cache-related analysis. • We may construct a graph, called the dependence graph, that represents the dependences present in a code fragment. In the case of instruction scheduling as applied to a basic block, the graph has no cycles in it, and it is called a DAG (directed acyclic graph).

  4. If statement S1 precedes S2 in their given execution order, we write S1 → S2. • A dependence between two statements in a program is a relation that constrains their execution order. • A control dependence is a constraint that arises from the control flow of the program, such as S2’s relationship to S3 and S4: S3 and S4 are executed only if the condition in S2 is not satisfied.

  5. If there is a control dependence between statements S1 and S2, we write S1 δc S2. • S2 δc S3 and S2 δc S4 in the following code:
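The code the slide refers to was not captured in the transcript. A minimal sketch consistent with the earlier description (S3 and S4 execute only when the condition in S2 is not satisfied; all statement names and values here are illustrative, not from the slides):

```python
# Hypothetical fragment: S3 and S4 are control-dependent on S2,
# i.e. S2 deltac S3 and S2 deltac S4, because they run only when
# the condition tested in S2 is false.
def fragment(a, b, c):
    d = a + b          # S1
    if d > c:          # S2: the controlling condition
        return d       # condition satisfied: S3, S4 never execute
    e = d * 2          # S3: control-dependent on S2
    f = e + 1          # S4: control-dependent on S2
    return f
```

Reordering S3 or S4 above the test in S2 would execute them unconditionally, which is exactly what the control dependence forbids.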

  6. Data Dependence • A data dependence is a constraint that arises from the flow of data between statements, • such as between S3 and S4: S3 sets the value of d and S4 uses it, • and S3 uses the value of e and S4 sets it. In both cases, reordering the statements could cause the code to produce incorrect results.

  7. There are four varieties of data dependence, as follows: true (flow) dependence, anti-dependence, output dependence, and input dependence.

  8. - Output dependence: S1: A = B + C; S2: A = D + E
     - Input dependence: S1: A = B + C; S2: D = B + E
     - Flow dependence: S1: A = B + C; S2: D = A + E
     - Anti-dependence: S1: A = B + C; S2: B = D + E
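All four kinds can be identified mechanically from the sets of variables each statement defines and uses. A hedged Python sketch of that classification (the function name and set representation are illustrative, not from the slides):

```python
def classify_dependence(defs1, uses1, defs2, uses2):
    """Classify how a later statement S2 depends on an earlier S1,
    given the variables each statement defines (writes) and uses (reads)."""
    kinds = []
    if defs1 & uses2:
        kinds.append("flow")    # S1 writes a value that S2 reads
    if uses1 & defs2:
        kinds.append("anti")    # S1 reads a value that S2 overwrites
    if defs1 & defs2:
        kinds.append("output")  # both statements write the same location
    if uses1 & uses2:
        kinds.append("input")   # both statements read the same location
    return kinds

# S1: A = B + C   S2: D = A + E  -> flow dependence through A
print(classify_dependence({"A"}, {"B", "C"}, {"D"}, {"A", "E"}))
```

Running the example prints `['flow']`, matching the flow-dependence pair in the slide.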

  9. Basic Block Dependence DAGs • This scheduling method is also called list scheduling. • It requires that we begin by constructing a dependence graph that represents the constraints on the possible schedules of the instructions in a block. • Because a basic block has no loops within it, its dependence graph is always a DAG, known as the dependence DAG for the block.

  10. The nodes in a dependence DAG represent machine instructions or low-level intermediate-code instructions, and its edges represent dependences between the instructions. • An edge from I1 to I2 may represent any of several kinds of dependence.
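A simple sketch of how such a DAG could be built for register operands, assuming each instruction is modeled as a destination plus a set of sources (this representation and the example block are assumptions for illustration, not the book's algorithm):

```python
def build_dependence_dag(instrs):
    """Build a dependence DAG for a basic block.
    Each instruction is (dest, sources); an edge (i, j) means
    instruction j must not be scheduled before instruction i."""
    edges = set()
    for j, (dst_j, srcs_j) in enumerate(instrs):
        for i in range(j):
            dst_i, srcs_i = instrs[i]
            if (dst_i in srcs_j          # flow dependence
                    or dst_j in srcs_i   # anti-dependence
                    or dst_i == dst_j):  # output dependence
                edges.add((i, j))
    return edges

block = [
    ("a", {"b", "c"}),   # I0: a = b + c
    ("d", {"a", "e"}),   # I1: d = a + e   (flow on a from I0)
    ("a", {"f"}),        # I2: a = f       (anti wrt I1, output wrt I0)
]
print(sorted(build_dependence_dag(block)))
```

Because the block has no loops, every edge points from an earlier to a later instruction, so the result is acyclic by construction.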

  11. Consider a load followed by a store that use different registers to address memory and for which we cannot determine whether the addressed locations might overlap. • Ex. Suppose an instruction reads from [r11](4) and the next writes to [r2+12](4). Unless we know that r2+12 and r11 point to different locations, we must assume that there is a dependence between the two instructions.
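The conservative rule above can be sketched as a may-overlap test on (base register, offset) address pairs; the representation and the same-base distance check are simplifying assumptions for illustration:

```python
def may_overlap(addr1, addr2, size=4):
    """Conservative memory disambiguation. Two size-byte accesses,
    each given as (base_register, offset), are provably independent
    only when they share a base register and their offset ranges do
    not intersect; with different bases we cannot prove anything,
    so we must assume they may overlap (keeping the dependence edge)."""
    (base1, off1), (base2, off2) = addr1, addr2
    if base1 == base2:
        return abs(off1 - off2) < size
    return True  # different base registers: assume a dependence

# Load from [r11](4) vs store to [r2+12](4): must assume a dependence.
print(may_overlap(("r11", 0), ("r2", 12)))  # True (conservative)
```

Only when both accesses use the same base, as in [r2](4) vs [r2+12](4), can the scheduler safely omit the edge.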

  12. Dependences in Loops • In studying data-cache optimization, our concern is almost entirely with data dependence, not control dependence. • We consider uses of subscripted variables in perfectly nested loops in HIR. • Each loop index runs from 1 to some value n by 1s, and only the innermost loop has statements other than for statements within it.

  13. The iteration space of the loop nest in the figure above is the k-dimensional polyhedron consisting of all the k-tuples of values of the loop indexes, called index vectors. Ex. [1..n1] X [1..n2] X … X [1..nk]
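The iteration space can be enumerated directly as the Cartesian product of the index ranges; a short sketch (function name and bounds are illustrative):

```python
from itertools import product

def iteration_space(*bounds):
    """Enumerate the index vectors of a perfect loop nest whose k
    indexes each run from 1 to n_i by 1s: [1..n1] x [1..n2] x ... x [1..nk]."""
    return list(product(*(range(1, n + 1) for n in bounds)))

# A 2-deep nest with n1 = 2, n2 = 3 has 2 * 3 = 6 index vectors:
print(iteration_space(2, 3))
# -> [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)]
```

Each tuple is one index vector, i.e. one point of the k-dimensional polyhedron, visited in the loop nest's execution order (rightmost index varying fastest).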
