Dependence Analysis and Dependence Graph (Chapter 9)
Objectives • To perform loop optimizations, parallelization, instruction scheduling, and data-cache optimization through dependence analysis. • To identify regular and repetitive control-flow patterns.
Dependence Relations • Dependence analysis can be used in instruction scheduling and in data-cache-related analysis. • We may construct a graph, called the dependence graph, that represents the dependences present in a code fragment. In the case of instruction scheduling as applied to a basic block, the graph has no cycles in it, so it is a DAG (directed acyclic graph).
If statement S1 precedes S2 in their given execution order, we write S1 → S2. • A dependence between two statements in a program is a relation that constrains their execution order. • A control dependence is a constraint that arises from the control flow of the program, such as S2’s relationship to S3 and S4: S3 and S4 are executed only if the condition in S2 is not satisfied.
If there is a control dependence between statements S1 and S2, we write S1 δc S2. • S2 δc S3 and S2 δc S4 in the following code:
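The code fragment the slide refers to is not reproduced in this transcript. A hypothetical Python fragment consistent with the relations described (S2's condition guards S3 and S4; S3 defines d and uses e; S4 uses d and redefines e; the initial values are made up for illustration) might look like:

```python
# Hypothetical reconstruction of the slide's four-statement fragment.
b, c, e = 2, 3, 4
a = b + c          # S1
if not (a > 10):   # S2: condition guards S3 and S4 (S2 δc S3, S2 δc S4)
    d = b + e      # S3: defines d, uses e
    e = d + 1      # S4: uses d (depends on S3), redefines e
```

With these values the guard is taken, so S3 and S4 execute; swapping S3 and S4, or hoisting them above S2, would change the program's result, which is exactly what the control and data dependences forbid.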
Data Dependence • A data dependence is a constraint that arises from the flow of data between statements, • such as between S3 and S4: S3 sets the value of d and S4 uses it; • also, S3 uses the value of e and S4 sets it. In both cases, reordering the statements could result in the code producing incorrect results.
There are four varieties of data dependence: • true (flow) dependence • antidependence • output dependence • input dependence
- flow dependence: S1: A = B + C; S2: D = A + E (S2 reads the A that S1 writes) - antidependence: S1: A = B + C; S2: B = D + E (S2 overwrites the B that S1 reads) - output dependence: S1: A = B + C; S2: A = D + E (both write A) - input dependence: S1: A = B + C; S2: D = B + E (both read B)
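The four cases above follow mechanically from the def/use sets of the two statements. A minimal sketch (the `classify` helper and its set-based representation of statements are illustrative, not from the text):

```python
def classify(defs1, uses1, defs2, uses2):
    """Classify the dependences of S2 on S1 from their def/use sets.

    Returns a set of kind names; a statement pair can exhibit
    several kinds at once.
    """
    kinds = set()
    if defs1 & uses2:
        kinds.add("flow")    # S1 writes a location S2 reads (true dep)
    if uses1 & defs2:
        kinds.add("anti")    # S1 reads a location S2 overwrites
    if defs1 & defs2:
        kinds.add("output")  # both statements write the same location
    if uses1 & uses2:
        kinds.add("input")   # both statements read the same location
    return kinds

# S1: A = B + C ; S2: D = A + E  ->  flow dependence via A
print(classify({"A"}, {"B", "C"}, {"D"}, {"A", "E"}))  # {'flow'}
```

Only the first three kinds constrain reordering; an input dependence records shared reads (useful for cache analysis) but never forbids an exchange.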
Basic-Block Dependence DAGs • The scheduling method built on these graphs is called list scheduling. • It requires that we begin by constructing a dependence graph that represents the constraints on the possible schedules of the instructions in a block. • Since a basic block has no loops within it, its dependence graph is always a DAG, known as the dependence DAG for the block.
The nodes in a dependence DAG represent machine instructions or low-level intermediate-code instructions, and its edges represent dependences between the instructions. • An edge from I1 to I2 may represent any of several kinds of dependences: it may be that I2 uses the result of I1 (a flow dependence), that I2 overwrites a location I1 reads (an antidependence), that both write the same location (an output dependence), or that the two instructions access memory locations we cannot prove distinct.
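Construction of the DAG can be sketched by comparing the def/use sets of every ordered pair of instructions in the block; the `(defs, uses)` tuple encoding and the helper name are assumptions for illustration (a real scheduler would also track latencies and memory references):

```python
from itertools import combinations

def build_dependence_dag(block):
    """Build the dependence DAG of a basic block.

    `block` is a list of (defs, uses) set pairs, one per instruction,
    in program order. An edge (i, j) means instruction i must be
    scheduled before instruction j.
    """
    edges = set()
    for i, j in combinations(range(len(block)), 2):
        defs_i, uses_i = block[i]
        defs_j, uses_j = block[j]
        # Flow, anti-, or output dependence all constrain the order.
        if defs_i & uses_j or uses_i & defs_j or defs_i & defs_j:
            edges.add((i, j))
    return edges

# S1: A = B + C ; S2: A = D + E ; S3: F = A + B
block = [({"A"}, {"B", "C"}), ({"A"}, {"D", "E"}), ({"F"}, {"A", "B"})]
print(build_dependence_dag(block))  # {(0, 1), (0, 2), (1, 2)}
```

Because `combinations` only yields pairs with i < j, every edge points forward in program order, so the result is acyclic by construction.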
Consider the case of a load followed by a store that use different registers to address memory, where we cannot determine whether the addressed locations might overlap. • Ex.: suppose an instruction reads from [r11](4) and the next writes to [r2+12](4). Unless we know that r2+12 and r11 point to different locations, we must assume that there is a dependence between the two instructions.
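This conservative rule can be sketched as follows; the `(base, offset, size)` address encoding and the `may_alias` helper are assumptions for illustration, not from the text:

```python
def may_alias(addr1, addr2):
    """Conservatively decide whether two memory operands may overlap.

    Each address is (base_register, offset, size_in_bytes). Offsets
    are only comparable when both accesses use the same base register;
    with different bases we must assume the locations may overlap.
    """
    base1, off1, size1 = addr1
    base2, off2, size2 = addr2
    if base1 != base2:
        return True  # unknown relationship between bases: assume overlap
    # Same base: the accesses are disjoint iff one ends before the other begins.
    return not (off1 + size1 <= off2 or off2 + size2 <= off1)

# [r11](4) vs [r2+12](4): different bases, so a dependence is assumed.
print(may_alias(("r11", 0, 4), ("r2", 12, 4)))  # True
```

With the same base register the question becomes a simple interval test, which is why compilers work hard to rewrite addresses into a common base plus constant offsets.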
Dependences in Loops • In studying data-cache optimization, our concern is almost entirely with data dependence, not control dependence. • We consider uses of subscripted variables in perfectly nested loops in HIR. • Each loop index runs from 1 to some value n by 1s, and only the innermost loop has statements other than for statements within it.
The iteration space of the loop nest in the figure above is the k-dimensional polyhedron consisting of all the k-tuples of values of the loop indexes, called index vectors: [1..n1] × [1..n2] × … × [1..nk].