
CENG 450 Computer Systems and Architecture Lecture 8

Understand ILP principles, issues, and solutions. Learn about in-order and out-of-order execution. Dive into static and dynamic scheduling. Discover pipeline scheduling benefits and techniques. Explore loop unrolling for enhancing performance.







  1. CENG 450 Computer Systems and Architecture, Lecture 8 Amirali Baniasadi amirali@ece.uvic.ca

  2. This Lecture • ILP • Scheduling

  3. What Is ILP? • Principle: Many instructions in the code do not depend on each other • Result: It is possible to execute them in parallel • ILP: The potential overlap among instructions (so they can be evaluated in parallel) • Issues: • Building compilers to analyze the code • Building special/smarter hardware to handle the code • Goal: Increase the amount of parallelism exploited among instructions and thereby get better results out of pipelining

  4. What Is ILP? An example: • CODE A: LD R1, (R2)100 / ADD R4, R1 / SUB R5, R1 / CMP R1, R2 / ADD R3, R1 • CODE B: LD R1, (R2)100 / ADD R4, R1 / SUB R5, R4 / SW R5, (R2)100 / LD R1, (R2)100 • Code A: Possible to execute 4 instructions in parallel (each depends only on the LD result) • Code B: Can't execute more than one instruction per cycle (each instruction depends on the previous one) • Code A has higher ILP
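One way to make the comparison concrete is to measure the longest RAW (read-after-write) dependence chain through registers. A minimal sketch, assuming the slide's two-operand semantics (e.g., ADD R4, R1 means R4 = R4 + R1) and ignoring memory dependences:

```python
def critical_path(instrs):
    """instrs: list of (dest, srcs); dest is None for CMP/SW.
    Returns the length of the longest RAW dependence chain."""
    depth = {}              # register -> chain depth of its latest producer
    longest = 0
    for dest, srcs in instrs:
        d = 1 + max((depth.get(s, 0) for s in srcs), default=0)
        if dest is not None:
            depth[dest] = d
        longest = max(longest, d)
    return longest

# Code A: every later instruction depends only on the LD result in R1
code_a = [("R1", ["R2"]), ("R4", ["R4", "R1"]), ("R5", ["R5", "R1"]),
          (None, ["R1", "R2"]), ("R3", ["R3", "R1"])]
# Code B: each instruction feeds the next, forming a serial chain
code_b = [("R1", ["R2"]), ("R4", ["R4", "R1"]), ("R5", ["R5", "R4"]),
          (None, ["R5", "R2"]), ("R1", ["R2"])]
```

Code A's chain has length 2 (the LD plus any one consumer, so the four consumers can run in parallel), while Code B's has length 4: a machine can exploit far less overlap in B no matter how wide it is.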

  5. Out of Order Execution • Code: A: LD R1, (R2) / B: ADD R3, R4 / C: ADD R3, R5 / D: CMP R3, R1 • In-order execution: A, B, C, D • Out-of-order execution: B, C, A, D • Programmer: Instructions execute in-order • Processor: Instructions may execute in any order if the results remain the same at the end

  6. Scheduling • Scheduling: re-arranging instructions to maximize performance • Requires knowledge about the structure of the processor • Static Scheduling: done by the compiler • Example: for (i=1000; i>0; i--) x[i] = x[i] + s; • Dynamic Scheduling: done by hardware • Dominates the server and desktop markets (Pentium III and 4, MIPS R10000/12000, UltraSPARC III, PowerPC 603, etc.)

  7. Pipeline Scheduling • The compiler schedules (moves) instructions to reduce stalls • Ex: code sequence a = b + c; d = e – f • Before scheduling: lw Rb, b / lw Rc, c / add Ra, Rb, Rc //stall / sw a, Ra / lw Re, e / lw Rf, f / sub Rd, Re, Rf //stall / sw d, Rd • After scheduling: lw Rb, b / lw Rc, c / lw Re, e / add Ra, Rb, Rc / lw Rf, f / sw a, Ra / sub Rd, Re, Rf / sw d, Rd
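The effect of this reordering can be checked mechanically. A small sketch that counts load-use stall cycles for the two sequences, assuming a hypothetical 5-stage pipeline with full forwarding and a 1-cycle load-use delay (the instruction tuples mirror the slide's code):

```python
LOAD_USE_LATENCY = 1   # extra cycles a consumer must wait after a load

def count_stalls(instrs):
    """instrs: list of (op, dest, srcs). Counts load-use stall cycles,
    assuming full forwarding for ALU results."""
    ready = {}           # register -> earliest cycle its value is usable
    cycle = 0
    stalls = 0
    for op, dest, srcs in instrs:
        start = max([cycle] + [ready.get(s, 0) for s in srcs])
        stalls += start - cycle          # cycles lost waiting for operands
        cycle = start + 1
        if dest:
            ready[dest] = cycle + (LOAD_USE_LATENCY if op == "lw" else 0)
    return stalls

before = [("lw", "Rb", []), ("lw", "Rc", []), ("add", "Ra", ["Rb", "Rc"]),
          ("sw", None, ["Ra"]), ("lw", "Re", []), ("lw", "Rf", []),
          ("sub", "Rd", ["Re", "Rf"]), ("sw", None, ["Rd"])]
after  = [("lw", "Rb", []), ("lw", "Rc", []), ("lw", "Re", []),
          ("add", "Ra", ["Rb", "Rc"]), ("lw", "Rf", []), ("sw", None, ["Ra"]),
          ("sub", "Rd", ["Re", "Rf"]), ("sw", None, ["Rd"])]
```

The unscheduled sequence pays two stalls (one before each of add and sub, as marked on the slide); the scheduled sequence pays none, because an independent load now fills each load-use gap.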

  8. Basic Pipeline Scheduling • To avoid a pipeline stall: • A dependent instruction must be separated from the source instruction by a distance in clock cycles equal to the pipeline latency of the source • The compiler's ability depends on: • Amount of ILP available in the program • Latencies of the functional units in the pipeline • Pipeline CPI = Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls

  9. Pipeline Scheduling & Loop Unrolling • Basic Block • A set of instructions between entry points and between branches; a basic block has only one entry and one exit • Typically ~4 to 7 instructions • Amount of overlap << 4 to 7 instructions • To obtain substantial performance enhancements, we must exploit ILP across multiple basic blocks • Loop Level Parallelism • Parallelism that exists within a loop: limited opportunity • Parallelism can cross loop iterations! • Techniques exist to convert loop-level parallelism to instruction-level parallelism • Loop Unrolling: the compiler's or the hardware's ability to exploit the parallelism inherent in the loop

  10. Assumptions • FP latencies (source instruction → dependent instruction: latency in clock cycles): • FP ALU op → another FP ALU op: 3 • FP ALU op → store double: 2 • Load double → FP ALU op: 1 • Load double → store double: 0 • Integer load latency: 1; integer ALU operation latency: 0 • Five-stage integer pipeline • Branches have a delay of one clock cycle • ID stage: comparisons done, decisions made, and PC loaded • No structural hazards • Functional units are fully pipelined or replicated (as many times as the pipeline depth)

  11. Simple Loop & Assembler Equivalent • for (i=1000; i>0; i--) x[i] = x[i] + s; • x[i] and s are double/floating point type • R1 initially holds the address of the array element with the highest address • F2 contains the scalar value s • Register R2 is pre-computed so that 8(R2) is the last element to operate on • Loop: LD F0, 0(R1) ;F0=array element • ADDD F4, F0, F2 ;add scalar in F2 • SD F4, 0(R1) ;store result • SUBI R1, R1, #8 ;decrement pointer 8 bytes (DW) • BNE R1, R2, Loop ;branch if R1!=R2

  12. Where are the stalls? • Unscheduled: Loop: LD F0, 0(R1) / stall / ADDD F4, F0, F2 / stall / stall / SD F4, 0(R1) / SUBI R1, R1, #8 / stall / BNE R1, R2, Loop / stall (10 clock cycles). Can we minimize? • Scheduled: Loop: LD F0, 0(R1) / SUBI R1, R1, #8 / ADDD F4, F0, F2 / stall / BNE R1, R2, Loop / SD F4, 8(R1) (6 clock cycles: 3 cycles of actual work, 3 cycles of overhead). Can we minimize further? • (Latencies as in the table on slide 10.)
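The two cycle counts can be reproduced with a back-of-the-envelope model driven by the latency table of slide 10. This is a sketch, not a full pipeline model: the branch-delay slot is modeled as a trailing instruction (a nop when unfilled, the moved SD when filled), and a 1-cycle SUBI→BNE interlock is assumed, since the slide's 10-cycle total implies one.

```python
# Extra wait cycles between a producer and a dependent consumer.
LAT = {("load", "fp_alu"): 1, ("load", "store"): 0,
       ("fp_alu", "fp_alu"): 3, ("fp_alu", "store"): 2,
       ("int_alu", "branch"): 1}   # assumed SUBI->BNE interlock

def loop_cycles(instrs):
    """instrs: list of (kind, dest, srcs). Returns issue cycles for one
    loop pass, counting stalls forced by producer->consumer latencies."""
    ready = {}                      # reg -> (producer kind, ready cycle)
    cycle = 0
    for kind, dest, srcs in instrs:
        start = cycle
        for s in srcs:
            if s in ready:
                pk, pc = ready[s]
                start = max(start, pc + LAT.get((pk, kind), 0))
        cycle = start + 1
        if dest is not None:
            ready[dest] = (kind, cycle)
    return cycle

unscheduled = [("load", "F0", []), ("fp_alu", "F4", ["F0"]),
               ("store", None, ["F4"]), ("int_alu", "R1", ["R1"]),
               ("branch", None, ["R1", "R2"]),
               ("nop", None, [])]           # unfilled branch-delay slot
scheduled = [("load", "F0", []), ("int_alu", "R1", ["R1"]),
             ("fp_alu", "F4", ["F0"]), ("branch", None, ["R1", "R2"]),
             ("store", None, ["F4"])]       # SD fills the delay slot
```

The model places the scheduled version's single stall between BNE and SD rather than just before BNE, but the totals match the slide: 10 cycles unscheduled, 6 scheduled.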

  13. Loop Unrolling • Step 1: make four copies of the loop body, each still with its own SUBI and BNE: LD F0, 0(R1) / ADDD F4, F0, F2 / SD F4, 0(R1) / SUBI R1, R1, #8 / BNE R1, R2, Loop (repeated four times) • Step 2: eliminate the intermediate decrements and branches by adjusting the offsets: LD F0, 0(R1) / ADDD F4, F0, F2 / SD F4, 0(R1) / LD F0, -8(R1) / ADDD F4, F0, F2 / SD F4, -8(R1) / LD F0, -16(R1) / ADDD F4, F0, F2 / SD F4, -16(R1) / LD F0, -24(R1) / ADDD F4, F0, F2 / SD F4, -24(R1) / SUBI R1, R1, #32 / BNE R1, R2, Loop • Step 3: rename registers to remove name dependences, giving the four-iteration code: Loop: LD F0, 0(R1) / ADDD F4, F0, F2 / SD F4, 0(R1) / LD F6, -8(R1) / ADDD F8, F6, F2 / SD F8, -8(R1) / LD F10, -16(R1) / ADDD F12, F10, F2 / SD F12, -16(R1) / LD F14, -24(R1) / ADDD F16, F14, F2 / SD F16, -24(R1) / SUBI R1, R1, #32 / BNE R1, R2, Loop • Assumption: R1 is initially a multiple of 32, i.e., the number of loop iterations is a multiple of 4
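In high-level terms the transformation looks like this. A sketch in Python: the rolled version mirrors "for (i=1000; i>0; i--) x[i] = x[i] + s", and the unrolled version does four elements per pass with a single decrement, under the slide's assumption that the iteration count is a multiple of 4 (function names are illustrative).

```python
def add_scalar(x, s):
    """Rolled loop: one element and one decrement/branch per pass."""
    for i in range(len(x) - 1, -1, -1):
        x[i] = x[i] + s

def add_scalar_unrolled4(x, s):
    """Unrolled by 4: four body copies, one 'SUBI/BNE' (i -= 4) per pass."""
    assert len(x) % 4 == 0          # the slide's assumption
    i = len(x) - 1
    while i >= 0:
        x[i] += s
        x[i - 1] += s
        x[i - 2] += s
        x[i - 3] += s
        i -= 4
```

Both produce identical results; the unrolled form simply pays the loop-maintenance cost once per four elements instead of once per element.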

  14. Loop Unroll & Schedule • Unrolled, unscheduled: Loop: LD F0, 0(R1) / stall / ADDD F4, F0, F2 / stall / stall / SD F4, 0(R1) / LD F6, -8(R1) / stall / ADDD F8, F6, F2 / stall / stall / SD F8, -8(R1) / LD F10, -16(R1) / stall / ADDD F12, F10, F2 / stall / stall / SD F12, -16(R1) / LD F14, -24(R1) / stall / ADDD F16, F14, F2 / stall / stall / SD F16, -24(R1) / SUBI R1, R1, #32 / stall / BNE R1, R2, Loop / stall (28 clock cycles, or 7 per iteration). Can we minimize further? • Unrolled and scheduled: Loop: LD F0, 0(R1) / LD F6, -8(R1) / LD F10, -16(R1) / LD F14, -24(R1) / ADDD F4, F0, F2 / ADDD F8, F6, F2 / ADDD F12, F10, F2 / ADDD F16, F14, F2 / SD F4, 0(R1) / SD F8, -8(R1) / SD F12, -16(R1) / SUBI R1, R1, #32 / BNE R1, R2, Loop / SD F16, 8(R1) • No stalls! 14 clock cycles, or 3.5 per iteration.

  15. Summary • Original loop: 10 cycles per iteration • After scheduling: 6 cycles per iteration • After unrolling (4 copies): 7 cycles per iteration • After unrolling and scheduling: 3.5 cycles per iteration (no stalls)

  16. Limits to Gains of Loop Unrolling • Decreasing benefit • Each additional unroll amortizes a smaller amount of overhead • Example just considered: unrolled the loop 4 times with no stall cycles; of the 14 cycles, 2 were loop overhead • If unrolled 8 times, the overhead is reduced from 1/2 cycle per iteration to 1/4 • Code size limitations • Memory is at a premium • Larger code size can reduce the instruction cache hit rate • Shortfall in registers (register pressure): increasing ILP increases the number of live values, and it may not be possible to allocate all the live values to registers • Compiler limitations: significant increase in complexity
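The "decreasing benefit" point is just arithmetic. Using the example's numbers (3 cycles of real work per original body, 2 cycles of SUBI/BNE overhead per unrolled pass, and assuming perfect scheduling with no stalls):

```python
def cycles_per_iteration(k, work=3, overhead=2):
    """Per-iteration cost of a loop unrolled k times: k bodies of
    `work` cycles plus one `overhead`-cycle loop tail, divided by k."""
    return (k * work + overhead) / k
```

Unrolling 4 times gives 14/4 = 3.5 cycles per iteration; unrolling 8 times gives 26/8 = 3.25. Doubling the unroll factor (and the code size) buys only a quarter cycle per iteration, which is why unrolling is pushed only so far.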

  17. What if the upper bound of the loop is unknown? • Suppose • The upper bound of the loop is n • We unroll the loop to make k copies of the body • Solution: Generate a pair of consecutive loops • First loop: body same as the original loop, executed (n mod k) times • Second loop: unrolled body (k copies of the original), iterated floor(n/k) times • For large values of n, most of the execution time is spent in the unrolled loop body
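The two-loop solution can be sketched in Python with k fixed at 4 (this technique is often called strip mining; the function name is illustrative):

```python
def strip_mined4(x, s):
    """Add s to every element: a scalar prologue runs n mod 4 times,
    then the 4-wide unrolled body runs n // 4 times."""
    n = len(x)
    for i in range(n % 4):          # first loop: n mod k iterations
        x[i] += s
    for i in range(n % 4, n, 4):    # second loop: k copies per pass
        x[i] += s
        x[i + 1] += s
        x[i + 2] += s
        x[i + 3] += s
```

For n = 10 the prologue handles 2 elements and the unrolled loop handles the remaining 8 in two passes; as n grows, nearly all time is spent in the unrolled body.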

  18. Summary: Tricks of High Performance Processors • Out-of-order scheduling: to tolerate RAW hazard latency • Determine that the loads and stores can be exchanged, as loads and stores from different iterations are independent • This requires analyzing the memory addresses and finding that they do not refer to the same address • Find that it is OK to move the SD after the SUBI and BNE, and adjust the SD offset • Loop unrolling: increase the scheduling scope for more latency tolerance • Find that loop unrolling is useful by finding that loop iterations are independent, except for the loop maintenance code • Eliminate the extra tests and branches and adjust the loop maintenance code • Register renaming: remove WAR/WAW violations due to scheduling • Use different registers to avoid unnecessary constraints that would be forced by using the same registers for different computations • Summary: schedule the code while preserving any dependences needed

  19. Data Dependence • Data dependence • Indicates the possibility of a hazard • Determines the order in which results must be calculated • Sets upper bound on how much parallelism can be exploited • But, actual hazard & length of any stall is determined by pipeline • Dependence avoidance • Maintain the dependence but avoid hazard: Scheduling • Eliminate dependence by transforming the code

  20. Data Dependencies • 1 Loop: LD F0, 0(R1) • 2 ADDD F4, F0, F2 • 3 SUBI R1, R1, 8 • 4 BNE R1, R2, Loop ;delayed branch • 5 SD F4, 8(R1) ;offset altered when moved past SUBI

  21. Name Dependencies • Two instructions use same name (register or memory location) but don’t exchange data • Anti-dependence (WAR if a hazard for HW) • Instruction j writes a register or memory location that instruction i reads from and instruction i is executed first • Output dependence (WAW if a hazard for HW) • Instruction i and instruction j write the same register or memory location; ordering between instructions must be preserved • How to remove name dependencies? • They are not true dependencies
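A sketch of how the three dependence classes can be detected from register names alone: RAW (true), WAR (anti), and WAW (output). Memory operands are left out here, since, as the lecture notes, memory disambiguation is much harder. The example instructions mirror the first two unrolled copies that reuse F0.

```python
def classify(instrs):
    """instrs: list of (name, dest, srcs). Returns dependence tuples
    (kind, earlier, later) for every ordered instruction pair."""
    found = []
    for i, (ni, di, si) in enumerate(instrs):
        for nj, dj, sj in instrs[i + 1:]:
            if di is not None and di in sj:
                found.append(("RAW", ni, nj))   # true dependence
            if dj is not None and dj in si:
                found.append(("WAR", ni, nj))   # anti-dependence
            if di is not None and di == dj:
                found.append(("WAW", ni, nj))   # output dependence
    return found

# First unrolled copy, plus the next copy's load reusing F0:
pair = [("LD#1", "F0", ["R1"]), ("ADDD#1", "F4", ["F0", "F2"]),
        ("LD#2", "F0", ["R1"])]
```

Only the RAW edge carries data; the WAR and WAW edges exist purely because F0 was reused, and renaming LD#2's destination (to F6, say) makes them disappear.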

  22. Register Renaming • Before renaming (WAW and WAR hazards on F0 and F4): 1 Loop: LD F0, 0(R1) / 2 ADDD F4, F0, F2 / 3 SD F4, 0(R1) / 4 LD F0, -8(R1) / 5 ADDD F4, F0, F2 / 6 SD F4, -8(R1) / 7 LD F0, -16(R1) / 8 ADDD F4, F0, F2 / 9 SD F4, -16(R1) / 10 LD F0, -24(R1) / 11 ADDD F4, F0, F2 / 12 SD F4, -24(R1) / 13 SUBI R1, R1, #32 / 14 BNE R1, R2, LOOP • After renaming: 1 Loop: LD F0, 0(R1) / 2 ADDD F4, F0, F2 / 3 SD F4, 0(R1) / 4 LD F6, -8(R1) / 5 ADDD F8, F6, F2 / 6 SD F8, -8(R1) / 7 LD F10, -16(R1) / 8 ADDD F12, F10, F2 / 9 SD F12, -16(R1) / 10 LD F14, -24(R1) / 11 ADDD F16, F14, F2 / 12 SD F16, -24(R1) / 13 SUBI R1, R1, #32 / 14 BNE R1, R2, LOOP • No data is passed through F0 between copies, but F0 can't be reused in instruction 4 without renaming • Name dependencies are hard for memory accesses • Does 100(R4) = 20(R6)? From different loop iterations, does 20(R6) = 20(R6)? • Our example required the compiler to know that if R1 doesn't change, then 0(R1) ≠ -8(R1) ≠ -16(R1) ≠ -24(R1); there were no dependencies between some loads and stores, so they could be moved around

  23. Control Dependencies • Example • if p1 {S1;}; • if p2 {S2;}; • S1 is control dependent on p1; S2 is control dependent on p2 but not on p1 • Two constraints • An instruction that is control dependent on a branch cannot be moved before the branch, so that its execution is no longer controlled by the branch • An instruction that is not control dependent on a branch cannot be moved after the branch, so that its execution becomes controlled by the branch • Control dependencies are relaxed to get parallelism (e.g., in loop unrolling)

  24. Dynamic Scheduling • Dynamic Scheduling: hardware rearranges the order of instruction execution to reduce stalls • Disadvantage • The hardware is much more complex • Key idea • Instructions execute in parallel (use all available execution units) • Allow instructions behind a stall to proceed • Example • DIVD F0, F2, F4 • ADDD F10, F0, F8 • SUBD F12, F8, F14 • Out-of-order execution => out-of-order completion
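Why can SUBD slip past the stalled ADDD in this example? Because it has no register dependence of any kind on DIVD or ADDD. A minimal pairwise check (registers only, instruction tuples as (dest, srcs)):

```python
def independent(a, b):
    """True if later instruction b has no RAW, WAR, or WAW dependence
    on earlier instruction a (register names only)."""
    (da, sa), (db, sb) = a, b
    return da not in sb and db not in sa and da != db

divd = ("F0", ["F2", "F4"])     # long-latency divide
addd = ("F10", ["F0", "F8"])    # RAW on F0: must wait for the divide
subd = ("F12", ["F8", "F14"])   # touches neither F0 nor F10
```

ADDD is dependent on DIVD (RAW on F0) and must wait, but SUBD is independent of both, so dynamically scheduled hardware can execute and complete it out of order.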

  25. Overview • In-order pipeline • 5 interlocked stages: IF, ID, EX, MEM, WB • Structural hazard: maximum of 1 instruction per stage • Unless a stage is replicated (FP & integer EX) or "idle" (WB for stores) • Out-of-order pipeline • How does one instruction pass another without "killing" it? • Remember: only one instruction per stage per cycle • Must "buffer" instructions

  26. Instruction Buffer • Trick: instruction buffer (many names for this buffer) • Accumulate decoded instructions in the buffer • The buffer sends instructions down the rest of the pipe out-of-order • (Pipeline: IF, ID1, ID2, EX, MEM, WB, with the instruction buffer after decode)

  27. Scoreboard State/Steps • Stages: IF, IS (issue), RO (read operands), EX, WB; the original ID stage is split into IS and RO • There is some confusion in the community about which stage is which • (Figure: the scoreboard's control/status lines connect the instruction buffer, registers, data bus, and functional units)

  28. Dynamic Scheduling: Scoreboard • Out-of-order execution divides the ID stage: • 1. Issue: decode instructions, check for structural hazards • 2. Read Operands: wait until no data hazards, then read operands • Scoreboards allow an instruction to execute whenever 1 & 2 hold, without waiting for prior instructions • A scoreboard is a "data structure" that provides the information necessary for all pieces of the processor to work together • Centralized control scheme • No bypassing • No elimination of WAR/WAW hazards • We will use in-order issue, out-of-order execution, out-of-order commit (also called completion) • First used in the CDC 6600

  29. Stages of Scoreboard Control • Issue—decode instructions & check for structural hazards (ID1) • If a functional unit for the instruction is free and no other active instruction has the same destination register (WAW), the scoreboard issues the instruction to the functional unit and updates its internal data structure. • If a structural or WAW hazard exists, then the instruction issue stalls, and no further instructions will issue until these hazards are cleared.

  30. Stages of Scoreboard Control • Read Operands—wait until no data hazards, then read operands from the registers (ID2) • A source operand is available if no earlier issued active instruction is going to write it, i.e., if no currently active functional unit is going to write the register containing the operand • When the source operands are available, the scoreboard tells the functional unit to proceed to read the operands from the registers and begin execution • The scoreboard resolves RAW hazards dynamically in this step, and instructions may be sent into execution out of order

  31. Stages of Scoreboard Control • Execution—operate on operands (EX) • The functional unit begins execution upon receiving operands. When the result is ready, it notifies the scoreboard that it has completed execution. • Write result—finish execution (WB) • Once the scoreboard is aware that the functional unit has completed execution, the scoreboard checks for WAR hazards. If none, it writes results. If WAR, then it stalls the instruction. • Example: • DIVD F0, F2, F4 • ADDD F10, F0, F8 • SUBD F8, F8, F14 • Scoreboard would stall SUBD until ADDD reads operands

  32. Scoreboard Data Structures • Instruction status • Which of 4 steps the instruction is in • Functional unit status • Busy Whether the unit is busy or not • Op Operation to perform in the unit (e.g., + or –) • Fi Destination register • Fj, Fk Source-register numbers • Qj, Qk Functional units producing source registers Fj, Fk • Rj, Rk ready bits for Fj, Fk • Register result status • Indicates which functional unit (if any) will write each register. • Blank when no pending instructions will write that register
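The three data structures can be sketched as Python records, together with the issue-stage checks from slide 29 (structural hazard and WAW). Field names follow the slide; the unit names and the tiny driver at the end are illustrative assumptions, not the full CDC 6600 configuration.

```python
from dataclasses import dataclass, field

@dataclass
class FUStatus:
    """One functional unit's scoreboard entry."""
    busy: bool = False
    op: str = ""
    Fi: str = ""            # destination register
    Fj: str = ""            # source registers
    Fk: str = ""
    Qj: str = ""            # units producing Fj/Fk ("" = no producer)
    Qk: str = ""
    Rj: bool = False        # ready bits for Fj/Fk
    Rk: bool = False

@dataclass
class Scoreboard:
    fu: dict = field(default_factory=dict)      # unit name -> FUStatus
    result: dict = field(default_factory=dict)  # register -> writing unit

    def can_issue(self, unit, dest):
        # Issue stalls on a structural hazard (unit busy) or a WAW
        # (another active instruction will write the same register).
        return not self.fu[unit].busy and dest not in self.result

    def issue(self, unit, op, dest, src1, src2):
        u = self.fu[unit]
        u.busy, u.op, u.Fi, u.Fj, u.Fk = True, op, dest, src1, src2
        u.Qj = self.result.get(src1, "")        # pending producers, if any
        u.Qk = self.result.get(src2, "")
        u.Rj, u.Rk = u.Qj == "", u.Qk == ""
        self.result[dest] = unit                # register result status

sb = Scoreboard(fu={"Integer": FUStatus(), "Mult1": FUStatus(),
                    "Add": FUStatus()})
sb.issue("Integer", "LD", "F6", "R2", "")
```

After LD F6 issues to the integer unit, a second load cannot issue (structural hazard), an instruction writing F6 would stall on WAW, and a later SUBD reading F6 records Qj = "Integer" with Rj false until the load writes back.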

  33. Scoreboard Example • Code: LD F6, 34(R2) / LD F2, 45(R3) / MULT F0, F2, F4 / SUBD F8, F6, F2 / DIVD F10, F0, F6 / ADDD F6, F8, F2 • What are the hazards in this code? • Latencies (clock cycles): LD: 1 / MULT: 10 / DIVD: 40 / ADDD, SUBD: 2

  34. Scoreboard Example

  35. Scoreboard Example: Cycle 1 Issue LD #1 Shows in which cycle the operation occurred.

  36. Scoreboard Example: Cycle 2 LD #2 can’t issue since integer unit is busy. MULT can’t issue because we require in-order issue.

  37. Scoreboard Example: Cycle 3

  38. Scoreboard Example: Cycle 4

  39. Scoreboard Example: Cycle 5 Issue LD #2 since integer unit is now free

  40. Scoreboard Example: Cycle 6 Issue MULT

  41. Scoreboard Example: Cycle 7 MULT can’t read its operands (F2) because LD #2 hasn’t finished

  42. Scoreboard Example: Cycle 8a DIVD issues. MULT and SUBD both waiting for F2

  43. Scoreboard Example: Cycle 8b LD #2 writes F2

  44. Scoreboard Example: Cycle 9 Now MULT and SUBD can both read F2. How can both instructions do this at the same time?

  45. Scoreboard Example: Cycle 11 ADDD can’t start because Add unit is busy

  46. Scoreboard Example: Cycle 12 SUBD finishes. DIVD waiting for F0

  47. Scoreboard Example: Cycle 13 ADDD issues

  48. Scoreboard Example: Cycle 14

  49. Scoreboard Example: Cycle 15

  50. Scoreboard Example: Cycle 16
