MAMAS – Computer Architecture 234367
Lecture 7 – Out Of Order (OOO)
Avi Mendelson
Some of the slides were taken from: (1) Lihu Rapoport, (2) Randy Katz and (3) Patterson
Go beyond IPC=1 • Superscalar • VLIW • OOO
Can we improve performance?
• In theory, “data flow machines” offer the best performance
• View the program as a set of parallel operations waiting to be executed (demonstrated on the next slide)
• Execute each instruction as soon as its inputs are ready
• So why are computers Von-Neumann based and not data-flow based?
• Hard to debug
• Hard to write data-flow programs (a special programming language is needed to be efficient)
Data Flow Graph – a different approach for high-performance computers
• Data flow execution is an alternative to Von-Neumann execution: instructions are executed in the order dictated by their input dependencies, not in the order in which they appear in the program
• Example: assume we have as many execution units as we need:
(1) r1 ← r4 / r7
(2) r8 ← r1 + r2
(3) r5 ← r5 + 1
(4) r6 ← r6 - r3
(5) r4 ← r5 + r6
(6) r7 ← r8 * r4
• We could execute this sequence in 3 cycles: (1), (3), (4) in the first cycle, (2) and (5) in the second, and (6) in the third
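To make the 3-cycle claim concrete, here is a minimal sketch (not from the lecture) that levels the six instructions by their read-after-write dependencies; the instruction numbers and register names follow the example above.

```python
# Minimal data-flow leveling sketch for the example above.
# Each instruction is (destination, [sources]); its "level" is 1 + the
# deepest level among its producers, i.e. the earliest cycle it can
# execute if execution units are unlimited.
instrs = {
    1: ("r1", ["r4", "r7"]),   # r1 <- r4 / r7
    2: ("r8", ["r1", "r2"]),   # r8 <- r1 + r2
    3: ("r5", ["r5"]),         # r5 <- r5 + 1
    4: ("r6", ["r6", "r3"]),   # r6 <- r6 - r3
    5: ("r4", ["r5", "r6"]),   # r4 <- r5 + r6
    6: ("r7", ["r8", "r4"]),   # r7 <- r8 * r4
}

level = {}
last_writer = {}               # register -> instruction that last wrote it
for i, (dst, srcs) in instrs.items():          # walk in program order
    producers = [last_writer[s] for s in srcs if s in last_writer]
    level[i] = 1 + max((level[p] for p in producers), default=0)
    last_writer[dst] = i

for cycle in range(1, max(level.values()) + 1):
    print(f"cycle {cycle}:", [i for i in instrs if level[i] == cycle])
# cycle 1: [1, 3, 4]   cycle 2: [2, 5]   cycle 3: [6]
```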
Data flow execution – cont.
• Can we build a machine that executes the “data flow graph” directly?
• In the early 1970s several machines were built to work according to the data-flow graph; they were called “data flow machines”. They vanished for the reasons mentioned before.
• Solution: let the user think he/she is using a Von-Neumann machine, and let the system work internally in “data-flow mode”
OOOE – General Scheme
[Figure: Fetch & Decode (in-order) → Instruction pool → Execute (out-of-order) → Retire/commit (in-order)]
• Most modern computers use OOO execution: fetch and retirement are done IN-ORDER, but execution is done OUT-OF-ORDER
Out Of Order Execution
Basic idea:
• Fetch is done in program order (in-order), fast enough to fill a window of instructions
• From the instruction window, the system forms a data-flow graph and looks for instructions that are ready to be executed:
• All the data the instruction depends on is ready
• Resources are available
• As soon as an instruction executes, it signals all the instructions that depend on it that its result has been generated
• Instructions commit in program order to preserve the “user view”
• Advantages:
• Helps exploit Instruction Level Parallelism (ILP)
• Helps cover latencies (e.g., cache miss, divide)
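Not the lecture's code: a toy sketch of the three phases (in-order fetch into a window, out-of-order execution of ready instructions, in-order retirement from the head), assuming unlimited execution units, unit latency, and only true (RAW) dependencies.

```python
# Each in-flight instruction records its specific producers at fetch time,
# which implicitly plays the role of renaming in this toy model.
from collections import deque

def run(program, window_size=4):
    """program: list of (dst, [srcs]) tuples in program order."""
    window = deque()        # in-flight instructions, kept in program order
    last_writer = {}        # register -> pc of its in-flight producer
    done = set()            # pcs that executed (but may not have retired)
    pc = cycle = 0
    while pc < len(program) or window:
        cycle += 1
        # 1) In-order fetch/decode: fill the window, noting which older
        #    in-flight instruction produces each source register.
        while pc < len(program) and len(window) < window_size:
            dst, srcs = program[pc]
            deps = {last_writer[s] for s in srcs if s in last_writer}
            window.append({"pc": pc, "dst": dst, "deps": deps})
            last_writer[dst] = pc
            pc += 1
        # 2) Out-of-order execute: anything whose producers have finished.
        ready = [e for e in window
                 if e["pc"] not in done and e["deps"] <= done]
        for e in ready:
            done.add(e["pc"])
            print(f"cycle {cycle}: execute instruction {e['pc']}")
        # 3) In-order commit: retire from the head only, which preserves
        #    the user-visible program order.
        while window and window[0]["pc"] in done:
            e = window.popleft()
            if last_writer.get(e["dst"]) == e["pc"]:
                del last_writer[e["dst"]]
            print(f"cycle {cycle}: retire instruction {e['pc']}")

run([("r1", ["x"]), ("r2", ["r1"]), ("r3", ["r0"])])
```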
How to convert an “in-order” instruction flow into “data flow”
• The problems:
• Data flow has only RAW dependencies, while OOOE also has WAR and WAW dependencies (as shown in Lecture 5)
• How to guarantee in-order completion
• The solutions:
• Register renaming (based on the “Tomasulo algorithm”) removes the WAR and WAW dependencies
• We need to “enumerate” the instructions at decode time (in order) so we know in what order to retire them
Register Renaming
• Hold a pool of physical registers
• Architectural registers are mapped into physical registers
• When an instruction writes to an architectural register:
• A free physical register is allocated from the pool
• The physical register points to the architectural register
• The instruction writes its value to the physical register
• When an instruction reads from an architectural register:
• It reads the data from the latest preceding instruction that writes to the same architectural register
• If no such instruction exists, it reads directly from the architectural register
• When an instruction commits:
• The value is moved from the physical register to the architectural register it points to
OOOE with Register Renaming: Example
Before renaming  ⇒  After renaming
(1) r1 ← mem1            ⇒  t1 ← mem1
(2) r2 ← r2 + r1         ⇒  t2 ← r2 + t1
(3) r1 ← mem2  (WAW on r1)  ⇒  t3 ← mem2
(4) r3 ← r3 + r1         ⇒  t4 ← r3 + t3
(5) r1 ← mem3  (WAW on r1)  ⇒  t5 ← mem3
(6) r4 ← r5 + r1         ⇒  t6 ← r5 + t5
(7) r5 ← 2     (WAR on r5)  ⇒  t7 ← 2
(8) r6 ← r5 + 2          ⇒  t8 ← t7 + 2
After renaming, all the false dependencies (WAW and WAR) are removed
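A minimal renaming sketch (illustration only), assuming a simple sequential renamer that allocates a fresh temporary for every destination write; the register and temporary names follow the example above.

```python
# Allocate a fresh temporary per destination write and redirect each
# source to the latest producer of that architectural register.
def rename(program):
    """program: list of (dst, [srcs]) strings; returns renamed tuples."""
    mapping = {}                      # architectural reg -> current temp
    renamed, next_temp = [], 1
    for dst, srcs in program:
        # Sources read the newest version; unmapped sources come straight
        # from the architectural register file (or are literals).
        new_srcs = [mapping.get(s, s) for s in srcs]
        new_dst = f"t{next_temp}"     # fresh physical register
        next_temp += 1
        mapping[dst] = new_dst        # later readers see this version
        renamed.append((new_dst, new_srcs))
    return renamed

program = [("r1", ["mem1"]), ("r2", ["r2", "r1"]), ("r1", ["mem2"]),
           ("r3", ["r3", "r1"]), ("r1", ["mem3"]), ("r4", ["r5", "r1"]),
           ("r5", ["2"]), ("r6", ["r5", "2"])]
for dst, srcs in rename(program):
    print(dst, "<-", ", ".join(srcs))
# t1 <- mem1 | t2 <- r2, t1 | t3 <- mem2 | t4 <- r3, t3
# t5 <- mem3 | t6 <- r5, t5 | t7 <- 2    | t8 <- t7, 2
```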
The magic of the modern X86 architectures (Intel, AMD, etc.)
• The user's view of the X86 machine is of a CISC architecture.
• The machine supports this view by keeping the in-order parts as close as possible to the X86 view.
• While moving from the in-order part (front-end) to the OOO part (execution), the hardware translates each X86 instruction into a set of uops (micro-operations), which are the internal machine operations. These operations are RISC-like (load-store based).
• During this translation, the hardware performs the register renaming, so during execution it uses internal registers rather than the X86 ones. The number of these registers may change from one generation to another.
• While moving back from the OOO part (execution) to the in-order part (commit), the hardware translates the registers back to X86 registers, in order to keep a coherent picture for the user.
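As an illustration only (the exact uop encodings are machine-specific and not given in the lecture), a read-modify-write X86 instruction is commonly described as cracking into separate load, compute, and store uops; the temporary names below are made up.

```python
# Hypothetical CISC-to-uop decomposition for illustration.
x86_instruction = "add [mem], eax"      # read-modify-write on memory
uops = [
    "tmp1 <- load [mem]",               # load uop
    "tmp2 <- add tmp1, eax",            # ALU uop on internal registers
    "store [mem] <- tmp2",              # store uop
]
print(x86_instruction, "=>", uops)
```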
OOOE Architecture: based on Pentium-II
[Block diagram: Bus Interface Unit, Instruction cache, Data cache, IFU (instruction fetch), ID (instruction decode and rename), RAT, ROB, RS, MOB, retire (commit) logic, write-back bus]
• In-order front-end
• Fetch from the instruction cache, based on branch prediction
• Decode and rename:
• Translate to uops
• Use the RAT table for renaming
• Put ALL instructions in the ROB
• Put all “arithmetic” instructions in the RS queue
• Put all load/store instructions in the MOB
• Out-of-order
• Do in parallel:
• Load and store operations are executed based on MOB information
• Arithmetic operations are executed based on RS information
• All results are written back to the ROB, while the RS and MOB “steal” the values they need from the write-back bus
• In-order
• The retire (commit) logic moves instructions out of the ROB and updates the architectural registers
Re-order Buffer (ROB)
• Mechanism for keeping the user's in-order view
• Basic ROB functions:
• Provides a large physical register space for register renaming
• Keeps intermediate results; some of them may never be committed if the branch prediction was wrong (this mechanism is discussed later)
• Keeps information on which “real register” the commit needs to update
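A minimal sketch (structure assumed for illustration, not the lecture's exact design) of in-order retirement from the ROB head: only the oldest entry may retire, and only once it has finished executing.

```python
# In-order commit sketch: the result of the retiring entry updates the
# architectural (real) register file.
from collections import deque

class ROBEntry:
    def __init__(self, dest_reg):
        self.dest_reg = dest_reg   # architectural register to update
        self.value = None          # filled in by the write-back bus
        self.done = False          # set when execution completes

def retire(rob, rrf, width=2):
    """Retire up to `width` entries per cycle, strictly from the head."""
    retired = 0
    while rob and rob[0].done and retired < width:
        entry = rob.popleft()
        if entry.dest_reg is not None:
            rrf[entry.dest_reg] = entry.value   # commit to real register
        retired += 1
    return retired

rob, rrf = deque(), {}
e = ROBEntry("r1")
e.value, e.done = 42, True
rob.append(e)
retire(rob, rrf)
print(rrf)    # {'r1': 42}
```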
The renaming algorithm
• Each uop allocates a new entry in the ROB
• Entries are allocated in program order
• The RAT (Register Alias Table) indicates, for each architectural register, which uop (ROB entry) would generate its value if the program were executed in order
• Every uop that generates a value (to a register and/or flag) updates the RAT
• For every input of a uop, we look up who is responsible for generating its value: if a translation exists in the RAT, we indicate that the value will be retrieved from the uop in that ROB entry; if no translation exists, we retrieve the value from the “architectural register” – the RRF (Real Register File)
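A minimal sketch of this RAT/ROB lookup, assuming a simplified uop of the form (dst, [srcs]) and ignoring flags, partial registers, and RAT recovery after a misprediction.

```python
# Simplified RAT/ROB renaming sketch (illustration only).
rat = {}     # architectural register -> index of the ROB entry producing it
rob = []     # ROB entries, allocated in program order
rrf = {"r0": 0, "r1": 10, "r2": 20, "r3": 30}   # architectural registers

def allocate(dst, srcs):
    """Rename one uop: each source comes either from an older ROB entry
    (if the RAT has a translation) or directly from the RRF; the
    destination gets a fresh ROB entry and the RAT now points at it."""
    renamed_srcs = []
    for s in srcs:
        if s in rat:
            renamed_srcs.append(("rob", rat[s]))    # wait for that entry
        else:
            renamed_srcs.append(("rrf", rrf[s]))    # value ready right now
    entry = {"dst": dst, "srcs": renamed_srcs, "value": None, "done": False}
    rob.append(entry)
    rat[dst] = len(rob) - 1      # later readers of dst use this ROB entry
    return entry

allocate("r1", ["r1", "r0"])     # r1 <- r1 + r0 : both sources from the RRF
allocate("r2", ["r1"])           # r2 <- r1      : source is ROB entry 0
print(rat)                       # {'r1': 0, 'r2': 1}
```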
Reservation Station (RS)
• Pool of all “not yet executed” uops
• Holds the uop attributes as well as the values of its input data
• For each operand, it keeps an indication of whether it is ready:
• An operand retrieved from the RRF is always ready
• An operand that waits for another uop to generate its value “listens” to the write-back bus; when the value appears on the bus (always tagged with the ROB entry it updates), all RS entries that need to consume this value “steal” it from the bus and mark the operand as ready (this happens in parallel with the ROB update)
• Uops whose operands are all ready can be dispatched for execution
• The dispatcher chooses which of the ready uops to execute next; it can also do “forwarding”, i.e., schedule an instruction in the same cycle its input is written to the RS entry
• As soon as a uop completes its execution, it is deleted from the RS
• If the RS is full, it stalls the decoder
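A minimal wakeup/select sketch under simplified assumptions (a single write-back bus, oldest-first selection, no same-cycle forwarding); the class and function names are made up for illustration.

```python
# RS wakeup/select sketch (illustration only).
class RSEntry:
    def __init__(self, rob_idx, operands):
        self.rob_idx = rob_idx
        # operand -> value if ready, or ("wait", producer_rob_idx) if not
        self.operands = operands

    def ready(self):
        return all(not (isinstance(v, tuple) and v[0] == "wait")
                   for v in self.operands.values())

def wakeup(rs, wb_rob_idx, wb_value):
    """Broadcast a write-back result: every waiting operand whose tag
    matches the ROB entry 'steals' the value from the bus."""
    for entry in rs:
        for name, v in entry.operands.items():
            if v == ("wait", wb_rob_idx):
                entry.operands[name] = wb_value

def select(rs):
    """Pick the oldest ready entry (lowest ROB index) and remove it."""
    ready = [e for e in rs if e.ready()]
    if not ready:
        return None
    chosen = min(ready, key=lambda e: e.rob_idx)
    rs.remove(chosen)
    return chosen

rs = [RSEntry(2, {"a": ("wait", 0), "b": 5}),   # waits for ROB entry 0
      RSEntry(3, {"a": 7, "b": 9})]             # already ready
print(select(rs).rob_idx)     # 3 (the only ready entry)
wakeup(rs, 0, 42)             # result of ROB entry 0 appears on the bus
print(select(rs).rob_idx)     # 2 (now ready)
```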
Memory Order Buffer (MOB)
• Goal – handles the load and store operations; where possible, it allows out-of-order execution among memory operations
• Structure similar in concept to the ROB
• Every memory uop allocates a new entry in order
• The address is updated when it becomes known
• Problem – memory dependencies cannot be fully resolved statically (memory disambiguation):
• store r1, a; load r2, b – the load can be advanced before the store
• store r1, [r3]; load r2, b – the load should wait until r3 is known
• In most modern processors, loads may pass loads/stores, but stores must execute in order (among stores)
• For simplicity, this course assumes that all MOB operations are executed in order.
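A minimal disambiguation sketch, under the simplifying assumption that a load may bypass an older store only when the store's address is already known and differs from the load's address (a conservative policy; real designs may also speculate).

```python
# Memory disambiguation sketch (illustration only): decide whether a load
# may execute ahead of the older stores still waiting in the MOB.
def load_may_bypass(load_addr, older_store_addrs):
    """older_store_addrs: addresses of older stores, None if not yet known."""
    for store_addr in older_store_addrs:
        if store_addr is None:           # e.g. store r1,[r3] with r3 unknown
            return False                 # conservative: wait
        if store_addr == load_addr:      # true memory dependence
            return False                 # must see the stored value
    return True

print(load_may_bypass(0x1000, [0x2000]))   # True : addresses differ
print(load_may_bypass(0x1000, [None]))     # False: unknown store address
print(load_may_bypass(0x1000, [0x1000]))   # False: same address
```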
Renaming example – step-by-step walkthrough (animated slides)
[Figure sequence: the instruction stream LD R1,X ; R2 ← R3 ; R1 ← R1+R0 (followed by I4, I5, I6) flows through the Instruction Queue, RAT (R0–R3), ROB, MOB, RS, Execute and Retire stages:
• LD R1,X allocates ROB entry RB0 and MOB entry M0 (renamed to LD RB0,X); the load takes 3 cycles
• R2 ← R3 allocates ROB entry RB1 and RS entry RS0 (RB1 ← R3); it executes right away since R3 is ready
• R1 ← R1+R0 allocates ROB entry RB2 and RS entry RS1 (RB2 ← RB0+R0); it cannot execute until the load writes back RB0
• When the load data returns, RS1 steals the value from the write-back bus, RB2 ← RB0+R0 executes, and the instructions retire in program order]