Learn about computer performance metrics, pipeline stages, and pipelining techniques for improving throughput. Explore the effects of pipelining on latency and bandwidth, and the datapath modifications needed in the MIPS architecture.
CS-447 – Computer Architecture (M, W 10–11:20am)
Lecture 13: Pipelining (1)
October 10th, 2007
Majd F. Sakr
msakr@qatar.cmu.edu
www.qatar.cmu.edu/~msakr/15447-f07/
Computer Performance
CPU time = Seconds / Program
         = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)
         = instruction count × CPI × cycle time
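As a quick numeric illustration of this equation, here is a small sketch; the instruction count, CPI, and clock rate below are made-up numbers, not figures from the lecture.

```python
# Made-up numbers, only to illustrate CPU time = instruction count x CPI x cycle time.
instruction_count = 2_000_000_000   # instructions executed by the program
cpi = 1.5                           # average clock cycles per instruction
cycle_time = 0.5e-9                 # seconds per cycle (a 2 GHz clock)

cpu_time = instruction_count * cpi * cycle_time
print(f"CPU time = {cpu_time:.1f} seconds")   # 1.5 seconds
```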
Cycles Per Instruction (Throughput)
"Average cycles per instruction":
• CPI = (CPU Time × Clock Rate) / Instruction Count
      = Cycle Count / Instruction Count
• If CPI(i) is the cycle count for instruction class i and F(i) is its "instruction frequency" (the fraction of executed instructions in class i), then the overall CPI is the frequency-weighted sum: CPI = Σ CPI(i) × F(i)
Example: Calculating CPI Bottom Up
Base Machine (Reg / Reg)
Op       Freq   Cycles   CPI(i)   (% Time)
ALU      50%    1        0.5      (33%)
Load     20%    2        0.4      (27%)
Store    10%    2        0.2      (13%)
Branch   20%    2        0.4      (27%)
                Overall CPI = 1.5
Typical mix of instruction types in a program
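The same calculation can be checked programmatically. This is a minimal sketch that recomputes the table above: the instruction mix and cycle counts come straight from the table, and the "% Time" column is CPI(i) divided by the overall CPI.

```python
# (frequency, cycles) per instruction class, from the table above.
mix = {
    "ALU":    (0.50, 1),
    "Load":   (0.20, 2),
    "Store":  (0.10, 2),
    "Branch": (0.20, 2),
}

# Overall CPI is the frequency-weighted sum of the per-class cycle counts.
cpi = sum(freq * cycles for freq, cycles in mix.values())
print(f"Overall CPI = {cpi:.2f}")                  # 1.50

# Fraction of execution time spent in each class: CPI(i) / CPI.
for name, (freq, cycles) in mix.items():
    cpi_i = freq * cycles
    print(f"{name:6s} CPI(i) = {cpi_i:.1f}  ({cpi_i / cpi:.0%} of time)")
```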
Sequential Laundry
[Figure: timeline from 6 PM to midnight; loads A–D are done one after another, each taking 30 min wash, 40 min dry, 20 min fold]
• Sequential laundry takes 6 hours for 4 loads
• How can we make better use of the available resources?
Pipelined Laundry - Start work ASAP
[Figure: timeline from 6 PM onward; loads A–D overlap, so the washer, dryer, and folder each work on a different load at the same time]
• Pipelined laundry takes 3.5 hours for 4 loads
Pipelining Lessons
[Figure: the pipelined laundry timeline again, 6 PM onward, loads A–D overlapped]
• Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload
• Pipeline rate is limited by the slowest pipeline stage
• Multiple tasks operate simultaneously
• Potential speedup = number of pipe stages
• Unbalanced lengths of pipe stages reduce speedup
• Time to "fill" the pipeline and time to "drain" it reduce speedup (a timing sketch follows below)
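To make the 6-hour vs. 3.5-hour comparison concrete, here is a small sketch using the 30/40/20-minute stage times from the laundry figures; it also shows why the actual speedup falls short of the 3-stage ideal.

```python
# Stage times in minutes (wash, dry, fold), from the laundry figures.
stages = [30, 40, 20]
loads = 4

# Sequential: each load finishes completely before the next one starts.
sequential = loads * sum(stages)                           # 4 * 90 = 360 min = 6 hours

# Pipelined: a new load starts as soon as the washer is free; after the first
# wash, the schedule is paced by the slowest stage (the 40-minute dryer).
pipelined = stages[0] + loads * max(stages) + stages[-1]   # 30 + 4*40 + 20 = 210 min

print(sequential / 60, pipelined / 60)                     # 6.0 hours vs. 3.5 hours
print(f"speedup = {sequential / pipelined:.2f}x")          # ~1.71x, below the 3x ideal:
                                                           # stages are unbalanced, and
                                                           # fill/drain time costs cycles
```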
Pipelining
• Doesn't improve latency!
• We execute billions of instructions, so throughput is what matters!
The Five Stages of a Load Instruction
[Figure: lw occupying the IFetch, Dec, Exec, Mem, and WB stages in Cycles 1–5]
• IFetch: instruction fetch and update PC
• Dec: register fetch and instruction decode
• Exec: execute R-type; calculate memory address
• Mem: read/write the data from/to the data memory
• WB: write the result data into the register file
Pipelined Processor
[Figure: lw, sw, and an R-type instruction overlapped in the pipeline across Cycles 1–8]
• Start the next instruction while still working on the current one
• improves throughput or bandwidth: the total amount of work done in a given time (average instructions per second or per clock)
• instruction latency is not reduced (time from the start of an instruction to its completion)
• pipeline clock cycle (pipeline stage time) is limited by the slowest stage (a numeric sketch follows below)
• for some instructions, some stages are wasted cycles
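A small numeric sketch of the "slowest stage limits the clock" bullet above; the per-stage delays and register overhead are invented for illustration, not measurements from the course datapath.

```python
# Invented stage delays in picoseconds, for illustration only.
stage_delay_ps = {"IFetch": 200, "Dec": 100, "Exec": 200, "Mem": 200, "WB": 100}
pipeline_register_overhead_ps = 20

# Single-cycle design: the clock period must cover an entire instruction.
single_cycle_clock = sum(stage_delay_ps.values())                               # 800 ps

# Pipelined design: the clock only has to cover the slowest stage (plus the
# pipeline-register overhead), even though the shorter stages waste part of it.
pipelined_clock = max(stage_delay_ps.values()) + pipeline_register_overhead_ps  # 220 ps

print(single_cycle_clock, pipelined_clock)
```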
Single Cycle vs. Multiple Cycle vs. Pipeline
[Figure: timing diagrams for the three implementations. Single-cycle: one long clock cycle per instruction, so the store in Cycle 2 wastes part of a cycle sized for the load. Multiple-cycle: over Cycles 1–10, lw takes 5 cycles (IFetch, Dec, Exec, Mem, WB), sw takes 4, and the R-type instruction is just starting. Pipeline: lw, sw, and R-type overlapped, with some "wasted" stage cycles for the shorter instructions.]
Multiple Cycle vs. Pipeline, Bandwidth vs. Latency
[Figure: the multicycle and pipelined timing diagrams for lw, sw, and an R-type instruction, side by side]
• Latency per lw = 5 clock cycles for both
• Bandwidth of lw is 1 instruction per clock (IPC) for the pipeline vs. 1/5 IPC for multicycle
• Pipelining improves instruction bandwidth, not instruction latency
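The bandwidth numbers in these bullets follow from a simple cycle count. This sketch assumes an ideal 5-stage pipeline with no stalls and a multicycle machine that spends 5 cycles on every lw.

```python
STAGES = 5

def multicycle_cycles(n):
    # Each lw takes 5 cycles, and the next one cannot start until it finishes.
    return n * STAGES

def pipelined_cycles(n):
    # The first lw takes 5 cycles; after that, one lw completes every cycle.
    return STAGES + (n - 1)

for n in (1, 4, 1000):
    m, p = multicycle_cycles(n), pipelined_cycles(n)
    print(f"n={n:5d}  multicycle={m:6d} cycles  pipeline={p:6d} cycles  "
          f"pipeline IPC={n / p:.2f}")   # IPC approaches 1 as n grows; latency stays 5
```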
Pipeline Datapath Modifications
• What do we need to add/modify in our MIPS datapath?
• registers between pipeline stages to isolate them
[Figure: the MIPS datapath (PC, instruction memory, register file, sign extend, shift-left-2, adders, ALU, data memory) divided into the stages IF:IFetch, ID:Dec, EX:Execute, MEM:MemAccess, and WB:WriteBack, with pipeline registers IFetch/Dec, Dec/Exec, Exec/Mem, and Mem/WB inserted between them, all driven by the system clock]
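To sketch what "registers between pipeline stages" means, here is a toy model of the four pipeline registers. The class and field names (pc_plus_4, alu_result, and so on) are my own illustrative choices, not the exact signal names from the course datapath.

```python
from dataclasses import dataclass

# Toy pipeline registers: each one latches the values that one stage hands to
# the next, so every stage works only on its own inputs for the current cycle.

@dataclass
class IFetchDec:            # between IFetch and Dec
    instruction: int = 0
    pc_plus_4: int = 0

@dataclass
class DecExec:              # between Dec and Exec
    read_data_1: int = 0
    read_data_2: int = 0
    sign_ext_imm: int = 0
    pc_plus_4: int = 0

@dataclass
class ExecMem:              # between Exec and Mem
    alu_result: int = 0
    write_data: int = 0
    branch_target: int = 0

@dataclass
class MemWB:                # between Mem and WB
    mem_read_data: int = 0
    alu_result: int = 0

# On each rising edge of the system clock, all four registers are updated at
# once, which is what isolates the five stages from one another.
```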
Graphically Representing the Pipeline
[Figure: a single instruction drawn as a row of stage symbols: IM (instruction memory), Reg (register read), ALU, DM (data memory), Reg (register write)]
Can help with answering questions like:
• how many cycles does it take to execute this code?
• what is the ALU doing during cycle 4?
Why Pipeline? For Throughput!
[Figure: instructions Inst 0–Inst 4 shown as overlapping rows of IM, Reg, ALU, DM, Reg symbols across the clock cycles]
• It takes a few cycles to fill the pipeline
• Once the pipeline is full, one instruction is completed every cycle
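A sketch of how the fill time limits speedup: with k stages and n instructions, an ideal pipeline needs k + (n - 1) cycles, so the speedup over a machine with no overlap approaches k only for long instruction streams. This is the standard ideal-pipeline estimate, ignoring hazards.

```python
def ideal_speedup(k, n):
    """Speedup of an ideal k-stage pipeline over no overlap, for n instructions."""
    no_overlap = n * k          # every instruction takes k cycles, one after another
    pipelined = k + (n - 1)     # k cycles to fill, then one completion per cycle
    return no_overlap / pipelined

for n in (5, 50, 5000):
    print(f"{n:5d} instructions: speedup = {ideal_speedup(5, n):.2f}")  # tends toward 5
```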
Important Observation
[Figure: Load uses 5 stages (Ifetch, Reg/Dec, Exec, Mem, Wr) while R-type uses only 4 (Ifetch, Reg/Dec, Exec, Wr)]
• Each functional unit can only be used once per instruction (since 4 other instructions are executing)
• If the same functional unit is used at different stages by different instructions, hazards arise:
• Load uses the register file's write port during its 5th stage
• R-type uses the register file's write port during its 4th stage
• There are 2 ways to solve this pipeline hazard.
Solution 1: Insert a "Bubble" into the Pipeline
[Figure: Cycles 1–9 with a Load followed by R-type instructions; a bubble is inserted so that only one instruction reaches the register-file write in any cycle]
• Insert a "bubble" into the pipeline to prevent 2 writes in the same cycle
• The control logic can be complex
• We lose an instruction fetch and issue opportunity
• No instruction is started in Cycle 6!
Solution 2: Delay R-type's Write by One Cycle
[Figure: Cycles 1–9 with a Load and R-type instructions; each R-type now goes through 5 stages (Ifetch, Reg/Dec, Exec, Mem, Wr), with Mem as a do-nothing stage]
• Delay the R-type's register write by one cycle:
• Now R-type instructions also use the register file's write port at stage 5
• The Mem stage is a NOP stage: nothing is being done
(A sketch of the write-port schedule with and without this fix follows below.)
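Here is a toy sketch of the write-port schedule behind Solutions 1 and 2. It only tracks which cycle each instruction writes the register file, assuming one instruction issues per cycle and one stage per cycle; the stage lists and names are illustrative, not the course's control logic.

```python
# Stage sequences: Load writes the register file in stage 5; a 4-stage R-type
# writes in stage 4; Solution 2 gives R-type a do-nothing Mem stage.
LOAD         = ["IF", "Reg/Dec", "Exec", "Mem", "Wr"]
RTYPE_4STAGE = ["IF", "Reg/Dec", "Exec", "Wr"]
RTYPE_5STAGE = ["IF", "Reg/Dec", "Exec", "NOP-Mem", "Wr"]

def write_port_schedule(program):
    """Map cycle number -> instructions using the register-file write port."""
    writers = {}
    for issue_cycle, stages in enumerate(program, start=1):
        wr_cycle = issue_cycle + len(stages) - 1   # the last stage does the write
        writers.setdefault(wr_cycle, []).append(f"inst{issue_cycle}")
    return writers

# Load followed by an R-type.
print(write_port_schedule([LOAD, RTYPE_4STAGE]))  # {5: ['inst1', 'inst2']} -> conflict
print(write_port_schedule([LOAD, RTYPE_5STAGE]))  # {5: ['inst1'], 6: ['inst2']} -> no conflict
```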
Can Pipelining Get Us Into Trouble?
• Yes: pipeline hazards
  • structural hazards: attempt to use the same resource by two different instructions at the same time
  • data hazards: attempt to use data before it is ready
    • instruction source operands are produced by a prior instruction still in the pipeline
    • load instruction followed immediately by an ALU instruction that uses the load operand as a source value
  • control hazards: attempt to make a decision before the condition has been evaluated
    • branch instructions
• Can always resolve hazards by waiting
  • pipeline control must detect the hazard
  • take action (or delay action) to resolve hazards
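As a small illustration of the load-use data hazard described above, here is a toy check. The instruction encoding (opcode, destination, sources) and register names are made up for the sketch; this is not the pipeline's actual hazard-detection hardware.

```python
# Toy instruction format: (opcode, destination register, source registers).
def needs_load_use_stall(prev, curr):
    """True if `curr` reads a register that `prev` is still loading from memory."""
    prev_op, prev_dest, _ = prev
    _, _, curr_srcs = curr
    return prev_op == "lw" and prev_dest in curr_srcs

lw_inst  = ("lw",  "$t0", ["$s0"])          # lw  $t0, 0($s0)
add_inst = ("add", "$t2", ["$t0", "$t1"])   # uses $t0 right after the load
sub_inst = ("sub", "$t3", ["$t4", "$t5"])   # no dependence on the load

print(needs_load_use_stall(lw_inst, add_inst))  # True  -> must wait (stall/bubble)
print(needs_load_use_stall(lw_inst, sub_inst))  # False -> no stall needed
```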
A Single Memory Would Be a Structural Hazard
[Figure: lw followed by Inst 1–Inst 4 in the pipeline; in the cycle where lw reads its data from memory, a later instruction is reading its instruction from the same memory, so the two accesses collide]
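A small sketch of when that collision happens, assuming a 5-stage pipeline with one instruction issued per cycle, instruction fetch in stage 1, and the data access in stage 4: instruction i fetches in cycle i+1 and touches data memory in cycle i+4, so with a single shared memory its data access collides with the fetch of instruction i+3.

```python
# IF, Dec, Exec, Mem, WB: instruction fetch is stage 1, data access is stage 4.
def shared_memory_conflicts(n_instructions):
    """With one memory, the Mem access of instruction i collides with the IF
    of instruction i+3 (both happen in cycle i+4)."""
    conflicts = []
    for i in range(n_instructions):
        mem_cycle = i + 4            # instruction i is in its Mem stage this cycle
        j = i + 3                    # instruction j is in IF during that same cycle
        if j < n_instructions:
            conflicts.append((f"inst{i} Mem", f"inst{j} IF", f"cycle {mem_cycle}"))
    return conflicts

# lw followed by four more instructions, as in the figure above.
for c in shared_memory_conflicts(5):
    print(c)   # e.g. ('inst0 Mem', 'inst3 IF', 'cycle 4')
```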
Summary
• All modern-day processors use pipelining
• Pipelining doesn't help the latency of a single task; it helps the throughput of the entire workload
• Multiple tasks operate simultaneously using different resources
• Potential speedup = number of pipe stages
• Pipeline rate is limited by the slowest pipeline stage
• Unbalanced lengths of pipe stages reduce speedup
• Time to "fill" the pipeline and time to "drain" it reduce speedup
• Must detect and resolve hazards
• Stalling negatively affects throughput