

  1. 14:332:331 Computer Architecture and Assembly Language, Fall 2003, Week 8 [Adapted from Dave Patterson's UCB CS152 slides and Mary Jane Irwin's PSU CSE331 slides]

  2. Heads Up
  • This week's material
    • CPU performance (reading assignment – PH 4)
    • Building a MIPS datapath (reading assignment – PH 5.1-5.2)
  • Next week's material
    • Single cycle datapath implementation (reading assignment – PH 5.3 and C.1 through C.2)

  3. Performance
  • Purchasing perspective: given a collection of machines, which has the
    • best performance?
    • least cost?
    • best performance / cost?
  • Design perspective: faced with design options, which has the
    • best performance improvement?
    • least cost?
    • best performance / cost?
  • Both require
    • a basis for comparison
    • a metric for evaluation
  • Our goal is to understand the cost and performance implications of architectural choices

  4. Two notions of "performance"

     Plane              DC to Paris   Speed      Passengers   Throughput (pmph)
     Boeing 747         6.5 hours     610 mph    470          286,700
     BAC/Sud Concorde   3 hours       1350 mph   132          178,200

  Which has higher performance?
  • Time to do the task (execution time) – execution time, response time, latency
  • Tasks per day, hour, week, sec, ns, ... (performance) – throughput, bandwidth
  • Response time and throughput are often in opposition

  5. Definitions
  • Performance is in units of things-per-second
    • bigger is better
  • If we are primarily concerned with response time:
        performance(X) = 1 / execution_time(X)
  • "X is n times faster than Y" means
        n = Performance(X) / Performance(Y)
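To make the arithmetic concrete, here is a minimal Python sketch of the two definitions above; the function names and sample times are illustrative, not from the slides.

```python
# Minimal sketch of the performance definitions above.
# Function names and sample times are illustrative, not from the slides.

def performance(execution_time: float) -> float:
    """Performance is the reciprocal of execution time."""
    return 1.0 / execution_time

def times_faster(exec_time_x: float, exec_time_y: float) -> float:
    """Return n such that X is n times faster than Y."""
    return performance(exec_time_x) / performance(exec_time_y)

# Example: the same job takes 3 units of time on X and 6.5 on Y.
print(times_faster(3.0, 6.5))   # ~2.17, i.e. X is about 2.2 times faster
```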

  6. Example
  • Time of Concorde vs. Boeing 747?
    • Concorde is 1350 mph / 610 mph = 6.5 hours / 3 hours = 2.2 times faster
  • Throughput of Concorde vs. Boeing 747?
    • Concorde is 178,200 pmph / 286,700 pmph = 0.62 "times faster"
    • Boeing is 286,700 pmph / 178,200 pmph = 1.6 "times faster"
  • Boeing is 1.6 times ("60%") faster in terms of throughput
  • Concorde is 2.2 times ("120%") faster in terms of flying time
  • We will focus primarily on execution time for a single job

  7. Basis of Evaluation
  • Actual target workload
    • Pros: representative
    • Cons: very specific; non-portable; difficult to run or measure; hard to identify cause
  • Full application benchmarks
    • Pros: portable; widely used; improvements useful in reality
    • Cons: less representative
  • Small "kernel" benchmarks
    • Pros: easy to run, early in design cycle
    • Cons: easy to "fool"
  • Microbenchmarks
    • Pros: identify peak capability and potential bottlenecks
    • Cons: "peak" may be a long way from application performance

  8. SPEC95
  • Eighteen application benchmarks (with inputs) reflecting a technical computing workload
    • Eight integer: go, m88ksim, gcc, compress, li, ijpeg, perl, vortex
    • Ten floating-point intensive: tomcatv, swim, su2cor, hydro2d, mgrid, applu, turb3d, apsi, fpppp, wave5
  • Must be run with standard compiler flags
    • eliminates special undocumented incantations that may not even generate working code for real programs

  9. Metrics of Performance
  [Figure: the system stack – Application, Programming Language, Compiler, ISA, Datapath, Control, Function Units, Transistors/Wires/Pins – with the metric natural to each level: answers per month, useful operations per second, (millions of) instructions per second (MIPS), (millions of) floating-point operations per second (MFLOP/s), megabytes per second, and cycles per second (clock rate)]
  • Each metric has a place and a purpose, and each can be misused

  10. Aspects of CPU Performance

      CPU time = Seconds / Program
               = (Instructions / Program) x (Cycles / Instruction) x (Seconds / Cycle)

  [Table: how the program, the compiler, the instruction set architecture, the organization, and the technology each influence instruction count, CPI, and clock rate]
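The CPU time equation translates directly into a small Python sketch; the function and parameter names and the sample numbers below are illustrative assumptions, not from the slides.

```python
# CPU time = instruction count x CPI x clock cycle time
# Sketch only; names (cpu_time, clock_rate_hz, ...) are illustrative.

def cpu_time(instruction_count: int, cpi: float, clock_rate_hz: float) -> float:
    """Seconds/program = (instructions/program) * (cycles/instruction) * (seconds/cycle)."""
    cycle_time_s = 1.0 / clock_rate_hz
    return instruction_count * cpi * cycle_time_s

# Example: 10 million instructions, average CPI of 2.2, 500 MHz clock.
print(cpu_time(10_000_000, 2.2, 500e6))   # 0.044 seconds
```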

  11. CPI
  • CPI – "average clock cycles per instruction"

      CPI = (CPU Time x Clock Rate) / Instruction Count
          = Clock Cycles / Instruction Count

      CPU time = ClockCycleTime x Σ (CPI_i x I_i)   for i = 1..n

      CPI = Σ (CPI_i x F_i)   for i = 1..n,   where F_i = I_i / Instruction Count   ("instruction frequency")

  • Invest resources where time is spent!

  12. Example (RISC processor) – Base Machine (Reg / Reg)

     Op       Freq   Cycles   CPI(i)   % Time
     ALU      50%    1        0.5      23%
     Load     20%    5        1.0      45%
     Store    10%    3        0.3      14%
     Branch   20%    2        0.4      18%
                              CPI = 2.2   (typical mix)

  • How much faster would the machine be if a better data cache reduced the average load time to 2 cycles?
  • How does this compare with using branch prediction to shave a cycle off the branch time?
  • What if two ALU instructions could be executed at once?
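One way to work these questions is to recompute CPI = Σ (Freq_i x Cycles_i) under each change and divide the base CPI by the new CPI. The Python sketch below does exactly that; the dictionary layout and variable names are my own, not from the slides.

```python
# Sketch of the CPI calculation for the instruction mix above.
# The what-if answers recompute CPI = sum(freq_i * cycles_i); names are illustrative.

mix = {            # op: (frequency, cycles)
    "ALU":    (0.50, 1),
    "Load":   (0.20, 5),
    "Store":  (0.10, 3),
    "Branch": (0.20, 2),
}

def cpi(mix):
    return sum(freq * cycles for freq, cycles in mix.values())

base = cpi(mix)                                   # 2.2

faster_loads = dict(mix, Load=(0.20, 2))          # better data cache
better_branches = dict(mix, Branch=(0.20, 1))     # branch prediction
dual_alu = dict(mix, ALU=(0.50, 0.5))             # two ALU ops per cycle

print(base / cpi(faster_loads))                   # 2.2 / 1.6  -> ~1.38x faster
print(base / cpi(better_branches))                # 2.2 / 2.0  ->  1.1x faster
print(base / cpi(dual_alu))                       # 2.2 / 1.95 -> ~1.13x faster
```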

  13. Amdahl's Law
  • Speedup due to enhancement E:

      Speedup(E) = ExTime(without E) / ExTime(with E) = Performance(with E) / Performance(without E)

  • Suppose that enhancement E accelerates a fraction F of the task by a factor S and the remainder of the task is unaffected. Then:

      ExTime(with E) = ((1 - F) + F/S) x ExTime(without E)

      Speedup(with E) = 1 / ((1 - F) + F/S)
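A minimal Python sketch of Amdahl's Law, with illustrative numbers (an enhancement covering 45% of execution time and speeding that portion up by 2.5x, loosely mirroring the data-cache question on the previous slide); the function and parameter names are my own.

```python
# Amdahl's Law sketch: fraction F of the task sped up by factor S.
# Names (amdahl_speedup, fraction_enhanced, speedup_factor) are illustrative.

def amdahl_speedup(fraction_enhanced: float, speedup_factor: float) -> float:
    """Overall speedup = 1 / ((1 - F) + F / S)."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_factor)

# Example: the enhancement applies to 45% of execution time and makes
# that portion 2.5x faster.
print(amdahl_speedup(0.45, 2.5))   # ~1.37x overall
```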

  14. Summary: Evaluating Instruction Sets
  • Design-time metrics
    • Can it be implemented? In how long? At what cost?
    • Can it be programmed? Ease of compilation?
  • Static metrics
    • How many bytes does the program occupy in memory?
  • Dynamic metrics
    • How many instructions are executed?
    • How many bytes does the processor fetch to execute the program?
    • How many clocks are required per instruction?
    • How "lean" a clock is practical?
  • Best metric: time to execute the program!
    • Note: this depends on the instruction set, processor organization, and compilation techniques (i.e., on instruction count, CPI, and cycle time)

  15. Review: Design Principles
  • Simplicity favors regularity
    • fixed size instructions – 32 bits
    • only three instruction formats
  • Good design demands good compromises
    • three instruction formats
  • Smaller is faster
    • limited instruction set
    • limited number of registers in the register file
    • limited number of addressing modes
  • Make the common case fast
    • arithmetic operands come from the register file (load-store machine)
    • allow instructions to contain immediate operands

  16. The Processor: Datapath & Control
  [Figure: instruction cycle – Fetch (PC = PC + 4) → Decode → Exec]
  • We're ready to look at an implementation of the MIPS
  • Simplified to contain only:
    • memory-reference instructions: lw, sw
    • arithmetic-logical instructions: add, sub, and, or, slt
    • control flow instructions: beq, j
  • Generic implementation:
    • use the program counter (PC) to supply the instruction address and fetch the instruction from memory (and update the PC)
    • decode the instruction (and read registers)
    • execute the instruction
  • All instructions (except j) use the ALU after reading the registers. Why?
    • memory-reference? arithmetic? control flow?

  17. Abstract Implementation View
  • Two types of functional units:
    • elements that operate on data values (combinational)
    • elements that contain state (sequential)
  • Single cycle operation
  • Split memory (Harvard) model – one memory for instructions and one for data
  [Figure: high-level datapath – PC, Instruction Memory, Register File, ALU, and Data Memory connected by address and read/write data paths]

  18. Clocking Methodologies
  [Figure: clock waveform showing the cycle time, the rising (positive) edge, and the falling (negative) edge]
  • Clocking methodology defines when signals can be read and when they can be written
    • clock rate = 1 / (cycle time); e.g., 10 nsec cycle time = 100 MHz clock rate, 1 nsec cycle time = 1 GHz clock rate
  • State element design choices
    • level sensitive latch
    • master-slave and edge-triggered flipflops

  19. Review: State Elements
  • Set-reset (SR) latch
  • Level sensitive D latch
    • latch is transparent when the clock is high (copies input to output)
  [Figure: SR latch (S, R, Q, !Q) and D latch (D, clock, Q, !Q) schematics]

  20. Review: State Elements, con't
  • Race problem with latch-based design …
  • Consider the case where D-latch0 holds a 0 and D-latch1 holds a 1, and you want to transfer the contents of D-latch0 to D-latch1 and vice versa
    • must hold the clock high long enough for the transfer to take place
    • must not leave the clock high so long that the transferred data is copied back into the original latch
  • Two-sided clock constraint
  [Figure: two D latches (D-latch0, D-latch1) cross-coupled, both driven by the same clock]

  21. Review: State Elements, con't
  [Figure: master-slave flipflop – two D latches in series sharing the clock, plus the D flipflop symbol (D, clock, Q, !Q)]
  • Solution is to use flipflops that change state (Q) only on the clock edge (master-slave)
    • the master (first D-latch) copies the input when the clock is high (the slave (second D-latch) is locked in its memory state and the output does not change)
    • the slave copies the master when the clock goes low (the master is now locked in its memory state, so changes at the input are not loaded into the master D-latch)
  • One-sided clock constraint
    • must make the clock cycle time long enough to accommodate the worst case delay path

  22. Our Implementation
  • An edge-triggered methodology
  • Typical execution:
    • read contents of some state elements
    • send values through some combinational logic
    • write results to one or more state elements
  • Assumes state elements are written on every clock cycle; if not, an explicit write control signal is needed
    • write occurs only when both the write control is asserted and the clock edge occurs
  [Figure: state element 1 → combinational logic → state element 2, all clocked; one pass per clock cycle]
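As a software analogy (not hardware, and not from the slides), the sketch below models one state element with a write control signal: the combinational result is presented at the input, but the stored value changes only at the clock edge and only if the write enable is asserted. The class and method names are illustrative.

```python
# Minimal sketch of the edge-triggered read -> combinational logic -> write
# pattern described above.  Names are illustrative, not from the slides.

class StateElement:
    """A register written only when write_enable is asserted at the clock edge."""
    def __init__(self, value: int = 0):
        self.value = value          # current (stable) contents
        self._next = value          # value captured for the next edge

    def prepare(self, new_value: int, write_enable: bool) -> None:
        # Combinational result presented at the input; latched only if enabled.
        self._next = new_value if write_enable else self.value

    def clock_edge(self) -> None:
        # On the active clock edge the prepared value becomes the new state.
        self.value = self._next

# One clock cycle: read state, run it through "combinational logic", write back.
reg_a, reg_b = StateElement(3), StateElement(0)
result = reg_a.value + 1            # the combinational logic for this cycle
reg_b.prepare(result, write_enable=True)
reg_b.clock_edge()                  # reg_b.value is now 4
```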

  23. Fetching Instructions
  • Fetching an instruction involves
    • reading the instruction from the Instruction Memory
    • updating the PC to hold the address of the next instruction
  • The PC is updated every cycle, so it does not need an explicit write control signal
  • The Instruction Memory is read every cycle, so it doesn't need an explicit read control signal
  [Figure: PC → Read Address of the Instruction Memory → Instruction; an adder computes PC + 4]
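In software terms the fetch step is one memory read plus an add. The sketch below assumes a hypothetical instruction_memory dictionary and illustrative instruction words; none of these names come from the slides.

```python
# Sketch of the fetch step: read instruction memory at PC, compute PC + 4.
# instruction_memory is a hypothetical dict of word-addressed instruction words.

instruction_memory = {0x0040_0000: 0x0232_8020,   # an add instruction
                      0x0040_0004: 0x8C08_0004}   # an lw instruction

def fetch(pc: int) -> tuple[int, int]:
    """Return (instruction, next_pc); the PC always advances by 4 during fetch."""
    instruction = instruction_memory[pc]
    return instruction, pc + 4

instr, next_pc = fetch(0x0040_0000)   # next_pc == 0x0040_0004
```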

  24. Decoding Instructions
  • Decoding an instruction involves
    • sending the fetched instruction's opcode and function field bits to the control unit
    • reading two values from the Register File (the Register File addresses are contained in the instruction)
  [Figure: the Instruction feeds the Control Unit and the Register File (Read Addr 1/2, Write Addr, Write Data, Read Data 1/2)]

  25. Executing R Format Operations

     R-type:  op [31:26] | rs [25:21] | rt [20:16] | rd [15:11] | shamt [10:6] | funct [5:0]

  • R format operations (add, sub, slt, and, or)
    • perform the operation indicated by op and funct on the values in rs and rt
    • store the result back into the Register File (into location rd)
  • Note that the Register File is not written on every cycle (e.g. sw), so we need an explicit write control signal (RegWrite) for the Register File
  [Figure: Register File (Read Addr 1/2, Write Addr, Write Data, Read Data 1/2) feeding the ALU; ALU control, RegWrite, overflow, and zero signals]
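The field boundaries above translate directly into shift-and-mask operations. The sketch below extracts the R-type fields and performs an add when op = 0 and funct = 0x20; the register-file list, helper name, and sample encoding are illustrative, not from the slides.

```python
# Sketch: pull the R-type fields out of a 32-bit instruction word and
# perform the indicated operation.  Field positions follow the layout above.

def r_fields(instr: int) -> dict:
    return {
        "op":    (instr >> 26) & 0x3F,
        "rs":    (instr >> 21) & 0x1F,
        "rt":    (instr >> 16) & 0x1F,
        "rd":    (instr >> 11) & 0x1F,
        "shamt": (instr >> 6)  & 0x1F,
        "funct": instr         & 0x3F,
    }

reg_file = [0] * 32
reg_file[17], reg_file[18] = 5, 7          # pretend $17 and $18 hold operands

f = r_fields(0x0232_8020)                  # add $16, $17, $18
if f["op"] == 0 and f["funct"] == 0x20:    # funct 0x20 = add
    reg_file[f["rd"]] = reg_file[f["rs"]] + reg_file[f["rt"]]   # RegWrite asserted

print(reg_file[16])                        # 12
```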

  26. Executing Load and Store Operations

     I-type:  op [31:26] | rs [25:21] | rt [20:16] | address offset [15:0]

  • Load and store operations
    • compute a memory address by adding the base register (in rs) to the 16-bit signed offset field in the instruction
      • the base register was read from the Register File during decode
      • the offset value in the low order 16 bits of the instruction must be sign extended to create a 32-bit signed value
    • the store value, read from the Register File during decode, must be written to the Data Memory
    • the load value, read from the Data Memory, must be stored in the Register File
  (A sketch of the address computation follows the datapath slide below.)

  27. Executing Load and Store Operations, con't
  [Figure: load/store datapath – Register File and Sign Extend feeding the ALU (address), Data Memory with Read Data and Write Data ports, and the RegWrite, ALU control, MemWrite, MemRead, overflow, and zero signals]
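The address computation described on the previous slide is a sign extension followed by an add. Here is a minimal sketch; the helper names and the lw example are my own, not from the slides.

```python
# Sketch of the load/store address computation: sign extend the 16-bit offset
# and add it to the base register value.

def sign_extend_16(value: int) -> int:
    """Extend a 16-bit field to a signed 32-bit value."""
    value &= 0xFFFF
    return value - 0x1_0000 if value & 0x8000 else value

def effective_address(base_reg_value: int, offset_field: int) -> int:
    return (base_reg_value + sign_extend_16(offset_field)) & 0xFFFF_FFFF

# lw $t0, -8($sp) with $sp = 0x7FFF_EFFC: offset 0xFFF8 sign extends to -8.
print(hex(effective_address(0x7FFF_EFFC, 0xFFF8)))   # 0x7fffeff4
```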

  28. Executing Branch Operations

     I-type:  op [31:26] | rs [25:21] | rt [20:16] | address offset [15:0]

  • Branch operations have to
    • compare the operands read from the Register File during decode (the rs and rt values) for equality (zero ALU output)
    • compute the branch target address by adding the updated PC to the sign extended 16-bit signed offset field in the instruction
      • the "base register" is the updated PC
      • the offset value in the low order 16 bits of the instruction must be sign extended to create a 32-bit signed value and then shifted left 2 bits (the offset counts words, so shifting converts it to a byte offset)
  (A sketch of the branch-target computation follows the datapath slide below.)

  29. Executing Branch Operations, con't
  [Figure: branch datapath – Register File outputs compared by the ALU (zero signal to the branch control logic); Sign Extend (16 → 32) and Shift left 2 feed an adder, together with PC + 4, to form the branch target address]
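The branch target path on this slide computes (PC + 4) + (sign-extended offset << 2). A minimal sketch with illustrative values (the helper names and the example offset are my own):

```python
# Sketch of the branch-target computation:
# target = (PC + 4) + (sign-extended offset << 2).

def sign_extend_16(value: int) -> int:
    value &= 0xFFFF
    return value - 0x1_0000 if value & 0x8000 else value

def branch_target(pc: int, offset_field: int) -> int:
    """Branch target relative to the updated PC (PC + 4)."""
    return (pc + 4 + (sign_extend_16(offset_field) << 2)) & 0xFFFF_FFFF

# beq with offset field 0xFFFD (-3 words) at PC = 0x0040_0010
print(hex(branch_target(0x0040_0010, 0xFFFD)))   # 0x400008
```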

  30. Executing Jump Operations

     J-type:  op [31:26] | jump target address [25:0]

  • Jump operations have to
    • replace the lower 28 bits of the PC with the lower 26 bits of the fetched instruction shifted left by 2 bits
  [Figure: jump datapath – the 26-bit target field from the Instruction Memory is shifted left 2 to form 28 bits and combined with the upper 4 bits of PC + 4 to form the jump address]
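A minimal sketch of the jump-address formation (upper 4 bits of PC + 4 concatenated with the shifted 26-bit target field); the function name and sample values are illustrative, not from the slides.

```python
# Sketch of the jump-address computation: upper 4 bits of PC + 4 concatenated
# with the 26-bit target field shifted left by 2.

def jump_address(pc: int, target_field_26: int) -> int:
    upper_4 = (pc + 4) & 0xF000_0000          # top 4 bits of the updated PC
    return upper_4 | ((target_field_26 & 0x03FF_FFFF) << 2)

# j with target field 0x0100003, executed at PC = 0x0040_0000
print(hex(jump_address(0x0040_0000, 0x0100003)))   # 0x40000c
```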

  31. Our Simple Control Structure
  • We wait for everything to settle down
    • the ALU might not produce the "right answer" right away
    • we use write signals along with the clock edge to determine when to write (to the Register File and the Data Memory)
  • Cycle time is determined by the length of the longest path
  • We are ignoring some details like register setup and hold times
