CENG 450 Computer Systems & Architecture – Lecture 3
Amirali Baniasadi, amirali@ece.uvic.ca
Performance
• Purchasing perspective: given a collection of machines, which has the
  – best performance?
  – least cost?
  – best performance / cost?
• Design perspective: faced with design options, which has the
  – best performance improvement?
  – least cost?
  – best performance / cost?
• Both require
  – a basis for comparison
  – a metric for evaluation
• Our goal is to understand the cost and performance implications of architectural choices.
Two Notions of "Performance"

Plane              DC to Paris   Speed      Passengers   Throughput (pmph)
Boeing 747         6.5 hours     610 mph    470          286,700
BAC/Sud Concorde   3 hours       1350 mph   132          178,200

Which has higher performance?
• Time to do the task (execution time): execution time, response time, latency
• Tasks per day, hour, week, sec, ns, ...: throughput, bandwidth
• Response time and throughput are often in opposition.
Example
• Flight time of Concorde vs. Boeing 747: Concorde is 1350 mph / 610 mph = 2.2 times faster (equivalently, 6.5 hours / 3 hours = 2.2).
• Throughput of Concorde vs. Boeing 747: Concorde delivers 178,200 pmph / 286,700 pmph = 0.62 times the Boeing's throughput; the Boeing is 286,700 pmph / 178,200 pmph = 1.6 times faster.
• Boeing is 1.6 times ("60%") faster in terms of throughput.
• Concorde is 2.2 times ("120%") faster in terms of flying time.
• We will focus primarily on execution time for a single job.
Definitions
• Performance is in units of things per second; bigger is better.
• If we are primarily concerned with response time:
  performance(X) = 1 / execution_time(X)
• "X is n times faster than Y" means
  n = Performance(X) / Performance(Y) = Execution_time(Y) / Execution_time(X)
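As a quick check, both notions of performance from the previous slides can be computed directly. A minimal Python sketch using the trip times and throughputs quoted above:

```python
# Minimal sketch: "n times faster" for latency and for throughput,
# using the Concorde vs. Boeing 747 numbers from the previous slides.

def times_faster(perf_x, perf_y):
    """X is n times faster than Y when n = Performance(X) / Performance(Y)."""
    return perf_x / perf_y

# Performance as 1 / execution_time (hours, DC to Paris)
perf_concorde = 1 / 3.0
perf_boeing = 1 / 6.5
print(round(times_faster(perf_concorde, perf_boeing), 2))  # ~2.17: Concorde wins on latency

# Performance as throughput (passenger-miles per hour)
print(round(times_faster(286_700, 178_200), 2))            # ~1.61: Boeing wins on throughput
```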
Performance Measurement
• How about a collection of programs?
• Example: three machines (A, B, C), two programs (P1, P2), and three weightings W(1)-W(3).

      Execution time (sec)        Weights
      A       B      C            W(1)    W(2)     W(3)
P1    1       10     20           0.50    0.909    0.999
P2    1000    100    20           0.50    0.091    0.001

Weighted arithmetic mean = sum over programs of Weight(i) × Time(i):

        A       B     C
W(1)    500.5   55    20
W(2)    91.9    18    20
W(3)    2       10    20
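A minimal sketch of how the table above is filled in, computing the weighted arithmetic mean of the two program times for each machine:

```python
# Minimal sketch: weighted arithmetic mean of execution times for the
# three-machine / two-program example above.

times = {            # seconds for (P1, P2)
    "A": (1, 1000),
    "B": (10, 100),
    "C": (20, 20),
}
weightings = {       # (weight of P1, weight of P2)
    "W(1)": (0.5,   0.5),
    "W(2)": (0.909, 0.091),
    "W(3)": (0.999, 0.001),
}

for wname, (w1, w2) in weightings.items():
    row = {m: w1 * t1 + w2 * t2 for m, (t1, t2) in times.items()}
    print(wname, row)    # e.g. W(1): A=500.5, B=55, C=20
```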
Performance Measurement
• Another option: geometric means (self-study: pages 37-39 of the textbook).
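For reference, the geometric mean is usually taken over execution times normalized to a reference machine. A minimal sketch, where the choice of machine A as the reference is only an assumption for illustration:

```python
# Minimal sketch: geometric mean of execution-time ratios, normalized to
# machine A (the choice of reference machine here is only for illustration).
from math import prod

times = {"A": (1, 1000), "B": (10, 100), "C": (20, 20)}
ref = times["A"]

for machine, t in times.items():
    ratios = [ti / ri for ti, ri in zip(t, ref)]
    gmean = prod(ratios) ** (1 / len(ratios))
    # the ratio of geometric means is the same whichever machine is the reference
    print(machine, round(gmean, 3))
```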
Metrics of Performance (at each level of the system)
• Application: answers per month, operations per second
• Programming language / compiler: (millions of) instructions per second – MIPS; (millions of) floating-point operations per second – MFLOP/s
• ISA / datapath / control: megabytes per second
• Function units: cycles per second (clock rate)
• Transistors, wires, pins
Relating Processor Metrics
• CPU execution time = CPU clock cycles × clock cycle time
  or CPU execution time = CPU clock cycles ÷ clock rate
• CPU clock cycles = instruction count × average clock cycles per instruction (CPI)
  or CPI = CPU clock cycles ÷ instruction count
• CPI tells us something about the instruction set architecture, the implementation of that architecture, and the program measured.
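Putting these relations together, a minimal sketch with hypothetical numbers (the instruction count, CPI, and clock rate below are illustrative only):

```python
# Minimal sketch: relating the processor metrics above.
# All numbers are hypothetical, for illustration only.

instruction_count = 2_000_000      # instructions executed
cpi               = 2.2            # average clock cycles per instruction
clock_rate        = 500e6          # Hz (cycle time = 1 / clock_rate)

cpu_clock_cycles = instruction_count * cpi
cpu_time         = cpu_clock_cycles / clock_rate    # = IC * CPI * cycle time

print(f"{cpu_time * 1e3:.2f} ms")                   # 8.80 ms for these numbers
```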
Aspects of CPU Performance
CPU time = Seconds / Program
         = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)

Which of the three factors (instruction count, CPI, clock rate) is affected by the program, the compiler, the instruction set architecture, the organization, and the technology?
Aspects of CPU Performance
CPU time = Seconds / Program
         = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)

                    instr. count   CPI    clock rate
Program                  X
Compiler                 X         (X)
Instr. Set Arch.         X          X
Organization                        X         X
Technology                                    X
Organizational Trade-offs
• Application, programming language, compiler, ISA → instruction mix
• Datapath, control → CPI
• Function units, transistors, wires, pins → cycle time
CPI – "Average Cycles per Instruction"
• CPI = (CPU time × clock rate) / instruction count = clock cycles / instruction count
• CPU time = cycle time × Σ (i = 1..n) CPI_i × I_i
• CPI = Σ (i = 1..n) CPI_i × F_i, where F_i = I_i / instruction count ("instruction frequency")
• Invest resources where time is spent!
Example (RISC processor)
Base machine (Reg / Reg), typical mix:

Op       Freq   Cycles   CPI(i)   % Time
ALU      50%    1        0.5      23%
Load     20%    5        1.0      45%
Store    10%    3        0.3      14%
Branch   20%    2        0.4      18%
                Total:   2.2

• How much faster would the machine be if a better data cache reduced the average load time to 2 cycles?
• How does this compare with using branch prediction to shave a cycle off the branch time?
• What if two ALU instructions could be executed at once?
(A sketch of these calculations follows below.)
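One way to work these out, treating speedup as the ratio of old to new CPI (instruction count and clock rate assumed unchanged); modeling the dual-issue ALU case as halving the effective ALU cycles is an assumption made here for illustration:

```python
# Minimal sketch: CPI as a frequency-weighted sum, applied to the questions above.

mix = {"ALU": (0.5, 1), "Load": (0.2, 5), "Store": (0.1, 3), "Branch": (0.2, 2)}

def cpi(m):
    return sum(freq * cycles for freq, cycles in m.values())

base = cpi(mix)                                  # 2.2

faster_loads   = dict(mix, Load=(0.2, 2))        # better data cache: loads in 2 cycles
shorter_branch = dict(mix, Branch=(0.2, 1))      # branch prediction shaves one cycle
dual_alu       = dict(mix, ALU=(0.5, 0.5))       # two ALU ops at once ~ 0.5 effective cycles

for name, m in [("loads -> 2 cycles", faster_loads),
                ("branches -> 1 cycle", shorter_branch),
                ("dual-issue ALU", dual_alu)]:
    print(name, "CPI =", round(cpi(m), 2), "speedup =", round(base / cpi(m), 2))
# loads -> 2 cycles: CPI 1.6, speedup ~1.38; branches -> 1 cycle: CPI 2.0, speedup 1.10;
# dual-issue ALU: CPI 1.95, speedup ~1.13
```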
Example (RISC processor)
Base machine (Reg / Reg), typical mix:

Op       Freq   Cycles   CPI(i)   % Time
ALU      50%    1        0.5      23%
Load     20%    5        1.0      45%
Store    10%    3        0.3      14%
Branch   20%    2        0.4      18%
                Total:   2.2

How much faster would the machine be if:
A) loads took "0" cycles?
B) stores took "0" cycles?
C) ALU ops took "0" cycles?
D) branches took "0" cycles?
MAKE THE COMMON CASE FAST (a sketch of these speedups follows below)
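A minimal sketch of the "zero-cycle" thought experiment: zeroing out the class that accounts for the largest share of time (loads) buys the most, which is the point of making the common case fast.

```python
# Minimal sketch: speedup if each instruction class took "0" cycles.

mix = {"ALU": (0.5, 1), "Load": (0.2, 5), "Store": (0.1, 3), "Branch": (0.2, 2)}
base_cpi = sum(f * c for f, c in mix.values())          # 2.2

for op, (freq, cycles) in mix.items():
    new_cpi = base_cpi - freq * cycles                  # remove that class's contribution
    print(f"{op} -> 0 cycles: CPI = {new_cpi:.1f}, speedup = {base_cpi / new_cpi:.2f}")
# Load -> 0: speedup 1.83; ALU -> 0: 1.29; Branch -> 0: 1.22; Store -> 0: 1.16
```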
Amdahl's Law
Speedup due to enhancement E:

  Speedup(E) = ExTime(without E) / ExTime(with E) = Performance(with E) / Performance(without E)

Suppose enhancement E accelerates a fraction F of the task by a factor S, and the remainder of the task is unaffected. Then:

  ExTime(with E) = ((1 - F) + F/S) × ExTime(without E)
  Speedup(with E) = ExTime(without E) / (((1 - F) + F/S) × ExTime(without E)) = 1 / ((1 - F) + F/S)
Amdahl's Law – Example
A new CPU makes the computation in a Web-serving application 10 times faster. The old CPU spent 40% of its time on computation and 60% waiting for I/O. What is the overall speedup?

  Fraction_enhanced = 0.4, Speedup_enhanced = 10
  Speedup_overall = 1 / (0.6 + 0.4/10) = 1.56
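The same calculation as a minimal sketch, using the Amdahl's Law formula from the previous slide:

```python
# Minimal sketch: Amdahl's Law applied to the Web-serving example above.

def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    return 1 / ((1 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

print(round(amdahl_speedup(0.4, 10), 2))   # 1.56: limited by the 60% spent waiting for I/O
```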
Example from Quiz 1 (2004)
a) A program consists of 80% initialization code and 20% code forming the main iteration loop, which is run 1000 times. The total runtime of the program is 100 seconds. Calculate the fraction of the total runtime spent in the initialization and in the iteration. Which part would you optimize?
b) The program should have a total runtime of 60 seconds. How can this be achieved? (15 points)
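One way to set this up, assuming (purely for illustration, since the quiz statement does not say) that each line of code takes roughly the same time per execution:

```python
# Minimal sketch of one way to reason about the quiz question, assuming every
# line of code costs about the same per execution (an illustrative assumption).

init_code, loop_code, iterations, total_time = 0.80, 0.20, 1000, 100.0

init_work = init_code * 1             # initialization executed once
loop_work = loop_code * iterations    # loop body executed 1000 times

init_frac = init_work / (init_work + loop_work)        # ~0.004
loop_frac = 1 - init_frac                              # ~0.996
print(init_frac * total_time, loop_frac * total_time)  # ~0.4 s init, ~99.6 s loop -> optimize the loop

# b) To reach 60 s total, nearly all savings must come from the loop:
target = 60.0
needed_loop_time = target - init_frac * total_time
print(loop_frac * total_time / needed_loop_time)       # loop must be sped up by ~1.67x
```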
Marketing Metrics
• MIPS = instruction count / (execution time × 10^6) = clock rate / (CPI × 10^6)
  – machines with different instruction sets?
  – programs with different instruction mixes?
  – depends on the dynamic frequency of instructions
  – often uncorrelated with performance
• GFLOPS = FP operations / (execution time × 10^9) (e.g., PlayStation: 6.4 GFLOPS)
  – machine dependent
  – often not where time is spent
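To see why MIPS can mislead, compare two hypothetical machines running the same program (all numbers below are invented for illustration):

```python
# Minimal sketch: a higher-MIPS machine can still be slower if it needs
# more instructions for the same task. Hypothetical machines and numbers.

def mips(clock_rate_hz, cpi):
    return clock_rate_hz / (cpi * 1e6)

def exec_time(instruction_count, cpi, clock_rate_hz):
    return instruction_count * cpi / clock_rate_hz

a = dict(instruction_count=1.0e9, cpi=2.0, clock_rate_hz=500e6)   # fewer, richer instructions
b = dict(instruction_count=2.5e9, cpi=1.0, clock_rate_hz=500e6)   # more, simpler instructions

for name, m in [("A", a), ("B", b)]:
    print(name, "MIPS =", mips(m["clock_rate_hz"], m["cpi"]),
          "time =", exec_time(**m), "s")
# B reports higher MIPS (500 vs. 250) but takes longer (5.0 s vs. 4.0 s).
```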
Why Do Benchmarks?
• How we evaluate differences
  – between different systems
  – from changes to a single system
• Provide a target
  – benchmarks should represent a large class of important programs
  – improving benchmark performance should help many programs
• For better or worse, benchmarks shape a field
  – good benchmarks accelerate progress: a good target for development
  – bad benchmarks hurt progress: do they help real programs, or just sell machines/papers? Inventions that help real programs don't help the benchmarks.
Basis of Evaluation
• Actual target workload
  – Pros: representative
  – Cons: very specific; non-portable; difficult to run or measure; hard to identify cause
• Full application benchmarks
  – Pros: portable; widely used; improvements useful in reality
  – Cons: less representative
• Small "kernel" benchmarks
  – Pros: easy to run, early in the design cycle
  – Cons: easy to "fool"
• Microbenchmarks
  – Pros: identify peak capability and potential bottlenecks
  – Cons: "peak" may be a long way from application performance
Successful Benchmark: SPEC
• 1987: the RISC industry was mired in "bench marketing" ("That is an 8-MIPS machine, but they claim 10 MIPS!")
• EE Times and five companies (Sun, MIPS, HP, Apollo, DEC) banded together in 1988 to form the System Performance Evaluation Cooperative (SPEC).
• Created a standard list of programs, inputs, and reporting rules: some real programs, includes OS calls, some I/O.
SPEC First Round
• First round in 1989: 10 programs, a single number to summarize performance.
• One program spent 99% of its time in a single line of code.
• A new front-end compiler could improve its result dramatically.
SPEC95
• Eighteen application benchmarks (with inputs) reflecting a technical computing workload
• Eight integer: go, m88ksim, gcc, compress, li, ijpeg, perl, vortex
• Ten floating-point intensive: tomcatv, swim, su2cor, hydro2d, mgrid, applu, turb3d, apsi, fpppp, wave5
• Must be run with standard compiler flags: eliminates special undocumented incantations that may not even generate working code for real programs.
Summary
CPU time = Seconds / Program = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)
• Time is the measure of computer performance!
• Good products are created when you have:
  – good benchmarks
  – good ways to summarize performance
• Without good benchmarks and summaries, the choice between improving a product for real programs and improving it to get more sales tilts toward sales almost every time.
• Remember Amdahl's Law: speedup is limited by the unimproved part of the program.
Readings & More...
Reminder: read textbook Chapter 1, pages 1 to 47, and the Moore paper (posted on the course web site).