CSE 520 Advanced Computer Architecture
Lec 3: Role of Performance and Tracking Technology
Sandeep K. S. Gupta, School of Computing and Informatics, Arizona State University
Based on slides by David Patterson and M. Younis
In response to Q on GPUs
• GPGPU: general-purpose computing using graphics processors
• See http://en.wikipedia.org/wiki/GPGPU
• Note the architectural differences between general-purpose processors and GPUs
• GPUs are designed to exploit data parallelism in stream-processing modes
• See the tutorial http://www.gpgpu.org/s2005/ and the survey paper “A Survey of General-Purpose Computation on Graphics Hardware,” J. Owens et al., http://www.blackwell-synergy.com/doi/pdf/10.1111/j.1467-8659.2007.01012.x?cookieSet=1
CSE 520 Fall 2007
Crossroads: Conventional Wisdom in Computer Architecture
• Old conventional wisdom (CW): power is free, transistors are expensive
• New CW: “Power wall”: power is expensive, transistors are free (can put more on a chip than can afford to turn on)
• Old CW: sufficient performance comes from increasing instruction-level parallelism via compilers and hardware innovation (out-of-order, speculation, VLIW, …)
• New CW: “ILP wall”: law of diminishing returns on more hardware for ILP
• Old CW: multiplies are slow, memory access is fast
• New CW: “Memory wall”: memory is slow, multiplies are fast (200 clock cycles to DRAM, 4 clocks for a multiply)
• Old CW: uniprocessor performance 2X / 1.5 yrs
• New CW: Power Wall + ILP Wall + Memory Wall = Brick Wall; uniprocessor performance now 2X / 5(?) yrs
• Sea change in chip design: multiple “cores” (2X processors per chip / ~2 years); more, simpler processors are more power efficient
Agenda
• Review from last class
• Tracking Technology
• Quantifying Power Consumption
Computer Architecture is Design and Analysis
• Architecture is an iterative process:
• Searching the space of possible designs
• At all levels of computer systems
[Figure: creativity generates good, mediocre, and bad ideas, which are filtered by cost/performance analysis]
What Computer Architecture Brings to the Table
• Other fields often borrow ideas from architecture
• Quantitative principles of design:
• Take advantage of parallelism
• Principle of locality
• Focus on the common case
• Amdahl’s Law
• The processor performance equation
• Careful, quantitative comparisons:
• Define, quantify, and summarize relative performance
• Define and quantify relative cost
• Define and quantify dependability
• Define and quantify power
• Culture of anticipating and exploiting advances in technology
• Culture of well-defined interfaces that are carefully implemented and thoroughly checked
Chapter 1: Fundamentals of Computer Design
• Quantitative principles of design
• Technology trends: culture of tracking, anticipating, and exploiting advances in technology
• Careful, quantitative comparisons:
• Define, quantify, and summarize relative performance
• Define and quantify relative cost
• Define and quantify dependability
• Define and quantify power
4) Amdahl’s Law
Speedup_overall = ExecTime_old / ExecTime_new = 1 / ((1 − Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced)
Best you could ever hope to do:
Speedup_max = 1 / (1 − Fraction_enhanced)
Amdahl’s Law Example
• New CPU is 10X faster
• I/O-bound server, so 60% of time is spent waiting for I/O
Speedup_overall = 1 / ((1 − 0.4) + 0.4/10) = 1 / 0.64 ≈ 1.56
• Apparently, it’s human nature to be attracted by “10X faster” rather than keeping in perspective that it’s just 1.6X faster
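The two Amdahl's Law slides above can be sketched in a few lines of Python (an illustrative sketch; the function name is mine, not from the slides):

```python
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    """Overall speedup when only part of the workload is accelerated."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# I/O-bound server: only 40% of the time is CPU work that the 10X-faster CPU helps
print(round(amdahl_speedup(0.4, 10), 2))   # 1.56
```

Note that as speedup_enhanced grows without bound, the result approaches 1 / (1 − fraction_enhanced), the "best you could ever hope to do" bound above.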
5) Processor Performance Equation
CPU time = Seconds/Program = (Instructions/Program) × (Cycles/Instruction) × (Seconds/Cycle)

Which factors affect each term:

              Inst Count   CPI   Clock Rate
Program            X
Compiler           X       (X)
Inst. Set          X        X
Organization                X        X
Technology                           X
And in Conclusion …
• Computer architecture >> instruction sets
• Computer architecture skill sets are applicable to other programming endeavors
• Computer systems are at a crossroads, from sequential to parallel computing
• Salvation requires innovation in many fields, including computer architecture
• Quantitative fundamental principles:
• Take advantage of parallelism
• Principle of locality
• Focus on the common case
• Amdahl’s Law
• The processor performance equation
• Read Chapter 1, then Appendix A
The Role of Performance
• Hardware performance is key to the effectiveness of the entire system
• Performance has to be measured and compared to evaluate various design and technological approaches
• To optimize performance, the major factors affecting it have to be known
• For different types of applications, different performance metrics may be appropriate, and different aspects of a computer system may be most significant
• Instruction use and implementation, the memory hierarchy, and I/O handling are among the factors that affect performance
Computer Engineering Methodology
• An iterative cycle:
1. Evaluate existing systems for bottlenecks (using benchmarks and technology trends)
2. Simulate new designs and organizations (using workloads)
3. Implement the next-generation system (considering implementation complexity)
• Performance and cost are the main evaluation metrics for design quality
* Slide is courtesy of Dave Patterson
Defining Performance
• Performance means different things to different people, so its assessment is subtle
• Analogy from the airline industry: how do you measure performance for a passenger airplane?
• Cruising speed (how fast it gets to the destination)
• Flight range (how far it can reach)
• Passenger capacity (how many passengers it can carry)
• All of the above
• Criteria for performance evaluation differ among users and designers
Performance Metrics
• Response (execution) time:
• The time between the start and the completion of a task
• Measures user perception of system speed
• Common in reactive and time-critical systems, single-user computers, etc.
• Throughput:
• The total number of tasks done in a given time
• Most relevant to batch processing (billing, credit card processing, etc.)
• Mainly used for input/output systems (disk access, printers, etc.)
• Examples:
• Replacing the processor of a computer with a faster version enhances BOTH response time and throughput
• Adding additional processors to a system that uses multiple processors for separate tasks (e.g., handling an airline reservation system) improves ONLY throughput
• Decreasing response time almost always improves throughput
Response-Time Metric
• Maximizing performance means minimizing response (execution) time:
Performance(P) = 1 / ExecutionTime(P)
• Processor P1 is faster than P2 if, for a given workload L, P1 takes less time to execute L than P2 does
• Relative performance captures the performance ratio of P1 to P2 for the same workload:
Performance(P1) / Performance(P2) = ExecutionTime(P2) / ExecutionTime(P1)
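The relative-performance definition above can be sketched as follows (a minimal illustration; the helper names and the 8 s / 12 s numbers are mine):

```python
def performance(exec_time_s):
    """Performance is the reciprocal of execution time."""
    return 1.0 / exec_time_s

def relative_performance(time_p1_s, time_p2_s):
    """How many times faster P1 is than P2 on the same workload."""
    return time_p2_s / time_p1_s

# If P1 runs a workload in 8 s and P2 in 12 s, P1 is 1.5X faster
print(relative_performance(8.0, 12.0))   # 1.5
```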
Designer’s Performance Metrics
• Users and designers measure performance using different metrics
• Designers look at the bottom line of program execution
• To enhance hardware performance, designers focus on reducing the clock cycle time and the number of cycles per program
• Many techniques that decrease the number of clock cycles also increase the clock cycle time or the average number of cycles per instruction (CPI)
Example
A program runs in 10 seconds on computer “A”, which has a 400 MHz clock. We would like to design a faster computer “B” that could run the program in 6 seconds. The designer has determined that a substantial increase in clock speed is possible; however, it would cause computer “B” to require 1.2 times as many clock cycles as computer “A”. What should be the clock rate of computer “B”?
Example (Cont.)
CPU time_A = Clock cycles_A / Clock rate_A
10 s = Clock cycles_A / (400 × 10^6 cycles/s), so Clock cycles_A = 4 × 10^9 cycles
To get the clock rate of the faster computer, we use the same formula:
6 s = (1.2 × Clock cycles_A) / Clock rate_B
Clock rate_B = (1.2 × 4 × 10^9 cycles) / 6 s = 800 MHz
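The same calculation, as a short Python sketch (variable names are mine):

```python
# Computer A: runs the program in 10 s at 400 MHz
time_a_s = 10.0
clock_rate_a_hz = 400e6
cycles_a = time_a_s * clock_rate_a_hz      # 4e9 cycles

# Computer B: needs 1.2x the cycles and must finish in 6 s
cycles_b = 1.2 * cycles_a
time_b_s = 6.0
clock_rate_b_hz = cycles_b / time_b_s
print(clock_rate_b_hz / 1e6)   # 800.0 (MHz)
```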
Calculation of CPU Time
CPU time = Instruction count × CPI × Clock cycle time
Or:
CPU time = (Instruction count × CPI) / Clock rate
CPU Time (Cont.)
• CPU execution time can be measured by running the program
• The clock cycle time is usually published by the manufacturer
• Measuring the CPI and instruction count is not trivial
• Instruction counts can be measured by software profiling, using an architecture simulator, or using hardware counters on some architectures
• The CPI depends on many factors, including processor structure, the memory system, the mix of instruction types, and the implementation of these instructions
• Designers sometimes use the following formula:
CPU clock cycles = Σ (CPIi × Ci), for i = 1 … n
Where:
Ci is the count of instructions of class i executed
CPIi is the average number of cycles per instruction for that instruction class
n is the number of different instruction classes
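The class-weighted cycle formula above can be sketched as (illustrative function names; the example counts below are made up):

```python
def cpu_clock_cycles(class_counts, class_cpis):
    """Sum of CPI_i * C_i over all instruction classes."""
    return sum(c * cpi for c, cpi in zip(class_counts, class_cpis))

def cpu_time_s(class_counts, class_cpis, clock_rate_hz):
    """CPU time = total cycles / clock rate."""
    return cpu_clock_cycles(class_counts, class_cpis) / clock_rate_hz

# Three classes with CPIs 1, 2, 3 and (hypothetical) counts 1, 2, 3
print(cpu_clock_cycles([1, 2, 3], [1, 2, 3]))   # 14
```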
Example – Processor Performance Equation
• Suppose two implementations of the same ISA:
• M/c A: clock cycle time = 1 ns; CPI = 2.0 (for some program)
• M/c B: clock cycle time = 2 ns; CPI = 1.2 (for the same program)
Which machine is faster?
Example – Processor Performance Equation (Cont.)
• Answer:
• Fact: each machine executes the same number of instructions, N
• Number of processor clock cycles per machine:
• M/c A: N × 2.0 = 2N
• M/c B: N × 1.2 = 1.2N
• CPU time for each machine:
• M/c A: 2N × 1 ns = 2N ns
• M/c B: 1.2N × 2 ns = 2.4N ns
• (CPU Perf. A)/(CPU Perf. B) = (CPU time B)/(CPU time A) = (2.4N ns)/(2N ns) = 1.2
• Conclusion: M/c A is 1.2 times faster than M/c B for this program
Comparing Code Segments
A compiler designer is trying to decide between two code sequences for a particular machine. The hardware designers have supplied the following facts:

Instruction class    CPI
A                    1
B                    2
C                    3

For a particular high-level language statement, the compiler writer is considering two code sequences that require the following instruction counts:

Code sequence    Class A    Class B    Class C
1                2          1          2
2                4          1          1

Which code sequence executes the most instructions? Which will be faster? What is the CPI for each sequence?
Comparing Code Segments (Cont.)
• Answer:
• Sequence 1: executes 2 + 1 + 2 = 5 instructions
• Sequence 2: executes 4 + 1 + 1 = 6 instructions
• So Sequence 2 executes the most instructions
Comparing Code Segments (Cont.)
Using the formula CPU clock cycles = Σ (CPIi × Ci):
Sequence 1: CPU clock cycles = (2 × 1) + (1 × 2) + (2 × 3) = 10 cycles
Sequence 2: CPU clock cycles = (4 × 1) + (1 × 2) + (1 × 3) = 9 cycles
• Therefore Sequence 2 is faster, although it executes more instructions
Sequence 1: CPI = 10/5 = 2.0
Sequence 2: CPI = 9/6 = 1.5
• Since Sequence 2 takes fewer overall clock cycles but has more instructions, it must have a lower CPI
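The code-segment comparison above can be reproduced in a few lines of Python (a sketch; the function and variable names are mine):

```python
def cycles(counts, cpis):
    """Total clock cycles: sum of count * CPI per instruction class."""
    return sum(c * cpi for c, cpi in zip(counts, cpis))

cpis = [1, 2, 3]                   # CPI for classes A, B, C
seq1, seq2 = [2, 1, 2], [4, 1, 1]  # per-class instruction counts

print(cycles(seq1, cpis), cycles(seq2, cpis))   # 10 9
print(cycles(seq1, cpis) / sum(seq1))           # 2.0 (CPI of sequence 1)
print(cycles(seq2, cpis) / sum(seq2))           # 1.5 (CPI of sequence 2)
```

Sequence 2 wins on total cycles despite having more instructions, which is exactly why instruction count alone is a poor performance predictor.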
Can Hardware-Independent Metrics Predict Performance?
• The Burroughs B5500 machine was designed specifically for Algol 60 programs
• Although the CDC 6600’s programs are over 3 times as big as those of the B5500, the CDC machine runs them almost 6 times faster
• Code size cannot be used as an indication of performance
Using MIPS as a Performance Metric
• MIPS stands for Millions of Instructions Per Second and is one of the simplest metrics; it is valid only in a limited context
• There are three problems with MIPS:
• MIPS specifies the instruction execution rate but does not take into account the capabilities of the instructions
• A computer does not have a single MIPS rating, as MIPS varies between programs on the same computer
• MIPS can vary inversely with performance (see the next example)
• The use of MIPS is simple but may lead to wrong conclusions
Example – Misleading Results Using MIPS
Consider a machine with the following three instruction classes and CPIs:

Instruction class    CPI
A                    1
B                    2
C                    3

Now suppose we measure the code for the same program from two different compilers and obtain the following data (instruction counts in billions):

Code from       Class A    Class B    Class C
Compiler 1      5          1          1
Compiler 2      10         1          1

Assume that the machine’s clock rate is 500 MHz. Which code sequence will execute faster according to MIPS? According to execution time?
Example – Misleading Results Using MIPS (Cont.)
Answer:
Using the formula CPU clock cycles = Σ (CPIi × Ci):
Sequence 1: CPU clock cycles = (5×1 + 1×2 + 1×3) × 10^9 = 10 × 10^9 cycles
Sequence 2: CPU clock cycles = (10×1 + 1×2 + 1×3) × 10^9 = 15 × 10^9 cycles
Example – Misleading Results Using MIPS (Cont.)
Using the formula Execution time = CPU clock cycles / Clock rate:
Sequence 1: Execution time = (10 × 10^9) / (500 × 10^6) = 20 seconds
Sequence 2: Execution time = (15 × 10^9) / (500 × 10^6) = 30 seconds
Therefore compiler 1 generates a faster program.
Using the formula MIPS = Instruction count / (Execution time × 10^6):
Sequence 1: MIPS = (7 × 10^9) / (20 × 10^6) = 350
Sequence 2: MIPS = (12 × 10^9) / (30 × 10^6) = 400
Although compiler 2 has a higher MIPS rating, the code generated by compiler 1 runs faster.
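The MIPS-vs-execution-time contrast above can be checked with a short Python sketch (function and variable names are mine):

```python
def exec_time_s(counts_billions, cpis, clock_hz):
    """Execution time from per-class instruction counts (in billions)."""
    total_cycles = sum(c * cpi for c, cpi in zip(counts_billions, cpis)) * 1e9
    return total_cycles / clock_hz

def mips(counts_billions, time_s):
    """Millions of instructions per second."""
    return sum(counts_billions) * 1e9 / (time_s * 1e6)

cpis, clock = [1, 2, 3], 500e6
c1, c2 = [5, 1, 1], [10, 1, 1]     # compiler 1 vs. compiler 2
t1, t2 = exec_time_s(c1, cpis, clock), exec_time_s(c2, cpis, clock)
print(t1, t2)                       # 20.0 30.0 (seconds)
print(mips(c1, t1), mips(c2, t2))   # 350.0 400.0
```

Compiler 2 "wins" on MIPS yet loses on execution time, the metric users actually care about.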
Performance Metrics - Summary
• Maximizing performance means minimizing response (execution) time
[Figure: each level of the system stack has its own metric: the user sees application operations per second; the compiler/ISA level is measured in MIPS and MFLOP/s (millions of (FP) operations per second); the datapath and control move megabytes per second; the designer's function units, transistors, wires, and pins run at cycles per second (clock rate)]
* Figure is courtesy of Dave Patterson
Chapter 1: Fundamentals of Computer Design
• Technology trends: culture of tracking, anticipating, and exploiting advances in technology
• Careful, quantitative comparisons:
• Define and quantify power
• Define and quantify dependability
• Define, quantify, and summarize relative performance
• Define and quantify relative cost
Moore’s Law: 2X transistors / “year”
• “Cramming More Components onto Integrated Circuits”, Gordon Moore, Electronics, 1965
• # of transistors per cost-effective integrated circuit doubles every N months (12 ≤ N ≤ 24)
Tracking Technology Performance Trends
• Drill down into 4 technologies: disks, memory, networks, processors
• Compare ~1980 Archaic (Nostalgic) vs. ~2000 Modern (Newfangled)
• Performance milestones in each technology
• Compare bandwidth vs. latency improvements in performance over time
• Bandwidth: number of events per unit time (e.g., Mbits/second over a network, Mbytes/second from disk)
• Latency: elapsed time for a single event (e.g., one-way network delay in microseconds, average disk access time in milliseconds)
Disks: Archaic (Nostalgic) v. Modern (Newfangled)
CDC Wren I, 1983:
• 3600 RPM
• 0.03 GBytes capacity
• Tracks/Inch: 800
• Bits/Inch: 9,550
• Three 5.25” platters
• Bandwidth: 0.6 MBytes/sec
• Latency: 48.3 ms
• Cache: none
Seagate 373453, 2003:
• 15000 RPM (4X)
• 73.4 GBytes (2500X)
• Tracks/Inch: 64,000 (80X)
• Bits/Inch: 533,000 (60X)
• Four 2.5” platters (in a 3.5” form factor)
• Bandwidth: 86 MBytes/sec (140X)
• Latency: 5.7 ms (8X)
• Cache: 8 MBytes
Latency Lags Bandwidth (for last ~20 years)
• Performance milestones:
• Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x latency, 143x bandwidth)
(latency = simple operation w/o contention; BW = best case)
Memory: Archaic (Nostalgic) v. Modern (Newfangled)
1980 DRAM (asynchronous):
• 0.06 Mbits/chip
• 64,000 xtors, 35 mm2
• 16-bit data bus per module, 16 pins/chip
• 13 Mbytes/sec
• Latency: 225 ns
• (no block transfer)
2000 Double Data Rate Synchronous (clocked) DRAM:
• 256.00 Mbits/chip (4000X)
• 256,000,000 xtors, 204 mm2
• 64-bit data bus per DIMM, 66 pins/chip (4X)
• 1600 Mbytes/sec (120X)
• Latency: 52 ns (4X)
• Block transfers (page mode)
Latency Lags Bandwidth (last ~20 years)
• Performance milestones:
• Memory module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x latency, 120x bandwidth)
• Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x)
(latency = simple operation w/o contention; BW = best case)
LANs: Archaic (Nostalgic) v. Modern (Newfangled)
Ethernet 802.3:
• Year of standard: 1978
• 10 Mbits/s link speed
• Latency: 3000 µsec
• Shared media
• Coaxial cable (plastic covering, braided outer conductor, insulator, copper core)
Ethernet 802.3ae:
• Year of standard: 2003
• 10,000 Mbits/s (1000X) link speed
• Latency: 190 µsec (15X)
• Switched media
• Category 5 copper wire: “Cat 5” is 4 twisted pairs in a bundle; each twisted pair is copper, 1 mm thick, twisted to avoid the antenna effect
Latency Lags Bandwidth (last ~20 years)
• Performance milestones:
• Ethernet: 10Mb, 100Mb, 1000Mb, 10000 Mb/s (16x latency, 1000x bandwidth)
• Memory module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x)
• Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x)
(latency = simple operation w/o contention; BW = best case)
CPUs: Archaic (Nostalgic) v. Modern (Newfangled)
1982 Intel 80286:
• 12.5 MHz
• 2 MIPS (peak)
• Latency: 320 ns
• 134,000 xtors, 47 mm2
• 16-bit data bus, 68 pins
• Microcode interpreter, separate FPU chip
• (no caches)
2001 Intel Pentium 4:
• 1500 MHz (120X)
• 4500 MIPS (peak) (2250X)
• Latency: 15 ns (20X)
• 42,000,000 xtors, 217 mm2
• 64-bit data bus, 423 pins
• 3-way superscalar, dynamic translation to RISC, superpipelined (22 stages), out-of-order execution
• On-chip 8KB data cache, 96KB instruction trace cache, 256KB L2 cache
Latency Lags Bandwidth (last ~20 years)
• Performance milestones:
• Processor: ‘286, ‘386, ‘486, Pentium, Pentium Pro, Pentium 4 (21x latency, 2250x bandwidth)
• Ethernet: 10Mb, 100Mb, 1000Mb, 10000 Mb/s (16x, 1000x)
• Memory module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x)
• Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x)
• CPU high, memory low (“Memory Wall”)
Rule of Thumb for Latency Lagging BW
• In the time that bandwidth doubles, latency improves by no more than a factor of 1.2 to 1.4 (and capacity improves faster than bandwidth)
• Stated alternatively: bandwidth improves by more than the square of the improvement in latency
Computers in the News
• “Intel loses market share in own backyard,” by Tom Krazit, CNET News.com, 1/18/2006
• “Intel's share of the U.S. retail PC market fell by 11 percentage points, from 64.4 percent in the fourth quarter of 2004 to 53.3 percent. … Current Analysis' market share numbers measure U.S. retail sales only, and therefore exclude figures from Dell, which uses its Web site to sell directly to consumers. … AMD chips were found in 52.5 percent of desktop PCs sold in U.S. retail stores during that period.”
• We will technically compare AMD Opteron/Athlon vs. Intel Pentium 4 later in this course.
6 Reasons Latency Lags Bandwidth
1. Moore’s Law helps BW more than latency
• Faster transistors, more transistors, and more pins help bandwidth:
• MPU transistors: 0.130 vs. 42 M xtors (300X)
• DRAM transistors: 0.064 vs. 256 M xtors (4000X)
• MPU pins: 68 vs. 423 pins (6X)
• DRAM pins: 16 vs. 66 pins (4X)
• Smaller, faster transistors, but they communicate over (relatively) longer lines, which limits latency:
• Feature size: 1.5 to 3 vs. 0.18 micron (8X, 17X)
• MPU die size: 47 vs. 217 mm2 (ratio sqrt: 2X)
• DRAM die size: 35 vs. 204 mm2 (ratio sqrt: 2X)
6 Reasons Latency Lags Bandwidth (cont’d)
2. Distance limits latency
• Size of DRAM block → long bit and word lines → most of DRAM access time
• Speed of light, and computers spread across a network
• 1 & 2 may explain linear latency vs. square BW
3. Bandwidth is easier to sell (“bigger = better”)
• E.g., 10 Gbits/s Ethernet (“10 Gig”) vs. 10 µsec latency Ethernet
• 4400 MB/s DIMM (“PC4400”) vs. 50 ns latency
• Even if it is just marketing, customers are now trained
• Since bandwidth sells, more resources are thrown at bandwidth, which further tips the balance
6 Reasons Latency Lags Bandwidth (cont’d)
4. Latency helps BW, but not vice versa
• Spinning a disk faster improves both bandwidth and rotational latency:
• 3600 RPM → 15000 RPM = 4.2X
• Average rotational latency: 8.3 ms → 2.0 ms
• Other things being equal, this also helps BW by 4.2X
• Lower DRAM latency → more accesses/second (higher bandwidth)
• Higher linear density helps disk BW (and capacity), but not disk latency:
• 9,550 BPI → 533,000 BPI → 60X in BW
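The rotational-latency numbers above can be checked with a short sketch (average rotational latency is half a revolution; the function name is mine):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return 0.5 * 60_000 / rpm   # 60,000 ms per minute

print(round(avg_rotational_latency_ms(3600), 1))    # 8.3
print(round(avg_rotational_latency_ms(15000), 1))   # 2.0
print(round(15000 / 3600, 1))                       # 4.2
```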
6 Reasons Latency Lags Bandwidth (cont’d)
5. Bandwidth hurts latency
• Queues help bandwidth but hurt latency (queuing theory)
• Adding chips to widen a memory module increases bandwidth, but higher fan-out on address lines may increase latency
6. Operating system overhead hurts latency more than bandwidth
• Long messages amortize overhead; overhead is a bigger part of short messages
Summary of Technology Trends
• For disks, LANs, memory, and microprocessors, bandwidth improves by more than the square of the latency improvement
• In the time that bandwidth doubles, latency improves by no more than 1.2X to 1.4X
• The lag is probably even larger in real systems, as bandwidth gains are multiplied by replicated components:
• Multiple processors in a cluster or even in a chip
• Multiple disks in a disk array
• Multiple memory modules in a large memory
• Simultaneous communication in switched LANs
• HW and SW developers should innovate assuming latency lags bandwidth:
• If everything improves at the same rate, then nothing really changes
• When rates vary, real innovation is required