CSCE 930 Advanced Computer Architecture Introduction Adapted from Professor David Patterson & David Culler Electrical Engineering and Computer Sciences University of California, Berkeley
Outline • Computer Science at a Crossroads: Parallelism • Architecture: multi-core and many-cores • Program: multi-threading • Parallel Architecture • What is Parallel Architecture? • Why Parallel Architecture? • Evolution and Convergence of Parallel Architectures • Fundamental Design Issues • Parallel Programs • Why bother with programs? • Important for whom? • Memory & Storage Subsystem Architectures CSCE930-Advanced Computer Architecture, Introduction
Crossroads: Conventional Wisdom in Comp. Arch • Old Conventional Wisdom: Power is free, Transistors expensive • New Conventional Wisdom: “Power wall” Power expensive, Xtors free (Can put more on chip than can afford to turn on) • Old CW: Sufficiently increasing Instruction Level Parallelism via compilers, innovation (Out-of-order, speculation, VLIW, …) • New CW: “ILP wall” law of diminishing returns on more HW for ILP • Old CW: Multiplies are slow, Memory access is fast • New CW: “Memory wall” Memory slow, multiplies fast (200 clock cycles to DRAM memory, 4 clocks for multiply) • Old CW: Uniprocessor performance 2X / 1.5 yrs • New CW: Power Wall + ILP Wall + Memory Wall = Brick Wall • Uniprocessor performance now 2X / 5(?) yrs • Sea change in chip design: multiple “cores” (2X processors per chip / ~2 years) • More, simpler processors are more power efficient CSCE930-Advanced Computer Architecture, Introduction
Crossroads: Uniprocessor Performance From Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th edition, October, 2006 • VAX : 25%/year 1978 to 1986 • RISC + x86: 52%/year 1986 to 2002 • RISC + x86: ??%/year 2002 to present CSCE930-Advanced Computer Architecture, Introduction
Sea Change in Chip Design • Intel 4004 (1971): 4-bit processor,2312 transistors, 0.4 MHz, 10 micron PMOS, 11 mm2 chip • RISC II (1983): 32-bit, 5 stage pipeline, 40,760 transistors, 3 MHz, 3 micron NMOS, 60 mm2 chip • 125 mm2 chip, 0.065 micron CMOS = 2312 RISC II+FPU+Icache+Dcache • RISC II shrinks to ~ 0.02 mm2 at 65 nm • Caches via DRAM or 1 transistor SRAM (www.t-ram.com) ? • Proximity Communication via capacitive coupling at > 1 TB/s ?(Ivan Sutherland @ Sun / Berkeley) • Processor is the new transistor? CSCE930-Advanced Computer Architecture, Introduction
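As a sanity check on the shrink claim above, here is a minimal sketch (not from the slides) that scales the RISC II die area by the square of the feature-size ratio; the ideal-scaling assumption is mine, the areas and feature sizes are the slide's.

```python
# Back-of-envelope check of the die-shrink claim, assuming ideal area scaling
# with the square of the feature size (real layouts scale less cleanly).

risc2_area_mm2 = 60.0       # RISC II die area at 3 micron NMOS (from the slide)
old_feature_um = 3.0
new_feature_um = 0.065      # 65 nm CMOS

scale = (new_feature_um / old_feature_um) ** 2
shrunk = risc2_area_mm2 * scale
print(f"RISC II core at 65 nm: ~{shrunk:.3f} mm^2")   # ~0.03 mm^2, near the ~0.02 mm^2 quoted

# The slide's 2312 figure counts a larger unit (RISC II + FPU + I-cache + D-cache),
# so dividing the 125 mm^2 die by the bare core area overestimates the count:
print(f"Bare cores per 125 mm^2 die: ~{125.0 / shrunk:.0f}")
```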
Déjà vu all over again? • Multiprocessors imminent in 1970s, ‘80s, ‘90s, … • “… today’s processors … are nearing an impasse as technologies approach the speed of light …” David Mitchell, The Transputer: The Time Is Now (1989) • Transputer was premature ⇒ Custom multiprocessors strove to lead uniprocessors ⇒ Procrastination rewarded: 2X seq. perf. / 1.5 years • “We are dedicating all of our future product development to multicore designs. … This is a sea change in computing” Paul Otellini, President, Intel (2004) • Difference is all microprocessor companies switch to multiprocessors (AMD, Intel, IBM, Sun; all new Apples have 2 CPUs) ⇒ Procrastination penalized: 2X sequential perf. / 5 yrs ⇒ Biggest programming challenge: going from 1 to 2 CPUs CSCE930-Advanced Computer Architecture, Introduction
Problems with Sea Change • Algorithms, Programming Languages, Compilers, Operating Systems, Architectures, Libraries, … not ready to supply Thread Level Parallelism or Data Level Parallelism for 1000 CPUs / chip • Architectures not ready for 1000 CPUs / chip • Unlike Instruction Level Parallelism, cannot be solved just by computer architects and compiler writers alone, but also cannot be solved without participation of computer architects • This course explores ILP (Instruction Level Parallelism) and its shift to Thread Level Parallelism / Data Level Parallelism CSCE930-Advanced Computer Architecture, Introduction
Outline • Computer Science at a Crossroads: Parallelism • Architecture: multi-core and many-cores • Program: multi-threading • Parallel Architecture • What is Parallel Architecture? • Why Parallel Architecture? • Evolution and Convergence of Parallel Architectures • Fundamental Design Issues • Parallel Programs • Why bother with programs? • Important for whom? • Memory & Storage Subsystem Architectures CSCE930-Advanced Computer Architecture, Introduction
What is Parallel Architecture? • A parallel computer is a collection of processing elements that cooperate to solve large problems fast • Some broad issues: • Resource Allocation: • how large a collection? • how powerful are the elements? • how much memory? • Data access, Communication and Synchronization • how do the elements cooperate and communicate? • how are data transmitted between processors? • what are the abstractions and primitives for cooperation? • Performance and Scalability • how does it all translate into performance? • how does it scale? CSCE930-Advanced Computer Architecture, Introduction
Why Study Parallel Architecture? • Role of a computer architect: • To design and engineer the various levels of a computer system to maximize performance and programmability within limits of technology and cost. • Parallelism: • Provides alternative to faster clock for performance • Applies at all levels of system design • Is a fascinating perspective from which to view architecture • Is increasingly central in information processing CSCE930-Advanced Computer Architecture, Introduction
Why Study it Today? • History: diverse and innovative organizational structures, often tied to novel programming models • Rapidly maturing under strong technological constraints • The “killer micro” is ubiquitous • Laptops and supercomputers are fundamentally similar! • Technological trends cause diverse approaches to converge • Technological trends make parallel computing inevitable • In the mainstream with the reality of multi-cores and many-cores • Need to understand fundamental principles and design tradeoffs, not just taxonomies • Naming, Ordering, Replication, Communication performance CSCE930-Advanced Computer Architecture, Introduction
Inevitability of Parallel Computing • Application demands: Our insatiable need for computing cycles • Scientific computing: VR simulations in Biology, Chemistry, Physics, ... • General-purpose computing: Video, Graphics, CAD, Databases, AR, VI, TP... • Technology Trends • Number of cores per chip growing rapidly (the new Moore’s Law) • Clock rates expected to go up only slowly (technology wall) • Architecture Trends • Instruction-level parallelism valuable but limited • Coarser-level parallelism, or thread-level parallelism, the most viable approach • Economics • Current trends: • Today’s microprocessors are multiprocessors CSCE930-Advanced Computer Architecture, Introduction
Application Trends • Demand for cycles fuels advances in hardware, and vice-versa • This cycle drives exponential increase in microprocessor performance • Drives parallel architecture harder: most demanding applications • Range of performance demands • Need range of system performance with progressively increasing cost • Platform pyramid • Goal of applications in using parallel machines: Speedup • Speedup(p processors) = Performance(p processors) / Performance(1 processor) • For a fixed problem size (input data set), performance = 1/time • Speedup_fixed problem(p processors) = Time(1 processor) / Time(p processors) CSCE930-Advanced Computer Architecture, Introduction
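To make the speedup definitions concrete, a minimal sketch follows; the timing numbers are made up for illustration and are not from the slides.

```python
# Minimal sketch of the fixed-problem speedup definition above (illustrative timings).

def speedup_fixed_problem(time_1proc: float, time_pproc: float) -> float:
    """Speedup(p) = Time(1 processor) / Time(p processors) for a fixed problem size."""
    return time_1proc / time_pproc

t1, t16 = 120.0, 9.5        # hypothetical seconds on 1 and on 16 processors
s = speedup_fixed_problem(t1, t16)
print(f"Speedup on 16 processors: {s:.1f}x (parallel efficiency {s / 16:.0%})")
```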
Scientific Computing Demand CSCE930-Advanced Computer Architecture, Introduction
Engineering Computing Demand • Large parallel machines a mainstay in many industries • Petroleum (reservoir analysis) • Automotive (crash simulation, drag analysis, combustion efficiency), • Aeronautics (airflow analysis, engine efficiency, structural mechanics, electromagnetism), • Computer-aided design • Pharmaceuticals (molecular modeling) • Visualization • In all of the above • Entertainment (3D films like Avatar & 3D games ) • Architecture (walk-throughs and rendering) • Virtual Reality/Immersion (museums, teleporting, etc) • Financial modeling (yield and derivative analysis) • Etc. CSCE930-Advanced Computer Architecture, Introduction
Learning Curve for Parallel Applications • AMBER molecular dynamics simulation program • Starting point was vector code for Cray-1 • 145 MFLOP on Cray90, 406 for final version on 128-processor Paragon, 891 on 128-processor Cray T3D CSCE930-Advanced Computer Architecture, Introduction
Commercial Computing • Also relies on parallelism for high end • Scale not so large, but use much more wide-spread • Computational power determines scale of business that can be handled • Databases, online-transaction processing, decision support, data mining, data warehousing ... • TPC benchmarks (TPC-C order entry, TPC-D decision support) • Explicit scaling criteria provided • Size of enterprise scales with size of system • Problem size no longer fixed as p increases, so throughput is used as a performance measure (transactions per minute or tpm) CSCE930-Advanced Computer Architecture, Introduction
Similar Story for Storage • Divergence between memory capacity and speed more pronounced • Capacity increased by 1000x from 1980-95, speed only 2x • Gigabit DRAM by c. 2000, but gap with processor speed much greater • Larger memories are slower, while processors get faster • Need to transfer more data in parallel • Need deeper cache hierarchies • How to organize caches? • Parallelism increases effective size of each level of hierarchy, without increasing access time • Parallelism and locality within memory systems too • New designs fetch many bits within memory chip; follow with fast pipelined transfer across narrower interface • Buffer caches most recently accessed data • Disks too: Parallel disks plus caching CSCE930-Advanced Computer Architecture, Introduction
Real-world applications demand high-performing and reliable storage • High-performance computing: >100 TB • Medical imaging: >100 TB • Digital body: 1 TB/body • NASA (H.B Glass): 5 GB/day, >1 PB • GIS: >1 PB • Ocean resource data: >1 PB • Google, Yahoo, …: >1 PB/year • Oil prospecting: >1 PB • (1 PB = 1000 TB = 10^15 bytes, equal to the capacity of 10,000 100-GB disks)
Technology Trends: Moore’s Law: 2X transistors / “year” → 2X cores / “year” • “Cramming More Components onto Integrated Circuits” • Gordon Moore, Electronics, 1965 • # of transistors per cost-effective integrated circuit doubles every N months (12 ≤ N ≤ 24) CSCE930-Advanced Computer Architecture, Introduction
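A small sketch of the doubling rule quoted above; the starting transistor count and the 30-year horizon are illustrative assumptions, not figures from the slide.

```python
# Compound growth implied by "transistors double every N months" for 12 <= N <= 24.

def transistors_after(years: float, start: float, doubling_months: float) -> float:
    return start * 2 ** (12 * years / doubling_months)

start = 2_300                      # roughly the Intel 4004 (1971)
for n in (12, 18, 24):
    print(f"N = {n:2d} months: ~{transistors_after(30, start, n):.2e} transistors after 30 years")
```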
Tracking Technology Performance Trends • Drill down into 4 technologies: • Disks, • Memory, • Network, • Processors • Compare ~1980 Archaic (Nostalgic) vs. ~2000 Modern (Newfangled) • Performance Milestones in each technology • Compare for Bandwidth vs. Latency improvements in performance over time • Bandwidth: number of events per unit time • E.g., M bits / second over network, M bytes / second from disk • Latency: elapsed time for a single event • E.g., one-way network delay in microseconds, average disk access time in milliseconds CSCE930-Advanced Computer Architecture, Introduction
Disks: Archaic (Nostalgic) v. Modern (Newfangled) • CDC Wren I, 1983 • 3600 RPM • 0.03 GBytes capacity • Tracks/Inch: 800 • Bits/Inch: 9550 • Three 5.25” platters • Bandwidth: 0.6 MBytes/sec • Latency: 48.3 ms • Cache: none • Seagate 373453, 2003 • 15000 RPM (4X) • 73.4 GBytes (2500X) • Tracks/Inch: 64,000 (80X) • Bits/Inch: 533,000 (60X) • Four 2.5” platters (in 3.5” form factor) • Bandwidth: 86 MBytes/sec (140X) • Latency: 5.7 ms (8X) • Cache: 8 MBytes CSCE930-Advanced Computer Architecture, Introduction
Performance Milestones Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x) Latency Lags Bandwidth (for last ~20 years) (latency = simple operation w/o contention; BW = best-case) CSCE930-Advanced Computer Architecture, Introduction
Memory: Archaic (Nostalgic) v. Modern (Newfangled) • 1980 DRAM (asynchronous) • 0.06 Mbits/chip • 64,000 xtors, 35 mm2 • 16-bit data bus per module, 16 pins/chip • 13 MBytes/sec • Latency: 225 ns • (no block transfer) • 2000 Double Data Rate Synchr. (clocked) DRAM • 256.00 Mbits/chip (4000X) • 256,000,000 xtors, 204 mm2 • 64-bit data bus per DIMM, 66 pins/chip (4X) • 1600 MBytes/sec (120X) • Latency: 52 ns (4X) • Block transfers (page mode) CSCE930-Advanced Computer Architecture, Introduction
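The bandwidth figures above can be reproduced from bus width and transfer rate; the clock/transfer rates below are assumptions chosen to match the slide, which gives only the resulting bandwidths.

```python
# Peak bandwidth = bus width (in bytes) x transfers per second.

def peak_mbytes_per_sec(bus_bits: int, transfers_per_sec: float) -> float:
    return bus_bits / 8 * transfers_per_sec / 1e6

# 1980 asynchronous DRAM module: 16-bit bus at ~6.5 M transfers/s -> ~13 MBytes/sec
print(peak_mbytes_per_sec(16, 6.5e6))
# 2000 DDR SDRAM DIMM: 64-bit bus, assumed 100 MHz clock x 2 transfers/clock -> 1600 MBytes/sec
print(peak_mbytes_per_sec(64, 100e6 * 2))
```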
Performance Milestones Memory Module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x) Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x) Latency Lags Bandwidth (last ~20 years) (latency = simple operation w/o contention; BW = best-case) CSCE930-Advanced Computer Architecture, Introduction
LANs: Archaic (Nostalgic) v. Modern (Newfangled) • Ethernet 802.3 • Year of Standard: 1978 • 10 Mbits/s link speed • Latency: 3000 µsec • Shared media • Coaxial cable (plastic covering, braided outer conductor, insulator, copper core) • Ethernet 802.3ae • Year of Standard: 2003 • 10,000 Mbits/s (1000X) link speed • Latency: 190 µsec (15X) • Switched media • Category 5 copper wire (“Cat 5” is 4 twisted pairs in a bundle; twisted pair: copper, 1 mm thick, twisted to avoid antenna effect) CSCE930-Advanced Computer Architecture, Introduction
Performance Milestones Ethernet: 10Mb, 100Mb, 1000Mb, 10000 Mb/s (16x, 1000x) Memory Module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x) Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x) Latency Lags Bandwidth (last ~20 years) (latency = simple operation w/o contention; BW = best-case) CSCE930-Advanced Computer Architecture, Introduction
CPUs: Archaic (Nostalgic) v. Modern (Newfangled) • 1982 Intel 80286 • 12.5 MHz • 2 MIPS (peak) • Latency 320 ns • 134,000 xtors, 47 mm2 • 16-bit data bus, 68 pins • Microcode interpreter, separate FPU chip • (no caches) • 2001 Intel Pentium 4 • 1500 MHz (120X) • 4500 MIPS (peak) (2250X) • Latency 15 ns (20X) • 42,000,000 xtors, 217 mm2 • 64-bit data bus, 423 pins • 3-way superscalar, Dynamic translate to RISC, Superpipelined (22 stage), Out-of-Order execution • On-chip 8KB Data cache, 96KB Instr. Trace cache, 256KB L2 cache CSCE930-Advanced Computer Architecture, Introduction
Performance Milestones Processor: ‘286, ‘386, ‘486, Pentium, Pentium Pro, Pentium 4 (21x, 2250x) Ethernet: 10Mb, 100Mb, 1000Mb, 10000 Mb/s (16x, 1000x) Memory Module: 16-bit plain DRAM, Page Mode DRAM, 32b, 64b, SDRAM, DDR SDRAM (4x, 120x) Disk: 3600, 5400, 7200, 10000, 15000 RPM (8x, 143x) CPU high, Memory low (“Memory Wall”) Latency Lags Bandwidth (last ~20 years) CSCE930-Advanced Computer Architecture, Introduction
Rule of Thumb for Latency Lagging BW • In the time that bandwidth doubles, latency improves by no more than a factor of 1.2 to 1.4 (and capacity improves faster than bandwidth) • Stated alternatively: Bandwidth improves by more than the square of the improvement in Latency CSCE930-Advanced Computer Architecture, Introduction
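As a quick check of this rule of thumb, the sketch below compares the bandwidth and latency ratios collected in the preceding milestone slides; the ratios are the slides', the comparison itself is mine.

```python
# Bandwidth gain vs. (latency gain)^2 for the four technologies tracked above.

milestones = {           # (bandwidth improvement, latency improvement), from the slides
    "Disk":      (143, 8),
    "Memory":    (120, 4),
    "Ethernet":  (1000, 16),
    "Processor": (2250, 21),
}
for name, (bw, lat) in milestones.items():
    print(f"{name:9s}: BW {bw:5d}x  vs  latency^2 = {lat**2:4d}x  "
          f"-> {'BW improved more' if bw > lat**2 else 'latency improved more'}")
```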
6 Reasons Latency Lags Bandwidth 1. Moore’s Law helps BW more than latency • Faster transistors, more transistors, more pins help Bandwidth • MPU Transistors: 0.130 vs. 42 M xtors (300X) • DRAM Transistors: 0.064 vs. 256 M xtors (4000X) • MPU Pins: 68 vs. 423 pins (6X) • DRAM Pins: 16 vs. 66 pins (4X) • Smaller, faster transistors but communicate over (relatively) longer lines: limits latency • Feature size: 1.5 to 3 vs. 0.18 micron (8X, 17X) • MPU Die Size: 35 vs. 204 mm2 (ratio sqrt 2X) • DRAM Die Size: 47 vs. 217 mm2 (ratio sqrt 2X) CSCE930-Advanced Computer Architecture, Introduction
6 Reasons Latency Lags Bandwidth (cont’d) 2. Distance limits latency • Size of DRAM block ⇒ long bit and word lines ⇒ most of DRAM access time • Speed of light and computers on network • 1. & 2. explain linear latency vs. square BW? 3. Bandwidth easier to sell (“bigger=better”) • E.g., 10 Gbits/s Ethernet (“10 Gig”) vs. 10 µsec latency Ethernet • 4400 MB/s DIMM (“PC4400”) vs. 50 ns latency • Even if just marketing, customers now trained • Since bandwidth sells, more resources thrown at bandwidth, which further tips the balance CSCE930-Advanced Computer Architecture, Introduction
6 Reasons Latency Lags Bandwidth (cont’d) 4. Latency helps BW, but not vice versa • Spinning disk faster improves both bandwidth and rotational latency • 3600 RPM ⇒ 15000 RPM = 4.2X • Average rotational latency: 8.3 ms ⇒ 2.0 ms • Other things being equal, also helps BW by 4.2X • Lower DRAM latency ⇒ more accesses/second (higher bandwidth) • Higher linear density helps disk BW (and capacity), but not disk latency • 9,550 BPI ⇒ 533,000 BPI ⇒ 60X in BW CSCE930-Advanced Computer Architecture, Introduction
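A worked version of the rotational-latency arithmetic above (average rotational latency is half a revolution); a minimal sketch, not from the slides.

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    # Average rotational latency = time for half a revolution, in milliseconds.
    return 0.5 * 60_000 / rpm

print(f"{avg_rotational_latency_ms(3600):.1f} ms at 3600 RPM")     # ~8.3 ms
print(f"{avg_rotational_latency_ms(15000):.1f} ms at 15000 RPM")   # ~2.0 ms
print(f"{15000 / 3600:.1f}x improvement in both latency and transfer rate")
```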
6 Reasons Latency Lags Bandwidth (cont’d) 5. Bandwidth hurts latency • Queues help Bandwidth, hurt Latency (Queuing Theory) • Adding chips to widen a memory module increases Bandwidth but higher fan-out on address lines may increase Latency 6. Operating System overhead hurts Latency more than Bandwidth • Long messages amortize overhead; overhead bigger part of short messages CSCE930-Advanced Computer Architecture, Introduction
Summary of Technology Trends • For disk, LAN, memory, and microprocessor, bandwidth improves by at least the square of the latency improvement • In the time that bandwidth doubles, latency improves by no more than 1.2X to 1.4X • Lag probably even larger in real systems, as bandwidth gains multiplied by replicated components • Multiple processors in a cluster or even in a chip • Multiple disks in a disk array • Multiple memory modules in a large memory • Simultaneous communication in switched LAN • HW and SW developers should innovate assuming Latency Lags Bandwidth • If everything improves at the same rate, then nothing really changes • When rates vary, require real innovation CSCE930-Advanced Computer Architecture, Introduction
Architectural Trends • Architecture translates technology’s gifts to performance and capability • Resolves the tradeoff between parallelism and locality • Current microprocessor: 1/4 compute, 1/2 cache, 1/4 off-chip connect • Tradeoffs may change with scale and technology advances • Four generations of architectural history: tube, transistor, IC, VLSI • Here focus only on VLSI generation • Greatest delineation in VLSI has been in type of parallelism exploited CSCE930-Advanced Computer Architecture, Introduction
Architectural Trends • Greatest trend in VLSI generation is increase in parallelism • Up to 1985: bit level parallelism: 4-bit -> 8 bit -> 16-bit • slows after 32 bit • adoption of 64-bit now under way, 128-bit far (not performance issue) • great inflection point when 32-bit micro and cache fit on a chip • Mid 80s to mid 90s: instruction level parallelism • pipelining and simple instruction sets, + compiler advances (RISC) • on-chip caches and functional units => superscalar execution • greater sophistication: out of order execution, speculation, prediction • to deal with control transfer and latency problems • Next step: thread level parallelism CSCE930-Advanced Computer Architecture, Introduction
Architectural Trends: ILP • Reported speedups for superscalar processors • Horst, Harris, and Jardine [1990] ...................... 1.37 • Wang and Wu [1988] .......................................... 1.70 • Smith, Johnson, and Horowitz [1989] .............. 2.30 • Murakami et al. [1989] ........................................ 2.55 • Chang et al. [1991] ............................................. 2.90 • Jouppi and Wall [1989] ...................................... 3.20 • Lee, Kwok, and Briggs [1991] ........................... 3.50 • Wall [1991] .......................................................... 5 • Melvin and Patt [1991] ....................................... 8 • Butler et al. [1991] ............................................. 17+ • Large variance due to difference in • application domain investigated (numerical versus non-numerical) • capabilities of processor modeled CSCE930-Advanced Computer Architecture, Introduction
ILP Ideal Potential • Infinite resources and fetch bandwidth, perfect branch prediction and renaming • real caches and non-zero miss latencies CSCE930-Advanced Computer Architecture, Introduction
Results of ILP Studies • Concentrate on parallelism for 4-issue machines • Realistic studies show only 2-fold speedup • Recent studies show that more ILP needs to look across threads CSCE930-Advanced Computer Architecture, Introduction
Architectural Trends: Bus-based MPs • Micro on a chip makes it natural to connect many to shared memory • dominates server and enterprise market, moving down to desktop • Faster processors began to saturate bus, then bus technology advanced • today, range of sizes for bus-based systems, desktop to large servers • (Figure: No. of processors in fully configured commercial shared-memory systems) CSCE930-Advanced Computer Architecture, Introduction
Economics • Commodity microprocessors not only fast but CHEAP • Development cost is tens of millions of dollars (5-100 typical) • BUT, many more are sold compared to supercomputers • Crucial to take advantage of the investment, and use the commodity building block • Exotic parallel architectures no more than special-purpose • Multiprocessors being pushed by software vendors (e.g. database) as well as hardware vendors • Standardization by Intel makes small, bus-based SMPs commodity • Desktop: few smaller processors versus one larger one? • Multiprocessor on a chip CSCE930-Advanced Computer Architecture, Introduction
Consider Scientific Supercomputing • Proving ground and driver for innovative architecture and techniques • Market smaller relative to commercial as MPs become mainstream • Dominated by vector machines starting in 70s • Microprocessors have made huge gains in floating-point performance • high clock rates • pipelined floating point units (e.g., multiply-add every cycle) • instruction-level parallelism • effective use of caches (e.g., automatic blocking) • Plus economics • Large-scale multiprocessors replace vector supercomputers • Most top-performing machines on Top-500 list are multiprocessors CSCE930-Advanced Computer Architecture, Introduction
Summary: Why Parallel Architecture? • Increasingly attractive • Economics, technology, architecture, application demand • Increasingly central and mainstream • Parallelism exploited at many levels • Instruction-level parallelism • Thread-level parallelism (multi-cores and many-cores) • Application/Node-level parallelism (“MPPs”, clusters, grids, clouds) • Focus of this class: thread-level parallelism • Same story from memory/storage system perspective but with a “twist” (data-intensive computing, etc) • Wide range of parallel architectures make sense • Different cost, performance and scalability CSCE930-Advanced Computer Architecture, Introduction
Convergence of Parallel Architectures CSCE930-Advanced Computer Architecture, Introduction
History • Historically, parallel architectures tied to programming models • Divergent architectures, with no predictable pattern of growth (application software and system software built on top of competing architectures: Systolic Arrays, SIMD, Message Passing, Dataflow, Shared Memory) • Uncertainty of direction paralyzed parallel software development! CSCE930-Advanced Computer Architecture, Introduction
Today • Extension of “computer architecture” to support communication and cooperation • OLD: Instruction Set Architecture • NEW: Communication Architecture • Defines • Critical abstractions, boundaries, and primitives (interfaces) • Organizational structures that implement interfaces (hw or sw) • Compilers, libraries and OS are important bridges today CSCE930-Advanced Computer Architecture, Introduction
Modern Layered Framework • Parallel applications: CAD, Database, Scientific modeling • Programming models: Multiprogramming, Shared address, Message passing, Data parallel • Compilation or library over the communication abstraction (user/system boundary) • Operating systems support (hardware/software boundary) • Communication hardware • Physical communication medium CSCE930-Advanced Computer Architecture, Introduction
Programming Model • What programmer uses in coding applications • Specifies communication and synchronization • Examples: • Multiprogramming: no communication or synch. at program level • Shared address space: like bulletin board • Message passing: like letters or phone calls, explicit point to point • Data parallel: more regimented, global actions on data • Implemented with shared address space or message passing CSCE930-Advanced Computer Architecture, Introduction
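A minimal sketch contrasting two of these programming models using standard Python facilities: threads with a lock stand in for a shared address space (“bulletin board”), and a queue carries explicit point-to-point messages. It is illustrative only, not a real parallel workload or the course's reference code.

```python
import threading
import queue

# Shared address space: threads cooperate through a shared variable,
# synchronized with a lock.
total = 0
lock = threading.Lock()

def shared_worker(chunk):
    global total
    for x in chunk:
        with lock:
            total += x

# Message passing: each worker sends an explicit message (its partial sum) to a mailbox.
mailbox = queue.Queue()

def message_worker(chunk):
    mailbox.put(sum(chunk))

data = [list(range(i, i + 100)) for i in range(0, 400, 100)]
threads = [threading.Thread(target=shared_worker, args=(c,)) for c in data]
threads += [threading.Thread(target=message_worker, args=(c,)) for c in data]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)                               # sum computed via the shared address space
print(sum(mailbox.get() for _ in data))    # same sum computed via message passing
```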