This presentation outlines the collaborative research efforts between IBM and UC Berkeley over the past three decades, focusing on topics such as relational databases, reduced instruction set computers, redundant arrays of inexpensive disks, and future research trends. It delves into key projects like the Interactive Graphics & Retrieval System (INGRES) and the development of RAID (Redundant Array of Inexpensive Disks). The presentation also discusses recent projects in autonomic computing and recovery-oriented computing, emphasizing the importance of long-term research and adapting to the evolving technological landscape. The future outlook includes exploring the push of technology, customer demands, and potential funding models for efficient research endeavors.
±30 Years of IBM / U.C. Berkeley Synergy in Research • Dave Patterson, October 2005 • Pardee Professor of Computer Science, UC Berkeley • President, Association for Computing Machinery
Outline • Last 30 years of IBM/UC Berkeley research synergy • Relational Data Bases • Reduced Instruction Set Computers • Redundant Arrays of Inexpensive Disks • Next 30 years • Topics: Technology push, Customer pull • New modes of pursuing research in the 21st century
“It's slow, but it gets there.” Interactive Graphics & Retrieval System (INGRES) 1973-79 • Codd publishes the relational model in 1970 + debate • Inspired System R at IBM and INGRES at Berkeley • System R led to SQL • INGRES + System R people provided technical leadership of many data base companies • $15B/year industry in 2005 • 1988 ACM Software System Award for INGRES & System R System R: Chamberlin, Astrahan, Blasgen, Gray*, King, Lindsay*, Lorie, Mehl, Price, Putzolu, Selinger, Schkolnick*, Slutz, Traiger*, Wade, Yost UC: Mike Stonebraker, Gene Wong, Eric Allman, Bob Epstein*, Paula Hawthorn*, Jerry Held*, Carol Youseffi*, + 26 others *UC Berkeley PhD
RISC: Reduced Instruction Set Computers ’80-’85 • John Cocke: compiler-oriented, pipelined architecture; 24-bit ECL minicomputer, PL.8 compiler • Mead/Conway VLSI + VAX ⇒ RISC I & II at UC: 32-bit microprocessor for C, Unix • Stanford MIPS: 32-bit microprocessor for Pascal • Commercial CPUs: ARM, Power, MIPS, SPARC, … • RISC sales 2005 ~ 10⁹ embedded RISC MPUs IBM: John Cocke, Fran Allen, George Radin, Mark Auslander, … UC: David Patterson, Carlo Sequin, + ~16 students Stanford: John Hennessy + ~8 students
Close Up: Porsche on RISC chip • Used car analogies to evangelize RISC (sports car) vs. CISC (Cadillac)
Redundant Array of Inexpensive Disks (1987-93) • “Use PC disks to build fast, reliable I/O to pace RISC?” • RAID I • Sun 4/280, 128 MB of DRAM • 4 dual-string SCSI controllers • 28 5.25” 340 MB disks + SW • RAID II • Gbit/s net + 144 3.5” 320 MB disks • 1st Network Attached Storage • Hagar (IBM Almaden Research) • Non-volatile caching, fault tolerance, distributed spares, EVENODD codes, … • IBM AS/400: RAID 5 patent (see the parity sketch below) • “Case for RAID” paper spread like a virus • Products from IBM, Compaq, EMC, … • Today RAID ~ $15B industry; 80% of server disks in RAID IBM: Mike Mitoma, Jai Menon, Jim Brady, … UC: Randy Katz, David Patterson, Peter Chen, Ann Chevernak, Garth Gibson, Ed Lee, Ethan Miller, …
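The mechanism behind RAID 5-style redundancy is simple: the parity block of a stripe is the XOR of its data blocks, so any single lost block can be rebuilt by XORing the survivors with the parity. A minimal Python sketch of that idea; the three-disk stripe and block contents are invented for illustration, not the Berkeley prototypes' code:

```python
# Sketch of RAID 5-style parity: rebuild any one lost block from the rest.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Hypothetical stripe: one block per "data disk", plus one parity block.
data = [b"disk0 block", b"disk1 block", b"disk2 block"]
parity = xor_blocks(data)

# If any single disk fails, XOR the surviving blocks with the parity to rebuild it.
lost = 1
survivors = [d for i, d in enumerate(data) if i != lost]
recovered = xor_blocks(survivors + [parity])
assert recovered == data[lost]
```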
More Recent Projects (too soon to tell) • Autonomic Computing at IBM • Automatically managed data centers • Self-* (optimize, repair, protect, …) • Recovery-Oriented Computing at Berkeley and Stanford • More dependable by recovering fast • Fast-* (notice error fast, diagnose error fast, fix error fast, reboot fast)
+30 Years of Research • Push of Technology • Pull of Customer Demand • New modes of long-term research • How to do research more efficiently? • New funding models?
Push of Technology • Statistics and IT • Statistical Machine Learning as the new AI • Statistics/randomization and Theory • Statistical Machine Learning to understand large systems • Understand behavior from millions of measurements
Push of Technology • Future is parallel • CS 2.0 - Time to rethink programming languages, environments, OS, … • We’ve heard this before; what’s different this time?
Conventional Wisdom (CW) in Computer Architecture • Old CW: Multiplies are slow, loads are fast • New CW: Memory is slow (140 clocks to DRAM) • Old CW: Power is free, transistors expensive • New CW: Power is expensive, transistors free • Can put more on a chip than you can afford to turn on • Old CW: Uniprocessor performance 2X / 1.5 yrs • New CW: Power Wall + Memory Wall = Brick Wall • Uniprocessor performance only 2X / 5 yrs (see the arithmetic below) • New CW: 2X CPUs per socket every ~2 years • More, simpler processors are more power efficient
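A quick bit of arithmetic makes the Brick Wall concrete: doubling every 1.5 years versus every 5 years diverges dramatically. The ten-year horizon below is chosen only for illustration:

```python
# Compare uniprocessor performance growth under the old and new conventional wisdom.

def growth(doubling_period_years, horizon_years=10):
    """Total speedup after horizon_years if performance doubles every doubling_period_years."""
    return 2 ** (horizon_years / doubling_period_years)

print(f"Old CW (2X / 1.5 yrs) over 10 years: ~{growth(1.5):.0f}X")  # roughly 100X
print(f"New CW (2X / 5 yrs)   over 10 years: ~{growth(5.0):.0f}X")  # 4X
```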
Massively Parallel Socket (mMMP) • Processor is the new transistor? • Intel 4004 (1971): 4-bit processor, 2312 transistors, 0.4 MHz, 10 micron PMOS, 11 mm² chip • RISC II (1983): 32-bit, 5-stage pipeline, 40,760 transistors, 3 MHz, 3 micron NMOS, 60 mm² chip • 4004 shrinks to ~1 mm² at 3 micron • 125 mm² chip in 65 nm CMOS = 2312 RISC IIs + Icache + Dcache • RISC II shrinks to ~0.02 mm² at 65 nm (see the scaling sketch below) • Caches via DRAM or 1-transistor SRAM (www.t-ram.com)? • Proximity Communication via capacitive coupling at > 1 TB/s? (Ivan Sutherland @ Sun / Berkeley)
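The shrink figures above follow from the rule of thumb that die area scales roughly with the square of the feature size. A back-of-the-envelope check in Python; this ignores design-rule and layout differences, so it only reproduces the order of magnitude:

```python
# Rough check of the shrink arithmetic: area scales as (new feature / old feature)^2.

def shrink(area_mm2, old_feature_um, new_feature_um):
    return area_mm2 * (new_feature_um / old_feature_um) ** 2

print(shrink(11, 10, 3))      # Intel 4004: 11 mm^2 at 10 um -> ~1 mm^2 at 3 um
print(shrink(60, 3, 0.065))   # RISC II: 60 mm^2 at 3 um -> ~0.03 mm^2 at 65 nm
                              # (same order as the ~0.02 mm^2 quoted above)
```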
Pull of Customer Demand • Cost-of-purchase/Performance • 20th-century home run • Benchmarks critical to making progress • What was neglected while pursuing Cost-Performance? • Dependability - PCs drop memory parity • Cost of Ownership much larger than Cost of Purchase • In 2004, 1% of U.S. households were victims of successful phishing attacks • 17% of businesses received threats of being shut down by denial-of-service attacks
Pull of Customer Demand • SPUR supersedes Cost-Performance in the 21st century • Security/Privacy – safe to use, store • As safe as 20th century banking? • Usability – cost of ownership • Ownership/purchase ratio = 20th century radio? • Reliability – really works • As reliable as 20th century telephony? • Need benchmarks involving people to make progress, as people are the major challenge in each aspect of SPUR
New Models of Pursuing Research? • 20th-century model of research success: long-term industrial research + Gov’t-funded long-term academic research • NAE report documents that this academia-industry synergy led to 17 $1B+ industries
State of Research Funding Today • Most industry research is shorter term • DARPA exiting long-term (experimental) IT research • ’03-’05 IPTO BAAs: 9 AI, 2 classified, 1 SW radio, 1 sensor net, 1 reliability; all have 12-to-18-month “go/no go” milestones • Academic-led funding reduced 50% (so far) from 2001 to 2004 • Faculty ≈ consultants in consortia led by a defense contractor; grants ≈ support for 1-2 students (~ NSF funding level) • NSF swamped with proposals, conservative • 2000 to 6500 proposals in 5 years • IT has the lowest acceptance rate at NSF (between 8% and 16%) • “Ambitious proposal” is a negative in reviews • Even with NSF funding, proposals are cut back to stretch NSF $; e.g., we got 3 x 1/3 faculty, 6 grad students, 0 staff, 3 years • (To learn more, see www.cra.org/research)
DARPA Grand Challenge 10/8/05 • Autonomous vehicles complete 132-mile off-road course in < 10 hours • Stanford 6 hrs, 53 min • CMU Red 7 hrs, 4 min • CMU Red Too 7 hrs, 14 min • Gray Insurance 7 hrs, 30 min • Gov’t-funded academic researchers took the top 3 spots in open competition • Defense contractors did not finish: placed 5th, 6th, 11th, 16th, 17th, 23rd of 23
Research More Efficiently? • Share common infrastructure (e.g., BSD Unix) vs. everyone doing it themselves? • For example, the recent RAMP project (Research Accelerator for Multiple Processors) • FPGA today = ~25 CPUs; 2X CPUs / 18 months; MPP? (see the projection sketch below) • Berkeley, CMU, MIT, Stanford, Texas, Washington agree on a common multiboard FPGA platform • Single hardware design, make boards for participants • Share development of “gateware”, SW, documentation • Better ideas via cooperation than do-it-yourself? • 6 groups evolve specifications, share work • Support from Xilinx, NSF infrastructure proposal • Attractive platform for HW & SW researchers? • New research standard, like BSD Unix?
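To illustrate the "~25 CPUs per FPGA, doubling every 18 months" point, here is a small projection of how many soft CPUs a shared multiboard platform might emulate over time. The board and FPGA counts below are hypothetical, not the actual RAMP design:

```python
# Hypothetical projection of emulated CPU count on a shared multiboard FPGA platform.

def cpus_per_fpga(years_from_2005, base=25, doubling_months=18):
    """Soft CPUs per FPGA, assuming the ~25-CPU baseline doubles every 18 months."""
    return base * 2 ** (years_from_2005 * 12 / doubling_months)

boards, fpgas_per_board = 8, 5  # invented numbers for a multiboard platform
for years in (0, 3):
    total = boards * fpgas_per_board * cpus_per_fpga(years)
    print(f"2005 + {years} yrs: ~{total:.0f} emulated CPUs")
```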
Why RAMP Attractive for Research? (comparison table of SMP vs. Cluster vs. Simulator vs. RAMP, not reproduced here)
New Research Funding Model? • Replicate research centers based primarily on industrial funding to expand the IT market (and to train the next generation of IT leaders) • Exciting, long-term technical vision • Industry largely funds • N companies, where N is 1 to 10? • Berkeley Wireless Research Center (BWRC): ~$5M per year (50% industry, ~5-10 companies) • Stanford Network Research Center (SNRC): ~$5M per year (80% industry, ~5-10 companies) • MIT T-Party: ~$4M per year (100% of $ from 1 company: Quanta) • How far can such centers scale? • Maybe only the Top 20 CS departments can afford them?
Summary • Past 30 yrs: IBM Research & Berkeley together helped create three $10B industries • Relational DB, RISC, RAID • In the 20th century, industry and academia created 17 $1B+ IT industries (11 involving IBM and/or Berkeley) • Challenges to creating more $1B+ industries in the next 30 yrs • Push: Statistics & CS, Parallelism • Pull: Security, Privacy, Usability, Reliability • Drop in funding for long-term research => need more participation and collaboration between academia and industry in the 21st century