Welcome: June 1998 NOW Finale. David E. Culler, 6/15/98
NOW Finale
NOW Project Timeline
[Timeline figure, 1/94 through 6/98: network technologies ATM/FDDI, Myrinet, SCI, Gigabit Ethernet, and VIA; start of funding; systems NOW 0, NOW I, and NOW II; milestones including the Case for NOW, NOW Sort, the two ASPLOS workshops, Inktomi, and NPACI; 1st and 2nd PhDs leading to many PhDs; courses CS 252, CS 258, and CS 267 (offered twice).]
Metrics of Success
• Project goals?
• Papers published?
• Technology transfer?
• Adoption of approach in the real world?
• Students produced?
• Marriages?
• Research results?
• Unexpected research results?
• All of the above?
Project Goals
• Fundamental change in how we design large-scale computing systems
  • snap together commodity components
  • self-managing, self-tuning, highly available
• Make the “killer network” real
  • realize the potential of emerging hardware technology
  • and push its effect through the rest of the system
• Integrated system on a building-wide scale
  • pool of resources (processors, disks, memory)
  • remote processor and memory closer than local disk
  • federation of systems with local and global roles
• The right way to build Internet services
NOW Software Components
[Architecture figure: parallel applications and large sequential applications run over Sockets, Split-C, MPI, HPF, and vSM, layered on Active Messages and Global Layer Unix (with name server and scheduler); each Unix (Solaris) workstation hosts a virtual network (VN) segment driver and an AM L.C.P.; the workstations are joined by a Myrinet scalable interconnect.]
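(A minimal illustrative sketch in C of the active-message idea underlying this communication layer: each message names a handler that runs on arrival, so data flows directly into the computation without intermediate buffering. This is not the real GAM/AM-II API; am_request, handler_table, and the in-process "delivery" are hypothetical stand-ins, and the network is simulated locally.)

    /* Active-message sketch: a message carries a handler index plus arguments;
     * delivery means invoking that handler at the destination.  Illustrative only. */
    #include <stdio.h>

    typedef void (*am_handler_t)(int src_node, int arg0, int arg1);

    /* Handler table: messages carry an index into this table. */
    static am_handler_t handler_table[8];

    /* Simulated "network": deliver immediately by invoking the named handler. */
    static void am_request(int dest_node, int handler_idx, int arg0, int arg1)
    {
        (void)dest_node;              /* one process stands in for all nodes here */
        handler_table[handler_idx](0, arg0, arg1);
    }

    /* Example handler: fold a remote contribution into local state. */
    static int local_sum = 0;
    static void sum_handler(int src_node, int value, int unused)
    {
        (void)src_node; (void)unused;
        local_sum += value;
    }

    int main(void)
    {
        handler_table[0] = sum_handler;
        for (int i = 1; i <= 4; i++)
            am_request(/*dest_node=*/1, /*handler_idx=*/0, i, 0);
        printf("local_sum = %d\n", local_sum);   /* prints 10 */
        return 0;
    }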
NOW Publications
• Over 40 papers and counting
• Wide range of important venues
  • IEEE Micro, ACM TOCS, ISCA, ASPLOS, SOSP, SIGMETRICS, OSDI, SIGMOD, SPAA, SC, IPPS/SPDP, JSPP, USENIX, Hot Interconnects, SW Prac. and Exp., SPDT, HPCA, …
• Countless presentations
NOW Students
• Moved on
  • Mike Dahlin (UT), Steve Rodriguez (NetApp), Steve Luna (HP), Lok Tin Liu (Intel), Cedric Krumbein (Microsoft)
• Moving on
  • Doug Ghormley (Sandia), Randy Wang (Princeton), Amin Vahdat (Duke), Andrea Arpaci-Dusseau (Stanford), Steve Lumetta (UIUC), Rich Martin (Rutgers)
• Finishing
  • Remzi Arpaci-Dusseau, Satoshi Asami, Alan Mainwaring, Jeanna Neefe Mathews, Drew Roselli, Nisha Talagala
• On to other projects in CS
  • Brent Chun, Kim Keeton, Chad Yoshikawa, Fred Wong
• And several undergrads
  • Josh Coates, Alec Woo, Eric Schein, ...
Comm. Performance => Evaluation
• Demonstrated on LogP micro-benchmarks with GAM
• Rich Martin (9:25): Sensitivity to Network Characteristics
• From “NOW Communication Architecture,” Jan 1994 Retreat
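(A back-of-the-envelope sketch in C of the LogP cost accounting these micro-benchmarks exercise: L is network latency, o is per-message send/receive overhead, g is the gap between messages, and P is the number of processors. The one-way time L + 2o and the pipelined estimate are standard LogP facts; the parameter values below are placeholders, not measured NOW/GAM numbers.)

    /* LogP cost estimates.  Parameter values are hypothetical, for illustration. */
    #include <stdio.h>

    typedef struct { double L, o, g; int P; } logp_t;

    /* One-way time for a single small message: send overhead + latency + receive overhead. */
    static double one_way(logp_t m)           { return m.o + m.L + m.o; }

    /* Time until the last of n pipelined messages arrives (assumes g >= o). */
    static double n_messages(logp_t m, int n) { return m.o + (n - 1) * m.g + m.L + m.o; }

    int main(void)
    {
        logp_t m = { .L = 5.0, .o = 3.0, .g = 6.0, .P = 32 };  /* microseconds, placeholder values */
        printf("one-way small message: %.1f us\n", one_way(m));      /* 11.0 us */
        printf("round trip:            %.1f us\n", 2 * one_way(m));  /* 22.0 us */
        printf("100 pipelined msgs:    %.1f us\n", n_messages(m, 100));
        return 0;
    }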
Novel System Design Techniques
• Andrea Arpaci-Dusseau (9:50): Implicit Coscheduling: From Simulation to Implementation and Back Again
• From “On Self-Organizing Systems,” June 1995 Retreat
Understanding Parallel Application Performance
• Frederick Wong (10:25): Understanding Application Scaling: NAS Parallel Benchmarks on the NOW and SGI Origin 2000
• From “Case for NOW,” Jan 1994 Retreat
Fast Parallel I/O
• Remzi Arpaci-Dusseau & Eric Anderson (10:50): Robust I/O Performance in River
Automatic Network Mapping
• Lab Tour
Scalable Services
• Wingman/NOW transcoding proxy demo
• [Figure: information appliances and stationary desktops served by scalable servers]
Virtual Networks
• Alan Mainwaring (1:00): Communication Retrospectives
• From Jan 1994 Retreat
New Look at File Systems
• Drew Roselli (1:25): Huge File Traces
• Mike Dahlin (1:50): xFS and Beyond
• Randy Wang (2:45): Intelligent Disks
Cluster Design
• Steve Lumetta (3:10): Trends in Cluster Architectures
• From Jan 1994 Retreat
Vast, Cheap Storage
• Nisha Talagala and Satoshi Asami (3:35): Large-Scale Storage Devices
Beyond Clusters
• Amin Vahdat (3:50): WebOS: Infrastructure for World-Wide Computing
New Scale and New Technology
• Matt Welsh: Millennium
• Philip Buonadonna: VIA
• Eric Brewer: The Pro-active Infrastructure
Many Thanks
• To all of you visitors for coming
  • and for guiding us through many retreats
  • and for tremendous support
• To the CS Division
  • an environment that made it possible
• To an incredible group of students who made NOW a successful project
  • by any metric
• I think you will enjoy these final presentations