Internet-2, NGI and TEN-155: Lessons for the (European) Academic and Research Community David Williams CERN - IT Division Supercomputer’98 Mannheim David.O.Williams@cern.ch Slides: http://nicewww.cern.ch/~davidw/public/Mannheim/Mannheim.ppt
What about me? • Other side of the Röstigraben • CERN • Never a networking specialist • No longer a manager • ICFA-NTF • EU • So this talk is all my own views, but not all my own work • Too much to tell you about
Outline of this Talk • Does the Internet work? • Applications • Technical changes • America • Internet-2 -- NGI • Europe • TEN-34 and successors -- 5FP • What are the lessons? • Networks for supercomputer users • The (general-purpose) Internet in Europe
(How well) Does the (academic) Internet work? Not as well as it should
April 1998 Packet Loss (extract) - loss rates in %
US university      from Fermi   from CERN
brown.edu             16.15       17.29
umass.edu             14.92       15.77
pitt.edu              10.71       15.52
harvard.edu            8.06        9.48
uoregon.edu            4.18        0.33
washington.edu         2.97        0.30
princeton.edu          1.50        5.14
cmu.edu                1.49       14.61
hawaii.edu             1.18        0.70
duke.edu               0.30        6.60
umd.edu                0.16        5.07
indiana.edu            0.14       15.25
mit.edu                0.02        0.27
NSF Awards NLANR Group at UCSD $2.08 Million for Measurement and Analysis of Internet Infrastructure (1/2) • UNIVERSITY OF CALIFORNIA, SAN DIEGO -- May 27, 1998 • The National Science Foundation has awarded $2.08 million over 30 months to the University of California, San Diego to monitor and analyze the continent-wide research network that is a key component of the next generation of Internet technologies.
NLANR Award (2/2) • The award establishes the Measurement and Operations Analysis Team (MOAT) as a formal group within the National Laboratory for Applied Networking Research to analyze network traffic patterns and traffic behavior, evaluate service models, and conduct research to enhance the NSF's very high performance Backbone Network Service • See http://www.npaci.edu/online/ and http://moat.nlanr.net/ for more info
CERN to SLAC - monitoring over 1 week in December 1997 • Round-trip time about 180 ms • Packet loss rate very low
CERN to Uni Tokyo - same week • RTT 350-500 ms, with big daily peaks • Packet loss worse: some samples at 20-30%, plus one bad peak
Things do improve - CERN to Tokyo, May 98 • RTT at 300-400 ms • Daily peak effect much less • Packet loss quite reasonable
Daily packet loss structure on a congested route • 8 quiet hours at night, from ~01.00 to 09.00 CET • Peaks of 50% loss during the day
Packet loss rate - Fermilab to Brown University - April 98 • Performance inside the US can be bad too!
RTT over the same period - CERN to Brown • RTT = 250 to 400+ msec • Compare 180 msec to SLAC
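The plots summarised on the preceding slides come from long-running echo-probe monitors between sites. As a minimal sketch of the idea (the probe() below is simulated with random numbers; the real monitors sent ICMP pings and recorded each sample):

```python
# Sketch of RTT / packet-loss monitoring: send periodic echo probes,
# then summarise round-trip time and loss rate over the sample window.
# probe() is a stand-in for a real ping; its parameters are illustrative.
import random
import statistics

def probe(loss_rate=0.05, base_rtt_ms=180.0, jitter_ms=40.0):
    """Simulate one echo probe: returns RTT in ms, or None if lost."""
    if random.random() < loss_rate:
        return None  # packet never came back
    return base_rtt_ms + random.uniform(0.0, jitter_ms)

def summarise(samples):
    """Compute packet loss (%) and median RTT from a list of probes."""
    lost = sum(1 for s in samples if s is None)
    rtts = [s for s in samples if s is not None]
    loss_pct = 100.0 * lost / len(samples)
    return loss_pct, (statistics.median(rtts) if rtts else None)

random.seed(1)
samples = [probe() for _ in range(1000)]
loss, rtt = summarise(samples)
print(f"loss = {loss:.1f}%  median RTT = {rtt:.0f} ms")
```

Aggregated hour by hour over a week, exactly this kind of summary produces the daily-peak structure shown on the congested routes.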
HEP networking seen from Canada (network map showing NACSIS, CAnet, DFN, ESnet and TEN-34)
Lessons so far? • Not everything is wonderful in the USA • A few universities badly connected, even some very rich and famous ones • Everything depends on the detailed routing • InterNet = Interconnects + Nets • We know how to build the nets well • We have not yet learned how to build the interconnects as well
Three factors for the future Technology Economics Organisation
Application generations • First = what we are used to • Second = what is about to enter general production • Third = interesting, but needs lots of bandwidth • PLUS • Interactive = human intimately in loop
First generation apps • e-mail • few 10s of packets; tolerant of high packet loss; non-interactive • Web access • quite similar; not really very interactive • manually initiated file transfer • more packets, but not very interactive • telnet, X-window • interactive; people start to get very disturbed with delays ~>5 sec; sensitive to packet loss rates
First generation apps (summary) • So far, so good • But not very adventurous • Fortunately only the interactive telnet and X traffic is very sensitive to packet loss
Second generation apps (1) • streaming video and audio for individuals • lots of data • quite strong RT constraints • intolerant of packet loss (esp. audio) • groupware for collaboration at a distance • fundamentally important for collaborative science • shared software development (here or next?) • brainstorming (shared whiteboards, good access to Web, incl. graphics, video and audio, ...) • weekly meetings (10 people, 5 locations, …) • must be easy to use, reliable, good performance
Second generation apps (2) • automated data access and transfer • contrast with manually initiated file xfer • automated file transfer systems for ‘production’ • basic remote super-computer services • general client-server systems • shared file systems (e.g. AFS) • form a special subset • starting to become part of the “normally expected” infrastructure in HEP • need reasonable bandwidth and reliability • so none to Japan, quite a bit across Atlantic, but less than inside Europe and inside USA
Second generation apps (summary) • When they can be widely deployed they will bring fundamental improvements to collaborative science on national, European, and inter-national levels • But they do require significant improvements in the reliability of the links being used (more bandwidth, lower packet loss)
Third generation apps • Collaboratories and Advanced groupware • trying to break the distance barrier for individuals who work together • Remote control rooms • telescopes or physics experiments or ….. • trying to break the distance barrier for teams looking after complex technical equipment • Remote VR or other very advanced graphics • trying to break the distance barrier for people working with computers
Third generation apps (summary) • No shortage of interesting ideas • Probably? a shortage of bandwidth • Or of money to pay for it • Or of means to make everything work on an inter-continental basis
Bandwidth (1/2) • Presently we drive laser signals down fibre optic cables • (Optically) amplify them every ~100 km • Cables are laid under the sea, on power lines, and in conduits along roads and railways • The fibre itself is not expensive -- O(1 DEM/m) • Undersea amplifiers have to consume very little power (fed down the cable itself, at 10 kV DC across the Atlantic) and operate unattended for 25 years. So there are only 4 fibre pairs per undersea cable, cf 24, 48 and more overland. This is why undersea bandwidth is inherently more expensive today.
Bandwidth (2/2) • Signal processing today is electronic. 2.5 Gbps is standardly available, 10 Gbps a little expensive, but coming into use. • Big movement is to use multiple wavelengths. 4x in regular use by carriers, 32x seems almost here, some people guess that 1024x is feasible • Can you do everything optically?? Who knows?
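The per-fibre capacity implied by these figures is simple arithmetic: wavelengths multiplied by the per-wavelength signalling rate. A quick check using the numbers on this slide:

```python
# Back-of-the-envelope WDM capacity of one fibre, using the figures
# quoted above (2.5 / 10 Gbps electronics, 4x / 32x wavelength counts).
def fibre_capacity_gbps(wavelengths, gbps_per_wavelength):
    """Aggregate capacity of a single fibre carrying WDM signals."""
    return wavelengths * gbps_per_wavelength

print(fibre_capacity_gbps(4, 2.5))   # 4 wavelengths at 2.5 Gbps -> 10.0 Gbps
print(fibre_capacity_gbps(32, 10))   # 32 wavelengths at 10 Gbps -> 320 Gbps
```

At the speculative 1024 wavelengths, a single fibre at 10 Gbps per wavelength would already exceed 10 Tbps, which is why the all-optical question matters.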
Switches and routers • These are basically specialised computers, and benefit from the overall improvement in computer technology • Critical functions being incorporated in specialised electronics • High bandwidth backplanes/switching fabrics • There seem to be no barriers to progress • And a lot of very smart people and companies hoping to make money from Tbps routers
Service levels • Perhaps the next “religious war” • Telecoms people talk about “quality of service” and know that ATM provides it • Internet fans now talk about “differentiated service” (instead of “integrated services”), and are thinking about who decides what makes a service different, and how you can profit from that knowledge
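The "differentiated service" idea eventually took concrete form as per-packet code points that routers can use to apply different forwarding treatment. As an illustrative modern sketch (not something deployed at the time of this talk), an application can mark its traffic via the IP TOS byte; EF (Expedited Forwarding, DSCP 46) is the standard low-latency class:

```python
# Illustrative sketch of differentiated service at the socket level:
# mark outgoing packets with a DSCP code point so that routers along
# the path can give them distinct treatment.
import socket

EF_TOS = 46 << 2  # DSCP occupies the top 6 bits of the IP TOS byte -> 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# All datagrams sent on this socket now carry the EF marking.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
```

Marking is the easy part; the open questions on this slide (who decides which traffic deserves which class, and who pays) are exactly the ones the marking mechanism leaves unanswered.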
Some history • Agencies and universities • NSFnet and 1994-95 • No NRN • UCAID
Agencies and universities • It is important (for Europeans) to understand that in the USA the financing of the various agencies of the federal government (such as NSF, DoE, DOD, NIH,..) is entirely separate from the funding of the universities, which is either on a private or state basis. • Although you can find some parallels in Europe (Germany in particular) the separation in the US seems to be very strong. • This all means that until now there has been no national A&R network in the US
NSFnet and 1994-95 • The NSFnet played a key role in the development of the Internet during the period from ~1986 until 1994/95. It was effectively the overall backbone, interconnecting the various agency networks, and its capacity was upgraded from 56 kbps to 1.5 Mbps to 45 Mbps during that time. • It was decided to “privatise” this function, and that took place during 1994/95, and NSFnet had been decommissioned by 31 March 1995.
No NRN • The NSFnet backbone was “replaced” by whichever Internet Service Provider (ISP) the university took its business to. • Most (and essentially all of the research universities) chose the InternetMCI service of MCI, who had run the NSFnet. • 95 and 96 were years of very fast growth of the general purpose Internet in the US (home and business use was booming). • Lots of congestion and frustration, leading in October 1996 to the first Internet-2 meetings
Internet-2 • This statement may not be PC (in the US), but Internet-2 is a good approximation to the NRN that the US has never had • Though Internet-2 per se has no (direct) relations with or connection to the national labs • Top priority is on production services, and not on research into networking • Though they do have plans to encourage advanced apps and to investigate and deploy QoS features
UCAID • The University Corporation for Advanced Internet Development is, to all intents and purposes, the supervisory board of a national university network • Responsible for Internet-2 • 122 (research) universities are members • 14 corporate partners (AT&T, 3Com, ANS, Bay, Cabletron, Cisco, Fore, IBM, Lucent, MCI, Newbridge, Nortel, Qwest, StarBurst) • Chief executive is Doug van Houweling
GigaPoPs (1/2) • One theme of Internet-2, which I personally find very interesting, is to construct GigaPoPs (which officially stands for Gbps point-of-presence) • All sites wanting to connect in a given region (city, state, …) connect to a commonly located, operated and funded Point of Presence • Makes it far simpler, and far more competitive, for different backbone providers to connect up GigaPops • Also simplifies inter-connection with “other networks” (such as ESnet) with decent performance
GigaPoPs (2/2) • Instead of universities and research institutes needing to get to each carrier’s Point of Presence, GigaPoPs allow the universities to specify where the carriers must come to • It would be a good idea (according to me) for the (A&R community in) European countries, regions and cities to invest in such EuroPoPs or UserPoPs
vBNS backbone • Until recently the de facto backbone for I2 was the NSF’s vBNS, provided by MCI • The very-high-performance Backbone Network Service was initially created as a fast interconnect between NSF’s Supercomputer Centres • It basically provides 622 Mbps links • During 1997/98 it has been transformed into the Connections Program and some 100? universities are now connected to it
More competition • Recently (15 April 98) UCAID announced that Abilene will form an alternative backbone for Internet-2 • Abilene is based on the use of the Qwest fibre optic network, with equipment from Cisco and Nortel (Northern Telecom) • I have seen essentially no discussion of the financial terms, but it is clear that some of this is supported financially by the three companies concerned
More on Qwest • Qwest Teams with Cisco to Build the Next Generation of High-Speed Voice/Data Networks • Dr. Shafei explained that Qwest's network starts with 48 fibers (with the capability to add ten times as many fibers through additional in-place conduits), bidirectional, line switching OC-192 ring SONET nationwide network. Each fiber can carry 8 wave division multiplexing (WDM) windows, where each WDM window has a bandwidth of 10 gigabits per second, providing Qwest with the potential of a multi terabit-per-second capability. • From a 30 Sept 1997 Press Release
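The "multi terabit-per-second" claim in the press release follows directly from the quoted figures: 48 fibres, 8 WDM windows per fibre, 10 Gbps per window.

```python
# Checking the aggregate capacity implied by the Qwest press release.
fibres = 48
wdm_windows_per_fibre = 8
gbps_per_window = 10

total_gbps = fibres * wdm_windows_per_fibre * gbps_per_window
print(total_gbps / 1000)  # 3.84 Tbps
```

So the installed fibre alone supports nearly 4 Tbps, before counting the tenfold expansion possible through the in-place conduits.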
Overview (1/2) • NGI initiative = multi-agency federal R&D programme for: • developing advanced network technologies • developing revolutionary apps needing advanced nets • demonstrating via testbeds 100x-1000x faster end-to-end than today’s Internet • i.e. not the universities (directly) • Started 1 October 1997 (FY’98) • Normally said to be 3-year program (I have seen 5) • DARPA, NASA, NIH, NIST, NSF all involved • DoE from FY’99??
Overview (2/2) • This is the real place where the “leading-edge” R&D is being performed • But… insofar as NSF and hence vBNS are part of NGI, the project plays an important role in getting Internet-2 off the ground • Impressive (to me) how far the politicians (Gore et al, but not only him) have understood the economic importance of Internet evolution • One of my worries in Europe….