Highlights from the recent ESnet International Meeting in Paris. Manuel Delfino, April 9, 1999
Topics that were touched on but which I will not cover, to avoid repetition or because of scope • Status of TEN-155 • Developments in Multicasting • Medical, weather, fusion, etc. applications • ICFA SCIC • On the topic of Asia: I will mention only concerns about the Europe-Asia interconnect
ESnet International Meeting. James Leighton, Department Head, Networking & Telecommunications Dept., Computing Sciences Directorate, Lawrence Berkeley National Laboratory. Paris, 17 Feb 1999
CURRENT STATUS: Statistics • Dec 1998: 32.4 Giga-packets accepted; 12.9 Tera-bytes accepted; 398 bytes/packet (average packet size) • Dec 1997: 15.6 Giga-packets accepted; 6.98 Tera-bytes accepted; 447 bytes/packet (average packet size)
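The quoted averages follow directly from the monthly totals; a minimal sanity-check sketch in Python (figures are the ones on the slide, variable names mine):

```python
# Sanity check of the slide's traffic statistics (figures quoted above).
packets_dec98 = 32.4e9   # packets accepted, Dec 1998
bytes_dec98 = 12.9e12    # bytes accepted, Dec 1998
packets_dec97 = 15.6e9   # packets accepted, Dec 1997
bytes_dec97 = 6.98e12    # bytes accepted, Dec 1997

print(round(bytes_dec98 / packets_dec98))   # -> 398 bytes/packet, as quoted
print(round(bytes_dec97 / packets_dec97))   # -> 447 bytes/packet, as quoted
print(round(bytes_dec98 / bytes_dec97, 2))  # -> 1.85x byte growth in one year
```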
OTHER TOPICS: Procurement Activities • Sprint Contract Extension • Have signed a two-year contract extension with Sprint • Will extend current contract to Aug '01 • Contract includes ATM port charge reductions for FY99 and additional reductions for FY00 and FY01 • Follow-on Procurement underway (ESnet3) • Will overlap Sprint contract, roughly from FY00-04 • Will ask for both production and advanced capabilities • one goal is SSI/P support up to 1 Tbps by '03 • Completed 1st vendor briefings during Jan '99 • Plan to launch formal RFP April '99 • Plan for contract signature by fall '99
CURRENT ISSUES: University Access • Shifting from reliance on vBNS • vBNS contract with NSF expires Mar '00 • no intent to renew/rebid (?) • but something else may be going on … • Focus now on peerings with Abilene & GigaPoPs • First GPoP peering up with CalREN2-North @ OC12 • Will do interim peering with Abilene • Peering with CalREN2-South underway @ T3 • First peering with Abilene up via Chicago PoP @ OC3 • Planning on peering with Atlanta GPoP at T3 • Will peer with Abilene across ATM GPoP
CURRENT ISSUES: Site Upgrades • ANL & LBNL: upgraded to OC12c ATM • OC12c successfully completed numerous tests • Currently able to test LBL<>ANL at up to 570 Mbps • Cut over to production 3 Feb '99
RESEARCH & ADV. TECH.: Current Activities (1/3) • Differentiated Services • testing & evaluating backbone technologies • e.g. traffic shaping, queue management, policing, classifiers, marking, etc. (see the token-bucket sketch below) • looking at overall initial architecture for ESnet DS approach • considering a low-end (Petite) and high-end (Gonzo) approach • trying to minimize startup complexity and cost • deploying clipper testbed facilities • now have routers interconnected at ANL, LBNL, SLAC • developing white paper on QoS ancillary issues: "what makes QoS so complex to deploy" • Participating member in I2/QBone projects
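Traffic policing and shaping of the kind listed above are typically built from token buckets; below is a minimal, hypothetical token-bucket policer sketch in Python to illustrate the mechanism (parameter values are illustrative, not ESnet's actual configuration):

```python
import time

class TokenBucketPolicer:
    """Minimal token-bucket policer: packets within the contracted rate
    conform; out-of-profile packets can be marked down or dropped."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0    # token fill rate in bytes/sec
        self.capacity = burst_bytes   # bucket depth (burst allowance)
        self.tokens = burst_bytes     # bucket starts full
        self.last = time.monotonic()

    def conforms(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True    # in profile: forward unchanged
        return False       # out of profile: mark down or drop

# e.g. police a flow to 1 Mbit/s with a 15 kB burst allowance
policer = TokenBucketPolicer(rate_bps=1e6, burst_bytes=15_000)
print(policer.conforms(1500))  # first full-size packet fits the burst
```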
NEW INITIATIVES: NGI • The Next Generation Internet (NGI) is a multi-year, multi-agency federal research and development program to develop, test, and demonstrate advanced networking technologies and applications. • DOE has $15M for FY99 • Several Calls for Proposals have been issued • Basic Technologies, Applications, Testbeds • ESnet is participating in several proposal submissions
NEW INITIATIVES: IT2/SSI • One of the Clinton Administration's major science and technology initiatives in the FY 2000 budget request to be sent to Congress will be a $366 million, or 28%, increase in information technology research. The program is called "Information Technology for the Twenty-First Century," or IT2. • DOE's program is known as the SSI/SSP (Science Simulation Initiative/Program) • Proposed agency budget increases: • National Science Foundation - $146 million • Department of Defense - $100 million • Department of Energy - $70 million • NASA - $38 million • NIH - $6 million • NOAA - $6 million • DOE's program contains a networking component • DOE's program calls for Tbps networking capability by '03 • ESnet has launched an RFP to meet such requirements
STAR TAP Status, February 1999, by Steven N. Goldstein
STAR TAP: Persistent Interconnect for NGI, Internet2, International High-Performance Networks. Connected or planned networks (topology map): CA*net 2, CERN, France, Australia, Israel, Japan, Netherlands, Denmark, Korea, Finland, Iceland, Singapore, Russia, Norway, Taiwan, Sweden. Source: http://www.startap.net/topology.html
The Many "Faces" of STAR TAP (courtesy Paul Zawada)
STAR TAP CONNECTIONS
Already connected:
• CA*Net 2 (Canada) 155 Mbps (http://www.canarie.ca)
• vBNS (NSF/MCI) 155 Mbps (http://www.vbns.net)
• DoE (ESnet) and NASA (NREN and NISN) share a 155 Mbps connection to the TAP (http://www.es.net)
• Abilene (UCAID/Internet2) (http://www.ucaid.org)
• SINGAREN (Singapore) 14 Mbps (http://www.singaren.net.sg)
• TransPAC (35 Mbps from Tokyo: Japan, Korea, Singapore, Australia...); potential for doubling capacity in '99 (http://www.transpac.org)
• TAnet II (Taiwan, ~15 Mbps of a 45 Mbps link)
Pending:
• NORDUnet (backbone connects IS, NO, SE, FI, DK) expected Nov 98; ~45 Mbps will be split off from 155 Mbps to New York (http://www.nordu.net)
• MirNET (6 Mbps link from Moscow) expected February '99 (http://www.mirnet.org)
• SURFnet (Netherlands) 155 Mbps to New York, with 45 Mbps split off to STAR TAP
• Israel (~45 Mbps, Inter-University Computation Center) delivery expected mid-'99
• Renater (France) is tendering for a 45 Mbps, or greater, link to the U.S., with a portion to STAR TAP
• CERN (~20 Mbps) direct to STAR TAP; expected April '99
Internet2: Update. Doug Van Houweling, dvh@internet2.edu. ESnet International Meeting, 17 February 1999
Internet2 Project Goals • Enable new generation of applications • Re-create leading edge R&E network capability • Transfer capability to the global production Internet
Progress • 2.5 years ago: Internet2 Project formed, October 1996 • 1.5 years ago: UCAID incorporated, October 1997 • 10 months ago: Abilene launched, April 1998
Today • 141 universities • 47 corporations • 7 gigapops connected to vBNS • Abilene in service w/ 20 institutions connected • Abilene peering with vBNS, ESnet • QoS -- QBone initiative launched • Middleware initiative launched
UCAID Member Universities: 141 members as of January 1999 (University of Puerto Rico not shown)
Internet2 Corporate Partners: 3Com, Advanced Network & Services, AT&T, Cabletron Systems, Cisco Systems, FORE, IBM, ITC^DeltaCom, Lucent Technologies, MCI Worldcom, Newbridge Networks, Nortel Networks, Qwest Communications, StarBurst Communications
Internet2 Corporate Sponsors: Bell South, Packet Engines, SBC Technology Resources, StorageTek, Torrent Technologies
Internet2 Corporate Members: Alcatel Telecom, Ameritech, Apple Computer, AppliedTheory, Bell Atlantic, Bellcore, British Telecom, Compaq/DEC, Deutsche Telekom, Fujitsu, GTE Internetworking, Hitachi, IXC Communications, KDD, Nexabit Networks, Nokia Research Center, Novell, NTT Multimedia, Pacific Bell, RR Donnelley, Siemens, Sprint, Sun Microsystems, Sylvan Learning, Telebeam, Teleglobe, Williams Communications
Abilene • Fall 1998: Demonstrated network at member meeting • January 1999: Abilene in full service • peering with: vBNS, ESnet • By December 1999: around 65 institutions connected
Abilene Network, February 1999 (map): router and access nodes, operational January 1999 or planned for 1999, at Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, and New York
Network Engineering Challenge • Combination of: high bandwidth, wide area, and intrinsically bursty applications • Need for multicast • Need for quality of service • Need for measurements
Network Architecture (diagram): GigaPoPs One through Four attach to the I2 interconnect cloud. A GigaPoP ("gigabit capacity point of presence") is an aggregation point for regional connectivity.
GigaPoPs, cont. (diagram): Universities A, B, and C attach to a GigaPoP, which connects them to the I2 interconnect cloud (e.g. vBNS, Abilene), to commodity Internet connections, and to a regional network.
Working Group Progress: IPv6, Measurement, Multicast, Network Management, Network Storage, Quality of Service, Routing, Security, Topology
Technical Innovation: Quality of Service • Chair: Ben Teitelbaum, Internet2 staff • Focus: Multi-network IP-based QoS • Relevant to advanced applications • Interoperability: carriers and kit • Architecture • QBone distributed testbed
Big Problem #1: Understanding Application Requirements • Range of poorly-understood needs • Both intolerant and tolerant apps important • Many apps need absolute, per-flow QoS assurances • Adaptive apps may require a minimum level of QoS, but can exploit additional network resources if available
Big Problem #2: Scalability • # flows through core >> # flows through edge • Goal: keep per-flow state out of the core • Design principles • Put “smarts” in edge routers • Allow core routers to be fast and dumb
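To illustrate that design principle, here is a toy sketch (all names, addresses, and classes are hypothetical): the edge router holds the per-flow policy table and stamps a class mark on each packet; the core chooses a queue from the mark alone and keeps no per-flow state:

```python
# Illustrative only: per-flow state lives at the edge, none in the core.
EDGE_POLICY = {                       # per-flow table, edge routers only
    ("198.51.100.7", 5001): "premium",
    ("198.51.100.9", 5002): "best_effort",
}

def edge_mark(packet: dict) -> dict:
    """Edge router: classify by (src, port) and stamp a class mark."""
    flow = (packet["src"], packet["port"])
    packet["mark"] = EDGE_POLICY.get(flow, "best_effort")
    return packet

def core_forward(packet: dict) -> int:
    """Core router: queue choice depends only on the mark, never the flow."""
    return 0 if packet["mark"] == "premium" else 1   # queue 0 served first

pkt = edge_mark({"src": "198.51.100.7", "port": 5001})
print(core_forward(pkt))   # -> 0: premium traffic goes to the priority queue
```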
Big Problem #3: Interoperability • Interoperability between separately administered and designed clouds (campus networks, GigaPoPs, backbone networks such as vBNS and Abilene), and between multiple implementations of network elements, is crucial if we are to provide end-to-end QoS.
Premium Service • Emulates a leased line • Contract: peak rate profile • PHB = "forward me first" (e.g. priority queuing, WFQ) • Policing rule = drop out-of-profile packets • On egress, clouds need to shape Premium aggregates to mask induced burstiness • (Diagram labels: GigaPoPs provide scalable access to wide-area resources; backbones: vBNS, Abilene)
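A minimal sketch of the "forward me first" PHB as strict priority queuing, with the drop-on-out-of-profile policing rule folded in (a toy model under my own assumptions; real routers would use priority queuing or WFQ as the slide notes, and egress shaping is omitted):

```python
from collections import deque
from typing import Optional

premium: deque = deque()      # Premium class queue
best_effort: deque = deque()  # everything else

def enqueue(packet: dict) -> None:
    """Apply the Premium policing rule: out-of-profile packets are dropped."""
    if packet["mark"] == "premium":
        if packet.get("in_profile", True):
            premium.append(packet)
        # else: silently dropped, per the policing rule on the slide
    else:
        best_effort.append(packet)

def dequeue() -> Optional[dict]:
    """PHB = "forward me first": Premium always drains before best effort."""
    if premium:
        return premium.popleft()
    if best_effort:
        return best_effort.popleft()
    return None

enqueue({"mark": "best_effort"})
enqueue({"mark": "premium", "in_profile": True})
print(dequeue()["mark"])  # -> premium, even though best effort arrived first
```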
Avoiding Balkanization (diagram): without shared infrastructure, each application domain (Physics, Digital Library, Instructional, Data Mining) builds its own QoS, security, directories, and storage on top of the TCP/IP network.
Internet2 International Collaborations • Building peer to peer relationships • Looking for similar goals/objectives and similar constituencies • Mechanism: Memoranda of Understanding • Signed: CANARIE, Stichting SURF, NORDUnet, TERENA*, SingAREN, JAIRC, UKERNA-INFN/GARR-DFN-Verein-RENATER
International collaborations cont’d • Network interconnection • Interconnection at STAR TAP (CAnetII, SURFnet, NORDUnet, SingAREN underway) • Second interconnection directly with Abilene (NY - NORDUnet, SURFnet) • Specific project collaboration • QBone (e.g. SURFnet an initial participant) • middleware • research/learning applications
Next Steps • Continue to interconnect member desktops and servers at high speed • Continue to support advanced applications development • focus on multi-campus implementations • Adopt, develop and implement • QoS end-to-end • middleware end-to-end • new business models
Next Generation Internet: Status of the Federal Program, with a focus on the DOE Program Contribution. George Seweryniak, Office of Science, U.S. Department of Energy, 18 February 1999
WHITE HOUSE organizational context (chart):
• Executive Office of the President
• Office of Science and Technology Policy
• Presidential Advisory Committee on High Performance Computing and Communications, Information Technology, and the Next Generation Internet
• National Science and Technology Council (Executive Committee) → Committee on Technology → Subcommittee on Computing, Information, and Communications R&D
• National Coordination Office (NCO) for Computing, Information, and Communications
• Working groups under the Subcommittee: Human Centered Systems (HuCS); Education, Training, and Human Resources (ETHR); Federal Information Services Applications Council (FISAC); High Confidence Systems (HCS); High End Computing and Computation (HECC); Large Scale Networking (LSN), which includes the Next Generation Internet Implementation Team
What is NGI? • "The goal of the NGI is to conduct R&D in advanced networking technologies, to demonstrate those technologies in testbeds that are 100 to 1,000 times faster than today's Internet [~ 1 megabit/sec], and to develop and demonstrate on those testbeds revolutionary applications that meet important national needs and that cannot be achieved with today's Internet." (100 to 1,000 times ~1 Mbit/s is roughly 100 Mbit/s to 1 Gbit/s.) • In essence, "to let our researchers live in the future."
Wide Area, Data Intensive and Collaborative Computing • This is an end-to-end problem: many different types of objects need to be connected to and coordinated by the networks • Objects (diagram): nationwide networks, scientists' desktops, terascale simulations, scientific instruments, Petabyte/year experimental facilities, shared virtual environments
Science Simulation Program Structure (diagram): Global Systems, Combustion, and Basic Science applications; Computer Science & Enabling Technologies; Computing and Communications Facilities
Program Elements • Algorithms, models, methods, and libraries: meshing, scalable algorithms, predictability, and scientific data exploration • Problem solving environments and tools: scalable tools, software components, programming models, PSE abstractions • Distributed computing and collaboration technology: remote data access, multi-resource scheduler, distributed software development • Visualization and data management systems: multi-resolution methods, feature-driven query, quantitative methods, new metaphors for complex data
SSP Builds on Strong SC Base Program (diagram; activities supported by MICS were shown in blue; * denotes part of NGI activities)
• Underlying Technologies (external to SC): hardware, software, networking
• Fundamental Research: Applied Mathematics, Computer Science, Networking
• R&D for Applications: Advanced Computing Software Tools, Collaboratory Tools, Scientific Application Pilots, Collaboratory Pilots
• Scientific Computing Applications (Disciplinary Science and Engineering): Global Climate Systems, Combustion Modeling, Materials Design, Subsurface Transport, Data Management for High-Energy Physics, Nuclear Physics, Structural Biology, ...
• Testbeds and Facilities: National Energy Research Scientific Computing Center (NERSC), Advanced Computing Research Facilities, Energy Sciences Network (ESnet), Advanced Networks*
Primary Technical Challenges for the SSP: Usable and Affordable O(10^14) FLOPS Computer Systems • Multiple paths to 40 TF: vendor integrated, laboratory integrated, and new technologies • improve peak performance • improve fraction of peak performance sustained • improve price/performance • Commercial vs. PC clusters: $20/Mflop vs. $4/Mflop in 2000; $5/Mflop vs. $1/Mflop in 2005 • New technologies (PIM, HTMT) • Augmented nodes (in discussion phase with IBM) • DOE labs currently investigating all options
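For scale, the quoted price/performance targets imply the following rough system costs at 40 TF (my arithmetic from the slide's $/Mflop figures, not from the slide itself):

```python
# Back-of-envelope cost of a 40 TF system at the quoted $/Mflop targets.
MFLOPS_IN_40TF = 40e12 / 1e6  # 40 TFLOPS expressed in Mflops = 4e7

for year, commercial, cluster in [(2000, 20, 4), (2005, 5, 1)]:
    print(f"{year}: commercial ~${commercial * MFLOPS_IN_40TF / 1e6:,.0f}M, "
          f"PC cluster ~${cluster * MFLOPS_IN_40TF / 1e6:,.0f}M")
# 2000: commercial ~$800M, PC cluster ~$160M
# 2005: commercial ~$200M, PC cluster ~$40M
```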
Presentations from European participants in the ESnet meeting • All European participants presented the status of their national networks and their international connectivity • I have picked two presentations (Italy and Germany) to illustrate some points • Common themes are Gigabit testbeds and plans, and QoS/Differentiated Services • The idea of "justified use" vs. "general Internet"
GARR-B Status & Plans. ESnet International Meeting, Paris, 17-18 February 1999. Federico Ruggieri, CNAF - Bologna