The DataTAG Project
Internet2 Spring Member Meeting, 8 April 2003, Arlington, USA
Olivier H. Martin / CERN
http://www.datatag.org
DataTAG Mission: TransAtlantic Grid
• EU-US Grid network research
  • High-performance transport protocols
  • Inter-domain QoS
  • Advance bandwidth reservation
• EU-US Grid interoperability
• Sister project to EU DataGrid
Funding agencies & cooperating networks (logo slide)
EU collaborators
Brunel University, CERN, CLRC, CNAF, DANTE, INFN, INRIA, NIKHEF, PPARC, UvA, University of Manchester, University of Padova, University of Milano, University of Torino, UCL
US collaborators
ANL, Caltech, Fermilab, FSU, Globus, Indiana, Wisconsin, Northwestern University, UIC, University of Chicago, University of Michigan, SLAC, StarLight
Project information
• Two-year project started on 1/1/2002
  • Extension until 1Q04 under consideration
• Budget: 3.9 MEuro
  • 50% circuit cost and hardware, the remainder manpower
• WP1: Establishment of a high-performance intercontinental Grid testbed (CERN)
• WP2: High-performance networking (PPARC)
• WP3: Bulk data transfer validation and application performance monitoring (UvA)
• WP4: Interoperability between Grid domains (INFN)
• WP5 & WP6: Dissemination and project management (CERN)
Interoperability framework
[Diagram: GLUE as the interoperability framework between the EU part (DataGrid, DataTAG WP4) and the US part (GriPhyN, PPDG, iVDGL), coordinated through the HICB; feeds the HEP experiments and the LCG middleware selection]
The WorldGRID transatlantic testbed
A successful example of Grid interoperability across EU and US domains
Flavia Donno (former DataTAG WP4, LCG), Flavia.Donno@cern.ch
http://chep03.ucsd.edu/files/249.ppt (CHEP 2003, 24-28 March)
Solutions
• Different Grid architectures (VDT server/client vs. Computing Elements, Storage Elements, User Interfaces, Resource Broker, Replica Catalog, …)
[Diagram: UI, VDT Client, RC, SE, RB, IS, CE, VDT Server]
Final architecture
[Diagram: UI, VDT Client, RC, SE, RB, IS, CE, VDT Server]
Evolution of the testbed
• 2.5G circuit in operation since August 20, 2002
• On request from the partners, the testbed evolved from a simple layer 3 testbed into an extremely rich, probably unique, multi-vendor layer 2 & layer 3 testbed
  • Alcatel, Cisco, Juniper
• Direct extensions to Amsterdam (UvA)/SURFnet (10G) & Lyon (INRIA)/VTHD (2.5G)
• VPN layer 2 extension to INFN/CNAF over GEANT & GARR using Juniper's MPLS
• In order to guarantee exclusive access to the testbed, a reservation application has been developed (see the sketch below)
  • Proved to be essential
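The slides do not describe the reservation application itself; the following is a minimal, purely hypothetical sketch of the kind of exclusive-access booking logic it implies, with all names and data structures invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical sketch only: the real DataTAG reservation application is
# not described in the slides, so everything below is illustrative.
reservations = []   # list of (user, start, end) tuples

def reserve(user, start, hours):
    """Grant the slot only if it does not overlap an existing booking."""
    end = start + timedelta(hours=hours)
    for _, s, e in reservations:
        if start < e and s < end:      # overlap: testbed already taken
            return False
    reservations.append((user, start, end))
    return True

print(reserve("caltech", datetime(2003, 3, 1, 8), 4))    # True
print(reserve("inria",   datetime(2003, 3, 1, 10), 2))   # False: slot overlaps
```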
DataTAG connectivity
[Network map: major 2.5/10 Gbps circuits between Europe & USA. CERN 2.5G (upgrading to 10G) to StarLight/New York (3×2.5G, VPN layer 2); European peerings with GEANT, SuperJANET4 (UK), SURFnet (NL), ATRIUM/VTHD (FR, INRIA), GARR-B (IT); US peerings with Abilene, ESnet, MREN, STAR TAP]
Multi-vendor layer 2/3 testbed
[Diagram: CERN (Geneva), StarLight (Chicago) and INFN (Bologna) interconnected over 2.5 Gbps and 10 Gbps waves (wave triangle); Alcatel, Cisco, Juniper and Extreme Summit5i equipment; GbE access links; M = A1670 (layer 2 over SDH multiplexer); peerings with Abilene, Canarie, ESnet, GEANT, SURFnet, INRIA (Lyon)]
Phase I (iGRID2002): layer 2
Phase II: generic layer 3 configuration (Oct. 2002 – Feb. 2003)
[Diagram: servers behind GigE switches at CERN and StarLight, interconnected by C7606 routers over the 2.5 Gbps circuit]
Phase III: layer 2/3 (March 2003)
[Diagram: layer 1/2/3 configuration with CERN and StarLight servers, GigE switches, C7606 and J-M10 routers, A1670 multiplexer, C-ONS15454, A7770, 2.5G and 10G waves, 8×GigE and 2×GigE links; extensions to INRIA (VTHD), UvA (GEANT/SURFnet) and INFN/CNAF (GARR); Abilene, ESnet and Canarie at StarLight]
Phase IV (September 2003?)
[Diagram: same layout as Phase III upgraded to 10G, with a 10GigE switch and 10GigE server connections at CERN and a 10G wave to StarLight via the multiplexer and C-ONS15454; extensions to INRIA (VTHD), UvA (GEANT) and INFN/CNAF (GARR); Abilene, ESnet and Canarie at StarLight]
Main achievements
• GLUE interoperability effort with DataGrid, iVDGL & Globus
  • GLUE testbed & demos
• VOMS design and implementation in collaboration with DataGrid
  • VOMS evaluation within iVDGL underway
• Integration of GLUE-compliant components in DataGrid and VDT middleware
• Internet land speed records have been beaten one after the other by DataTAG project members and/or teams closely associated with DataTAG:
  • ATLAS Canada lightpath experiment (iGRID2002)
  • New Internet2 Land Speed Record (I2 LSR) by the NIKHEF/Caltech team (SC2002)
  • Scalable TCP, HSTCP, GridDT & FAST experiments (DataTAG partners & Caltech)
  • Intel 10GigE tests between CERN (Geneva) and SLAC (Sunnyvale) (Caltech, CERN, Los Alamos NL, SLAC)
  • 2.38 Gbps sustained rate, single TCP/IP flow, 1 TB in one hour (S. Ravot/Caltech)
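The single-flow records listed above depend on TCP windows far larger than the operating-system defaults. As a rough illustration (not the code used in any of these experiments), this is how an application would request large socket buffers; the 64 MiB figure is an assumed, illustrative value, and the kernel limits (e.g. net.core.rmem_max / wmem_max on Linux) must also be raised or the request is silently clamped.

```python
import socket

# Illustrative sketch: request large TCP buffers ("large windows") for a
# long fat pipe. 64 MiB is an assumed value, not taken from the slides.
BUF_SIZE = 64 * 1024 * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)

# Report what the operating system actually granted.
print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```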
10GigE data transfer trial
On Feb. 27-28, a terabyte of data was transferred in 3700 seconds by S. Ravot of Caltech between the Level3 PoP in Sunnyvale, near SLAC, and CERN, through the TeraGrid router at StarLight, from memory to memory with a single TCP/IP stream. This achievement translates to an average rate of 2.38 Gbps (using large windows and 9 kB "jumbo frames"). It beat the former record by a factor of ~2.5 and used the US-CERN link at 99% efficiency.
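A quick sanity check of these figures, as a sketch: the quoted rate matches one terabyte read as 2**40 bytes, and the window estimate below uses an assumed round-trip time of roughly 180 ms for the CERN-Sunnyvale path, which is not a number taken from the slides.

```python
# Sanity check: one terabyte in 3700 seconds over a single TCP stream.
bytes_transferred = 2**40                      # 1 TiB
duration_s = 3700
rate_bps = bytes_transferred * 8 / duration_s
print(f"average rate : {rate_bps / 1e9:.2f} Gbps")    # ~2.38 Gbps

# Window needed to sustain that rate on one flow (assumed ~180 ms RTT).
rtt_s = 0.180
bdp_bytes = rate_bps * rtt_s / 8               # bandwidth-delay product
print(f"window needed: {bdp_bytes / 2**20:.0f} MiB")  # ~51 MiB
```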
Conclusions
• TCP/IP performance issues in long-distance, high-speed networks have been known for many years.
• What is new, however, is the widespread availability of 10 Gbps A&R backbones as well as the emergence of 10GigE technology.
• Thus, the awareness that the problem requires a quick resolution has been growing rapidly over the last two years, hence the flurry of proposals:
  • HSTCP, Scalable TCP, FAST, GridDT, XCP, …
• Hard to predict which one will win, but simplicity and ease of deployment are definitely key to success!
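To illustrate what these proposals change, here is a small sketch comparing how quickly standard TCP and Scalable TCP regrow the congestion window after a single loss on a large-window path; the segment size, target window and loss responses are illustrative assumptions, not measurements from the project.

```python
# Sketch: RTTs needed to regrow the congestion window after one loss.
# Standard TCP: halve the window, then add one segment per RTT.
# Scalable TCP: reduce the window by 1/8, then grow it ~1% per RTT.
MSS = 9000                                # 9 kB jumbo frames (illustrative)
target = 50 * 2**20 // MSS                # ~50 MiB window, in segments

def rtts_to_recover(start, grow):
    """Count RTTs until the window is back at the target size."""
    w, rtts = start, 0
    while w < target:
        w = grow(w)
        rtts += 1
    return rtts

std  = rtts_to_recover(target // 2, lambda w: w + 1)
stcp = rtts_to_recover(int(target * 0.875), lambda w: w * 1.01)

print(f"standard TCP : {std} RTTs to recover")    # thousands of RTTs
print(f"Scalable TCP : {stcp} RTTs to recover")   # a handful of RTTs
```

With transatlantic round-trip times in the 100-200 ms range, that is the difference between several minutes of under-utilisation and a couple of seconds, which is why all of the schemes listed above change the window-growth rule in one way or another.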