
Challenges and Innovations in High-Energy Physics Networks for Global Collaborations

Explore the challenges and advancements in managing vast amounts of data for international scientific collaborations in high-energy physics, nuclear physics, and astrophysics. Learn about the Petabyte to Exabyte transition, Large Hadron Collider experiments, and global networking strategies.



Presentation Transcript


  1. HENP Networks and Grids for Global Virtual Organizations. Harvey B. Newman, California Institute of Technology. TIP2004, Internet2 HENP WG Session, January 25, 2004

  2. The Challenges of Next Generation Science in the Information Age
  • Flagship applications:
    • High Energy & Nuclear Physics, Astrophysics sky surveys: TByte to PByte “block” transfers at 1-10+ Gbps
    • eVLBI: many real-time data streams at 1-10 Gbps
    • BioInformatics, clinical imaging: GByte images on demand
  • HEP data example: from Petabytes in 2003, to ~100 Petabytes by 2007-8, to ~1 Exabyte by ~2013-15
  • Provide results with rapid turnaround, coordinating large but limited computing and data-handling resources, over networks of varying capability in different world regions
  • Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, quantifiable high performance
  • Petabytes of complex data explored and analyzed by 1000s of globally dispersed scientists, in hundreds of teams

  3. Large Hadron Collider (LHC), CERN, Geneva: 2007 Start
  • pp collisions at √s = 14 TeV, L = 10^34 cm^-2 s^-1
  • 27 km tunnel in Switzerland & France
  • First beams: April 2007; physics runs: from Summer 2007
  • Experiments: ATLAS and CMS (pp, general purpose; also heavy ions), TOTEM, ALICE (heavy ions), LHCb (B-physics)

  4. Four LHC Experiments: The Petabyte to Exabyte Challenge
  • ATLAS, CMS, ALICE, LHCb: Higgs + new particles; quark-gluon plasma; CP violation
  • 6000+ physicists & engineers; 60+ countries; 250 institutions
  • Data: tens of PB in 2008; to ~1 EB by ~2015
  • Computing: hundreds of TFlops, to PetaFlops

  5. LHC: Higgs decay into 4 muons (tracker only); 1000X the LEP data rate. 10^9 events/sec, selectivity: 1 in 10^13 (like picking out 1 person in a thousand world populations)
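
As a quick illustration of what that selectivity means, here is a back-of-envelope calculation using only the numbers quoted on the slide (10^9 events/sec and a 1-in-10^13 selection); the variable names are ours.

```python
# Back-of-envelope: how often does a 1-in-10^13 event show up
# at an interaction rate of 10^9 events per second?
event_rate_hz = 1e9          # collisions per second (from the slide)
selectivity = 1e-13          # fraction of events of interest (from the slide)

selected_rate_hz = event_rate_hz * selectivity   # ~1e-4 per second
seconds_between = 1 / selected_rate_hz           # ~10,000 s

print(f"~{selected_rate_hz:.0e} selected events/s, "
      f"i.e. one every ~{seconds_between / 3600:.1f} hours")
```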

  6. LHC Data Grid Hierarchy (developed at Caltech). Emerging vision: a richly structured, global dynamic system
  • CERN/Outside resource ratio ~1:2; Tier0 / (Σ Tier1) / (Σ Tier2) ~ 1:1:1
  • Online System → CERN Center (Tier 0+1): ~PByte/sec from the experiment, ~100-1500 MBytes/sec recorded; PBs of disk, tape robot
  • Tier 0+1 → Tier 1 centers (FNAL, IN2P3, INFN, RAL): ~10 Gbps
  • Tier 1 → Tier 2 centers: 2.5-10 Gbps
  • Tier 2 → Tier 3 (institutes, with physics data caches): ~2.5-10 Gbps
  • Tier 3 → Tier 4 (workstations): 0.1 to 10 Gbps
  • Tens of Petabytes by 2007-8; an Exabyte ~5-7 years later
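
To make the hierarchy concrete, here is a minimal sketch that records the tiers and the nominal link speeds quoted above as a simple data structure; the layout and names are ours, not part of any experiment software.

```python
# A minimal sketch of the LHC Data Grid tier hierarchy described above.
# Link speeds are the nominal figures from the slide; the structure is
# illustrative only.
tier_links = [
    # (from,           to,                              nominal bandwidth)
    ("Online System",  "CERN Tier 0+1",                 "~100-1500 MBytes/sec"),
    ("CERN Tier 0+1",  "Tier 1 (FNAL/IN2P3/INFN/RAL)",  "~10 Gbps"),
    ("Tier 1",         "Tier 2 centers",                "2.5-10 Gbps"),
    ("Tier 2",         "Tier 3 institutes",             "~2.5-10 Gbps"),
    ("Tier 3",         "Tier 4 workstations",           "0.1-10 Gbps"),
]

for src, dst, bw in tier_links:
    print(f"{src:>16} -> {dst:<32} {bw}")
```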

  7. Bandwidth Growth of Int’l HENP Networks (US-CERN Example)
  • Rate of progress >> Moore’s Law:
    • 9.6 kbps analog (1985)
    • 64-256 kbps digital (1989-1994) [X 7-27]
    • 1.5 Mbps shared (1990-93; IBM) [X 160]
    • 2-4 Mbps (1996-1998) [X 200-400]
    • 12-20 Mbps (1999-2000) [X 1.2k-2k]
    • 155-310 Mbps (2001-02) [X 16k-32k]
    • 622 Mbps (2002-03) [X 65k]
    • 2.5 Gbps (2003-04) [X 250k]
    • 10 Gbps (2005) [X 1M]
  • A factor of ~1M over the period 1985-2005 (a factor of ~5k during 1995-2005)
  • HENP has become a leading applications driver, and also a co-developer of global networks
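
The bracketed growth factors are simply ratios to the 9.6 kbps starting point; the small sketch below roughly reproduces the rounded values for a few of the milestones listed above (the helper and its formatting are ours).

```python
# Roughly reproduce the bracketed growth factors: each bandwidth divided
# by the 9.6 kbps analog link of 1985 (values taken from the list above).
BASELINE_KBPS = 9.6

milestones_kbps = {
    "64-256 kbps digital (1989-94)": (64, 256),
    "1.5 Mbps shared (1990-93)":     (1_500, 1_500),
    "622 Mbps (2002-03)":            (622_000, 622_000),
    "2.5 Gbps (2003-04)":            (2_500_000, 2_500_000),
    "10 Gbps (2005)":                (10_000_000, 10_000_000),
}

for label, (lo, hi) in milestones_kbps.items():
    lo_x, hi_x = lo / BASELINE_KBPS, hi / BASELINE_KBPS
    factor = f"X {lo_x:,.0f}" if lo == hi else f"X {lo_x:,.0f}-{hi_x:,.0f}"
    print(f"{label:<32} {factor}")
```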

  8. HEP is Learning How to Use Gbps Networks Fully: a factor of ~50 gain in max. sustained TCP throughput in 2 years, on some US + transoceanic routes
  • 9/01: 105 Mbps with 30 streams SLAC-IN2P3; 102 Mbps in 1 stream CIT-CERN
  • 5/20/02: 450-600 Mbps SLAC-Manchester on OC12 with ~100 streams
  • 6/1/02: 290 Mbps Chicago-CERN, one stream on OC12
  • 9/02: 850, 1350, 1900 Mbps Chicago-CERN with 1, 2, 3 GbE streams on a 2.5G link
  • 11/02 [LSR]: 930 Mbps in 1 stream California-CERN and California-AMS; FAST TCP: 9.4 Gbps in 10 flows California-Chicago
  • 2/03 [LSR]: 2.38 Gbps in 1 stream California-Geneva (99% link utilization)
  • 5/03 [LSR]: 0.94 Gbps IPv6 in 1 stream Chicago-Geneva
  • TW & SC2003: 5.65 Gbps (IPv4), 4.0 Gbps (IPv6) in 1 stream over 11,000 km

  9. FAST TCP: Baltimore/Sunnyvale
  • Fast convergence to equilibrium
  • RTT estimation with a fine-grain timer
  • Delay monitoring in equilibrium
  • Pacing to reduce burstiness
  • Measurements 11/02, standard packet size, 4000 km path, utilization averaged over > 1 hr: ~88-95% average utilization with 1, 2, 7, 9 and 10 flows; 8.6 Gbps aggregate, 21.6 TB transferred in 6 hours; fair sharing and fast recovery
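
For readers unfamiliar with delay-based congestion control, the sketch below shows the kind of periodic window update the FAST TCP papers describe; it is a simplified illustration, not the production kernel code, and the parameter values are placeholders.

```python
# Simplified illustration of the FAST TCP window update
# (delay-based congestion control). Parameters are placeholders;
# the real implementation lives in the kernel and is far more involved.

def fast_tcp_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One periodic update of the congestion window.

    w        : current window (packets)
    base_rtt : minimum RTT observed (propagation delay estimate), seconds
    rtt      : current smoothed RTT measurement, seconds
    alpha    : target number of packets queued in the network
    gamma    : smoothing factor in (0, 1]
    """
    target = (base_rtt / rtt) * w + alpha        # equilibrium target window
    return min(2 * w, (1 - gamma) * w + gamma * target)

# The window grows while queueing delay (rtt - base_rtt) is small, and
# levels off once roughly `alpha` packets are buffered along the path.
w = 100.0
for step in range(5):
    w = fast_tcp_window_update(w, base_rtt=0.080, rtt=0.082)
    print(f"step {step}: window ~ {w:.0f} packets")
```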

  10. Fall 2003: Transatlantic Ultraspeed TCP Transfers. Throughput achieved: X50 in 2 years
  • Terabyte transfers by the Caltech-CERN team:
    • Nov 18: 4.00 Gbps IPv6 Geneva-Phoenix (11.5 kkm)
    • Oct 15: 5.64 Gbps IPv4 Palexpo-L.A. (10.9 kkm)
  • Across Abilene (Internet2) Chicago-LA, sharing with normal network traffic
  • Peaceful coexistence with a joint Internet2-Telecom World VRVS videoconference
  • Nov 19: 23+ Gbps TCP: Caltech, SLAC, CERN, LANL, UvA, Manchester
  • Partners: Juniper, HP, Level(3), Telehouse

  11. HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps. Continuing the trend: ~1000 times bandwidth growth per decade; we are rapidly learning to use multi-Gbps networks dynamically

  12. HENP Lambda Grids: Fibers for Physics
  • Problem: extract “small” data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores
  • Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time
  • Example: allow 800 seconds to complete the transaction (see the arithmetic sketch after this slide). Then:
      Transaction Size (TB)    Net Throughput (Gbps)
      1                        10
      10                       100
      100                      1000  (capacity of fiber today)
  • Summary: providing switching of 10 Gbps wavelengths within ~2-4 years, and Terabit switching within 5-8 years, would enable “Petascale Grids with Terabyte transactions”, to fully realize the discovery potential of major HENP programs, as well as other data-intensive research
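
The throughput column follows directly from the 800-second target; a small sketch of the arithmetic (the helper function is ours):

```python
# Required network throughput to move a transaction of a given size
# within the 800-second target quoted on the slide.
TARGET_SECONDS = 800

def required_gbps(transaction_tb: float) -> float:
    bits = transaction_tb * 1e12 * 8        # terabytes -> bits
    return bits / TARGET_SECONDS / 1e9      # bits/sec -> Gbps

for size_tb in (1, 10, 100):
    print(f"{size_tb:>4} TB in {TARGET_SECONDS} s -> {required_gbps(size_tb):,.0f} Gbps")
# 1 TB -> 10 Gbps, 10 TB -> 100 Gbps, 100 TB -> 1000 Gbps (1 Tbps)
```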

  13. National Light Rail Footprint (map of 15808 terminal, regen or OADM sites and fiber routes spanning SEA, POR, SAC, BOS, NYC, CHI, DEN, SVL, WDC, LAX, ATL, DAL and other cities)
  • NLR is starting up now
  • Initially 4 x 10 Gb wavelengths; future: to 40 x 10 Gb waves
  • Transition beginning now to optical, multi-wavelength R&E networks
  • Also note: XWIN (Germany); IEEAF/GEO plan for dark fiber in Europe

  14. GLIF Network 1Q2004: “Global Lambda Integrated Facility”
  • Map of lambda and IP service paths (mostly 2.5 and 10 Gbit/s, some 2x10 Gbit/s DWDM) interconnecting Stockholm (NorthernLight), New York (MANLAN), Chicago, Amsterdam (SURFnet), Dwingeloo (ASTRON/JIVE), Tokyo (WIDE and APAN), London (UKLight), Geneva (CERN) and Prague (CzechLight)
  • Links contributed by IEEAF, SURFnet, NSF and others; some 10 Gbit/s paths due to enter service 2/29

  15. Aarnet: SXTransport Project in 2004 • Connect Major Australian Universities to 10 Gbps Backbone • Two 10 Gbps Research Links to the US • Aarnet/USLIC Collaboration on Net R&D Starting Now

  16. GLORIAD: Global Optical Ring (US-Ru-Cn) “Little Gloriad” (OC3) Launched January 12; to OC192 in 2004

  17. Germany: 2003, 2004, 2005
  • GWIN connects 550 universities, labs and other institutions
  • Network snapshots shown: GWIN as of Q4/03, the GWIN plan for Q4/04, and XWIN in Q4/05 (dark fiber option)

  18. Classical and HENP Data Grids, and Now Service-Oriented Grids
  • The original Computational and Data Grid concepts are largely stateless, open systems: known to be scalable (analogous to the Web)
  • The classical Grid architecture has a number of implicit assumptions:
    • The ability to locate and schedule suitable resources within a tolerably short time (i.e. resource richness)
    • Short transactions with relatively simple failure modes
  • HENP Grids are data-intensive and resource-constrained:
    • 1000s of users competing for resources at 100s of sites
    • Resource usage governed by local and global policies
    • Long transactions; some long queues
  • Hence the HENP stateful, end-to-end monitored and tracked paradigm, adopted in OGSA and now in the WS-Resource Framework (see the sketch below)
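
As an illustration of what “stateful, end-to-end monitored and tracked” means in practice, here is a minimal sketch of a long-running transfer task carrying explicit state, policy and history information; the class and field names are ours, not those of any HENP Grid middleware.

```python
# Illustrative only: a long-running Grid transaction tracked end to end,
# in contrast to a stateless, fire-and-forget request. Names are ours.
from dataclasses import dataclass, field
from enum import Enum
from time import time


class TaskState(Enum):
    QUEUED = "queued"            # waiting behind other long transactions
    STAGING = "staging"          # data being staged from tape/disk
    TRANSFERRING = "transferring"
    DONE = "done"
    FAILED = "failed"


@dataclass
class TrackedTransfer:
    dataset: str
    size_tb: float
    source_site: str
    dest_site: str
    priority: int                # set by local and global policy
    state: TaskState = TaskState.QUEUED
    history: list = field(default_factory=list)

    def update(self, new_state: TaskState, note: str = "") -> None:
        # Every state change is recorded, so the VO can monitor and
        # re-plan the transaction end to end.
        self.history.append((time(), new_state, note))
        self.state = new_state


job = TrackedTransfer("cms/higgs-4mu-skim", 2.0, "CERN", "FNAL", priority=5)
job.update(TaskState.STAGING, "staging from tape robot")
job.update(TaskState.TRANSFERRING, "10 Gbps path allocated")
print(job.state, f"({len(job.history)} state changes recorded)")
```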

  19. The Grid Analysis Environment (GAE) • The GAE: key to “success” or “failure” for physics & Grids in the LHC era: • 100s - 1000s of tasks, with a wide range of computing, data and network resource requirements, and priorities

  20. The Move to OGSA and Then Managed Integration Systems
  • Evolution over time, with increasing functionality and standardization:
    • Custom solutions
    • De facto standards: X.509, LDAP, FTP, ...; GGF: GridFTP, GSI; Globus Toolkit
    • Open Grid Services Architecture / Web Services Resource Framework: Web services + ...; GGF: OGSI, ... (+ OASIS, W3C); multiple implementations, including the Globus Toolkit
    • Application-specific services and ~integrated systems: stateful, managed

  21. Managing Global Systems: Dynamic Scalable Services Architecture MonALISA: http://monalisa.cacr.caltech.edu

  22. Dynamic Distributed Services Architecture (DDSA)
  • Diagram components: station servers, lookup and discovery services, service listeners, registration, remote notification, proxy exchange
  • “Station Server” service engines at sites host “Dynamic Services”
  • Auto-discovering, collaborative; scalable to thousands of service instances
  • Servers interconnect dynamically and form a robust fabric
  • Service agents: goal-oriented, autonomous, adaptive
  • Adaptable to Web services: many platforms and working environments (also mobile); a sketch of the registration/discovery pattern follows below
  • See http://monalisa.cacr.caltech.edu and http://diamonds.cacr.caltech.edu
  • Caltech / UPB (Romania) / NUST (Pakistan) collaboration
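
The register/discover/notify pattern listed above (station servers register with lookup services; clients discover them and subscribe for remote notifications) is sketched below; the class and method names are invented for illustration, not MonALISA/DDSA code, which is Java/Jini-based and far more elaborate.

```python
# Illustrative sketch of the register/discover/notify pattern described
# above. All names are ours.

class LookupService:
    def __init__(self):
        self.registry = {}           # service name -> proxy
        self.listeners = []          # callbacks for remote notification

    def register(self, name, proxy):
        self.registry[name] = proxy
        for callback in self.listeners:
            callback("registered", name)     # remote notification

    def discover(self, name_prefix):
        return {n: p for n, p in self.registry.items()
                if n.startswith(name_prefix)}

    def subscribe(self, callback):
        self.listeners.append(callback)


class StationServer:
    """A site-level services engine hosting dynamic services."""
    def __init__(self, site, lookup):
        self.site = site
        lookup.register(f"monitor/{site}", self)   # auto-register on startup

    def status(self):
        return {"site": self.site, "load": 0.42}   # placeholder measurement


lookup = LookupService()
lookup.subscribe(lambda event, name: print(f"{event}: {name}"))
for site in ("CERN", "Caltech", "FNAL"):
    StationServer(site, lookup)

print(lookup.discover("monitor/")["monitor/CERN"].status())
```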

  23. GAE Architecture
  • Diagram: Analysis Clients connect over HTTP, SOAP or XML-RPC to a Grid Services Web Server, behind which sit the scheduler, catalogs (metadata, virtual data, replica), fully/partially abstract and fully concrete planners, applications, data management, monitoring, a grid execution priority manager and a grid-wide execution service
  • Analysis clients talk standard protocols to the “Grid Services Web Server”, a.k.a. the Clarens data/services portal
  • A simple Web service API allows analysis clients (simple or complex) to operate in this architecture; typical clients: ROOT, Web browser, IGUANA, COJAC
  • The Clarens portal hides the complexity of the Grid services from the client, but can expose it in as much detail as required, e.g. for monitoring
  • Key features: global scheduler, catalogs, monitoring, and grid-wide execution service; a sketch of a client call appears below
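
Since the portal is reached over standard protocols such as XML-RPC, a client call would look roughly like the sketch below; the portal URL and method names here are placeholders for illustration only, not the actual Clarens API.

```python
# Rough sketch of how an analysis client might call a Clarens-style
# data/services portal over XML-RPC. The URL and method names are
# placeholders, not the real Clarens interface.
import xmlrpc.client

portal = xmlrpc.client.ServerProxy("https://clarens.example.org:8443/xmlrpc")

try:
    # Hypothetical calls: look up replicas of a dataset, then submit
    # an analysis task to the grid-wide execution service.
    replicas = portal.catalog.find_replicas("cms/higgs-4mu-skim")
    job_id = portal.execution.submit({
        "dataset": "cms/higgs-4mu-skim",
        "executable": "analysis.C",
        "priority": 3,
    })
    print("replicas:", replicas, "job:", job_id)
except xmlrpc.client.Fault as err:
    print("portal returned an error:", err)
```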

  24. GAE Architecture: “Structured Peer-to-Peer”
  • The GAE, based on Clarens and Web services, easily allows a “Peer-to-Peer” configuration to be built, with the associated robustness and scalability features
  • Flexible: allows easy creation, use and management of highly complex VO structures
  • A typical Peer-to-Peer scheme would have the Clarens servers act as “Global Peers” that broker GAE client requests among all the Clarens servers available worldwide

  25. UltraLight Collaboration: http://ultralight.caltech.edu (riding the National Lambda Rail footprint)
  • Partners: Caltech, UF, FIU, UMich, SLAC, FNAL, MIT/Haystack, CERN, UERJ (Rio), NLR, CENIC, UCAID, TransLight, UKLight, NetherLight, UvA, UCLondon, KEK, Taiwan; industry: Cisco, Level(3)
  • Integrated hybrid experimental network, leveraging transatlantic R&D network partnerships; packet-switched + dynamic optical paths
  • 10 GbE across the US and the Atlantic: NLR, DataTAG, TransLight, NetherLight, UKLight, etc.; extensions to Japan, Taiwan, Brazil
  • End-to-end monitoring; real-time tracking and optimization; dynamic bandwidth provisioning
  • Agent-based services spanning all layers of the system, from the optical cross-connects to the applications

  26. ICFA Standing Committee on Interregional Connectivity (SCIC)
  • Created by ICFA in July 1998 in Vancouver; following ICFA-NTF
  • Charge:
    • Make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe (and the network requirements of HENP)
    • As part of the process of developing these recommendations, the committee should monitor traffic, keep track of technology developments, periodically review forecasts of future bandwidth needs, and provide early warning of potential problems
    • Create subcommittees when necessary to meet the charge
    • The chair of the committee should report to ICFA once per year, at its joint meeting with laboratory directors (today)
  • Representatives: major labs, ECFA, ACFA, North American users, South America

  27. ICFA SCIC in 2002-2004: A Period of Intense Activity
  • Strong, continuing focus on the Digital Divide
  • Five reports presented to ICFA in 2003 (see http://cern.ch/icfa-scic):
    • Main Report: “Networking for HENP” [H. Newman et al.]
    • Monitoring WG Report [L. Cottrell]
    • Advanced Technologies WG Report [R. Hughes-Jones, O. Martin et al.]
    • Digital Divide Report [A. Santoro et al.]
    • Digital Divide in Russia Report [V. Ilyin]
  • 2004 reports in progress, including short reports on national and regional network infrastructures and initiatives; presentation to ICFA February 13, 2004

  28. SCIC Report 2003: General Conclusions
  • The scale and capability of networks, their pervasiveness and range of applications in everyday life, and HENP’s dependence on networks for its research, are all increasing rapidly
  • However, as the pace of network advances continues to accelerate, the gap between the economically “favored” regions and the rest of the world is in danger of widening
  • We must therefore work to close the Digital Divide, to make physicists from all world regions full partners in their experiments and in the process of discovery
  • This is essential for the health of our global experimental collaborations, our plans for future projects, and our field

  29. Work on the Digital Divide: Several Perspectives
  • Work on policies and/or pricing: Pakistan, India, Brazil, China, SE Europe, …
  • Share information: comparative performance and pricing
  • Find ways to work with vendors, NRENs, and/or governments
  • Exploit model cases: e.g. Poland, Slovakia, Czech Republic
  • Inter-regional projects:
    • South America: CHEPREO (US-Brazil); EU @LIS project
    • GLORIAD: Russia-China-US optical ring
    • Virtual SILK Highway project (DESY): FSU satellite links
  • Help with modernizing the infrastructure: design, commissioning, development
  • Provide tools for effective use: monitoring, collaboration
  • Workshops and tutorials/training sessions, for example the Digital Divide and HEPGrid Workshop, UERJ, Rio, February 2004
  • Participate in standards development; open tools: advanced TCP stacks, Grid systems

  30. ICTP 2nd Open Round Table on Developing Countries Access to Scientific Information
  STATEMENT: AFFORDABLE ACCESS TO THE INTERNET FOR RESEARCH AND LEARNING
  “Scholars from across the world meeting at the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste [10/2003] were concerned to learn of the barrier to education and research caused by the high cost of Internet access in many countries. The Internet enables the use of content which is vital for individuals and for institutions engaged in teaching, learning and research. In many countries use of the Internet is severely restricted by the high telecommunications cost, leading to inequality in realising the benefits of education and research. Research staff and students in countries with liberal telecommunications policies favouring educational use are gaining social and economic advantage over countries with restrictive, high-cost policies. The potential benefits of access to the Internet are not available to all. The signatories to this message invite scholars in every country to join them in expressing concern to governments and research funding agencies at the effect of high telecommunications costs upon individuals and institutions undertaking teaching, learning and research. The situation in many countries could be improved through educational discounts on normal telecommunications costs, or through the lifting of monopolies. It is for each country to determine its own telecommunications policies but the need for low-cost access to the Internet for educational purposes is a need which is common to the whole of humankind.”

  31. History: Throughput Quality Improvements from the US
  • Progress, but the Digital Divide is being maintained: regions lag the US by 1.5-8 years; ~60% annual improvement, a factor of ~100 per 10 years
  • S.E. Europe, Russia: catching up
  • Latin America, Middle East, China, Africa: keeping up
  • India: falling behind
  • Bandwidth of TCP < MSS / (RTT * sqrt(Loss)) (1)
  (1) “The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm”, Mathis, Semke, Mahdavi, Ott, Computer Communication Review 27(3), July 1997
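
The Mathis et al. bound quoted above is easy to evaluate; the sketch below is our own helper, with illustrative (not measured) RTT and loss values for a long transatlantic path, and shows why even small packet loss caps standard TCP far below Gbps rates.

```python
# Evaluate the Mathis et al. bound: TCP throughput < MSS / (RTT * sqrt(loss)).
# The RTT and loss values below are illustrative, not measurements.
from math import sqrt

def mathis_throughput_mbps(mss_bytes: float, rtt_s: float, loss: float) -> float:
    """Upper bound on standard-TCP throughput, in Mbps."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

# Example: 1460-byte segments, 120 ms transatlantic RTT.
for loss in (1e-4, 1e-6):
    print(f"loss {loss:.0e}: < {mathis_throughput_mbps(1460, 0.120, loss):.1f} Mbps")
# loss 1e-04: < ~9.7 Mbps;  loss 1e-06: < ~97.3 Mbps
```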

  32. Current State – June 2003 (Max. Throughput in Mbps)
  • Within-region performance is improving: e.g. Canada/US-North America, Hungary-SE Europe, Europe-Europe, Japan-East Asia, Australia-Australia, Russia-Russia
  • Africa, the Caucasus, and Central & South Asia are all bad
  • Legend: Bad < 200 kbits/s (less than DSL); Acceptable 200-1000 kbits/s; Good > 1000 kbits/s
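
The legend maps directly onto a simple threshold classification; a minimal sketch (thresholds from the slide, function name and sample values ours):

```python
# Classify a measured throughput using the thresholds from the slide's legend.
def throughput_category(kbits_per_s: float) -> str:
    if kbits_per_s < 200:
        return "Bad (< 200 kbits/s, below DSL)"
    if kbits_per_s < 1000:
        return "Acceptable (200-1000 kbits/s)"
    return "Good (> 1000 kbits/s)"

for measured in (120, 650, 4200):
    print(f"{measured:>5} kbits/s -> {throughput_category(measured)}")
```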

  33. DAI: State of the World

  34. Digital Access Index Top Ten

  35. DAI: State of the World

  36. Brazil: RNP in Early 2004

  37. Dai Davies SERENATE Workshop Feb. 2003

  38. Virtual Silk Highway: The Silk Countries

  39. Virtual SILK Highway: Architectural Overview
  • Hub earth station at DESY with access to the European NRENs and the Internet via GEANT, providing international Internet access directly
  • National earth station at each partner site, operated by DESY, providing international access
  • Additional earth stations from other sources – none yet
  • SCPC up-link, common down-link using DVB
  • Routers at each partner site, linked on one side to the satellite channel and on the other side to the NREN

  40. Bandwidth Plan – as of 3/03
      From    To      MHz   DVB Mbps   SCPC Mbps   $K
      08/02   11/02   2.9   3.1        0.77         20
      12/02   05/03   5.4   6.9        2.40         92
      06/03   11/03   7.5   9.5        3.32        136
      12/03   05/04   9.4   12         4.10        175
      06/04   11/04   12    16         4.90        220
      12/04   07/05   15    19         6.50        379
      Total                                       1022

  41. Progress in Slovakia 2002-2004 (January 2004)

  42. VRVS (Version 3): a meeting spanning 8 time zones, with VRVS on Windows
  • Participants shown: KEK (JP), Caltech (US), SLAC (US), RAL (UK), CERN (CH), AMPATH (US), Brazil, Pakistan, Canada
  • 80+ reflectors; 24.5k hosts worldwide; users in 99 countries

  43. Study into European Research and Education Networking as Targeted by eEurope (www.serenate.org)
  SERENATE is the name of a series of strategic studies into the future of research and education networking in Europe, addressing the local (campus networks), national (national research & education networks), European and intercontinental levels. The SERENATE studies bring together the research and education networks of Europe, national governments and funding bodies, the European Commission, traditional and “alternative” network operators, equipment manufacturers, and the scientific and education community as the users of networks and services.
  Summary and conclusions by D.O. Williams, CERN

  44. Optics and Fibres [Message to NRENs, or National Initiatives]
  • If there is one single technical lesson from SERENATE, it is that transmission is moving from the electrical domain to the optical
  • The more you look at underlying costs, the more you see the need for users to get access to fibre
  • When there is good competition, users can still lease traditional communications services (bandwidth) on an annual basis; but without enough competition, prices go through the roof
  • A significant “divide” exists inside Europe, with the worst-off countries [Macedonia, B-H, Albania, etc.] thousands of times worse off than the best; also, many of the 10 new EU members are ~5X worse off than the 15 present members
  • Our best advice has to be “if you’re in a mess, you must get access to fibre”; also try to lobby politicians to introduce real competition
  • In Serbia – still a full telecoms monopoly – the two ministers talked and the research community was given a fibre pair all around Serbia!

  45. HEPGRID and Digital Divide Workshop, UERJ, Rio de Janeiro, Feb. 16-20, 2004
  Theme: Global Collaborations, Grids and Their Relationship to the Digital Divide
  ICFA, understanding the vital role of these issues for our field’s future, commissioned the Standing Committee on Inter-regional Connectivity (SCIC) in 1998 to survey and monitor the state of the networks used by our field, and identify problems. For the past three years the SCIC has focused on understanding and seeking the means of reducing or eliminating the Digital Divide, and proposed in ICFA that these issues, as they affect our field of High Energy Physics, be brought to our community for discussion. This led to ICFA’s approval, in July 2003, of the Digital Divide and HEP Grid Workshop.
  More information: http://www.uerj.br/lishep2004
  Sponsors: CLAF, CNPq, FAPERJ, UERJ

  46. Networks, Grids and HENP
  • Network backbones and major links used by HENP experiments are advancing rapidly, to the 10 G range in < 2 years; much faster than Moore’s Law
  • Continuing a trend: a factor of ~1000 improvement per decade; a new DOE and HENP roadmap
  • Transition to a community-owned and operated infrastructure for research and education is beginning (NLR, USAWaves)
  • HENP is learning to use long-distance 10 Gbps networks effectively; 2002-2003 developments: to 5+ Gbps flows over 11,000 km
  • Removing regional, last mile, and local bottlenecks and compromises in network quality is now on the critical path, in all world regions
  • Digital Divide: network improvements are especially needed in SE Europe, South America, SE Asia, and Africa
  • Work in concert with Internet2, TERENA, APAN, AMPATH, DataTAG, the Grid projects and the Global Grid Forum

  47. Some Extra Slides Follow: Computing Model Progress; CMS Internal Review of Software and Computing
