
Understanding Matter: The Large Hadron Collider (LHC) and Grid Computing

Explore the structure of matter with the Large Hadron Collider (LHC) at CERN. Discover the collaborative international effort and advanced networking and computing systems involved in this groundbreaking research.


Presentation Transcript


  1. LHC, Networking and Grids
  David Foster, Networks and Communications Systems Group Head, david.foster@cern.ch
  APAN 2004, Cairns

  2. CERN: Facts
  • Geneva could be contained within the LHC (Large Hadron Collider) ring.
  • Primary objective: understand the structure of matter.
  • Instruments: accelerators and detectors.
  • CERN: a European organisation with 20 member states, founded in 1954 by 12 countries; a real example of international collaboration, a "world lab".
  • The CERN site: > 60 km², spanning the Swiss/French border.

  3. CERN site (aerial photo): next to Lake Geneva, with downtown Geneva and Mont Blanc (4810 m) visible.

  4. LHC Accelerator
  • Circumference: 27 km; depth varies from 50 to 175 m.
  • Energy: 450 GeV at injection to 7 TeV per beam.
  • > 1200 superconducting magnets, up to 8.36 tesla.
  • 24 km of cryostats at 1.9 K.
  • 100 t of liquid helium recycled daily; 60 t of liquid nitrogen daily.

  5. 40 MHz (40 TB/sec) → level 1: special hardware → 75 kHz (75 GB/sec) → level 2: embedded processors → 5 kHz (5 GB/sec) → level 3: PCs → 100 Hz (100 MB/sec) → data recording & offline analysis.
  • ~15 petabytes of data each year; analysis will need the computing power of ~100,000 of today's fastest PC processors.
  • Scale of one year's data (figure): a CD stack holding one year of LHC data would be ~20 km high; for comparison, a balloon at 30 km, Concorde at 15 km, Mont Blanc at 4.8 km.
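The rate and bandwidth figures above imply a roughly constant event size through the trigger chain, with the reduction coming from discarding events rather than shrinking them. A quick consistency check in illustrative Python (not part of the original talk):

    # Back-of-the-envelope check of the trigger chain figures quoted above.
    # Each stage is (event rate in Hz, data rate in bytes/second).
    stages = {
        "collisions":    (40e6, 40e12),   # 40 MHz, 40 TB/s
        "after level 1": (75e3, 75e9),    # 75 kHz, 75 GB/s
        "after level 2": (5e3,  5e9),     # 5 kHz, 5 GB/s
        "after level 3": (100,  100e6),   # 100 Hz, 100 MB/s (recorded)
    }

    for name, (rate_hz, bytes_per_s) in stages.items():
        event_size_mb = bytes_per_s / rate_hz / 1e6
        print(f"{name:>14}: {event_size_mb:.1f} MB per event")

    # Overall online reduction factor: 40 MHz down to 100 Hz.
    print("reduction factor:", int(40e6 / 100))  # 400000

Every stage works out to the same ~1 MB per event, and the online selection rejects all but one event in roughly 400,000.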

  6. The Large Hadron Collider (LHC) has 4 detectors: ATLAS, CMS, ALICE and LHCb.
  Requirements for world-wide data analysis:
  • Storage: raw recording rate of 0.1-1 GB/s, accumulating data at 5-8 petabytes/year (plus copies), with 10 petabytes of disk.
  • Processing: 100,000 of today's fastest processors.
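As a rough illustration of how the recording rate maps onto the quoted yearly volume, the sketch below assumes about 10^7 seconds of data-taking per year; that live time is an assumption for the example, not a figure from the slide:

    # Rough yearly raw-data volume implied by the quoted recording rates.
    # The live time of ~1e7 seconds/year is an assumption for illustration.
    LIVE_SECONDS_PER_YEAR = 1e7

    for rate_gb_per_s in (0.1, 1.0):
        volume_pb = rate_gb_per_s * 1e9 * LIVE_SECONDS_PER_YEAR / 1e15
        print(f"{rate_gb_per_s:.1f} GB/s -> ~{volume_pb:.0f} PB of raw data per year")

Under that assumption the 0.1-1 GB/s range gives roughly 1-10 PB of raw data per year, which brackets the 5-8 PB/year quoted above before copies are counted.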

  7. Main Internet connections at CERN (network diagram). CERN connects to: the World Health Organization (WHO) and other mission-oriented partners; IN2P3; SWITCH, the Swiss national research network; general-purpose academic/research and commodity Internet connections to Europe, the USA and the rest of the world via GEANT (2.5/10 Gbps); USLIC to the USA (10 Gbps); the CIXP for commercial traffic; NetherLight (2.5 Gbps); and the ATRIUM/VTHD network-research infrastructure. Individual links range from 45 Mbps to 10 Gbps; total external capacity is growing from ~25 Gbps (2003) to ~40 Gbps (2004).

  8. CERN's Distributed Internet Exchange Point (CIXP)
  • Telecom operators & dark fibre providers: Cablecom, COLT, France Telecom, FibreLac/Intelcom, Global Crossing, LDCom, Deutsche Telekom/T-Systems, Interoute, KPN, MCI/Worldcom, SIG, Sunrise, Swisscom (Switzerland), Swisscom (France), Thermelec, VTX.
  • Internet service providers include: Infonet, AT&T Global Network Services, Cablecom, Callahan, Colt, DFI, Deckpoint, Deutsche Telekom, Easynet, FibreLac, France Telecom/OpenTransit, Global-One, InterNeXt, IS-Productions, LDcom, Nexlink, PSI Networks (IProlink), MCI/Worldcom, Petrel, SIG, Sunrise, IP-Plus, VTX/Smartphone, UUnet, Vianetworks.
  • Others: SWITCH, Swiss Confederation, Conseil General de Haute Savoie.
  • Diagram: telecom operators bring many ISPs into the CIXP, which connects to the CERN LAN through the CERN firewall.

  9. Virtual Computing Centre
  • The resources are spread throughout the world at collaborating centres and made available through grid technologies.
  • The user sees the image of a single cluster of CPU and disk, and does not need to know where the data is, where the processing capacity is, how things are interconnected, or the details of the different hardware; nor is the user concerned by the local policies of the equipment owners and managers. (A minimal brokering sketch follows.)
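To make the abstraction concrete, here is a minimal, hypothetical sketch of a broker that hides resource location from the user. It is not LCG middleware: the Site class, the site names and the submit function are all invented for illustration.

    # Illustrative sketch only: a toy "grid broker" that hides where data and
    # CPU capacity live. All names here are hypothetical, not real LCG APIs.
    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        free_cpus: int
        datasets: set[str]

    SITES = [
        Site("centre-a", free_cpus=120, datasets={"run42"}),
        Site("centre-b", free_cpus=800, datasets={"run42", "run43"}),
        Site("centre-c", free_cpus=20,  datasets={"run43"}),
    ]

    def submit(dataset: str, cpus_needed: int) -> str:
        """Pick a site that already holds the dataset and has spare CPUs.

        The user only names the dataset and the CPU requirement; the broker
        decides where the job actually runs.
        """
        candidates = [s for s in SITES
                      if dataset in s.datasets and s.free_cpus >= cpus_needed]
        if not candidates:
            raise RuntimeError("no suitable site available")
        best = max(candidates, key=lambda s: s.free_cpus)
        best.free_cpus -= cpus_needed
        return best.name

    print(submit("run42", cpus_needed=50))   # prints "centre-b" in this toy setup

The point of the sketch is the interface: the user names a dataset and a CPU requirement, and never sees which centre actually runs the work.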

  10. The virtual LHC Computing Centre (diagram): collaborating computer centres joined by the Grid, presented to each experiment as its own virtual organisation (e.g. the ATLAS VO and the CMS VO).

  11. Deploying the LHC Grid (diagram of the tiered LHC Computing Centre, les.robertson@cern.ch):
  • CERN as Tier 0, also hosting a Tier 1.
  • Tier 1 centres in France, the UK, the USA, Italy, Germany, Japan and possibly Taipei.
  • Tier 2 centres serving regional groups of labs and universities, Tier 3 resources in physics departments, and individual desktops.
  • Overlaid grids for regional groups and for physics study groups. (An illustrative data-structure sketch of the hierarchy follows.)
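The tier model is essentially a tree. Purely as an illustration, here it is expressed as a plain data structure; the site names and the particular split below are placeholders, not the actual 2004 deployment plan.

    # Illustrative sketch of the tiered model as a nested dictionary.
    # Site names below the Tier-1 level are placeholders, not real centres.
    lhc_grid = {
        "Tier 0": {
            "CERN": {
                "Tier 1": {
                    "France": {"Tier 2": ["uni-a", "lab-a"]},
                    "UK":     {"Tier 2": ["uni-b"]},
                    "USA":    {"Tier 2": ["lab-b", "uni-c"]},
                    "Italy":  {"Tier 2": ["uni-d"]},
                },
            },
        },
    }

    def count_tier2(tree: dict) -> int:
        """Walk the nested structure and count Tier-2 sites."""
        total = 0
        for key, value in tree.items():
            if key == "Tier 2":
                total += len(value)
            elif isinstance(value, dict):
                total += count_tier2(value)
        return total

    print(count_tier2(lhc_grid))  # 6 placeholder Tier-2 sites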

  12. The Goal of the LHC Computing Grid Project (LCG)
  • To help the experiments' computing projects prepare, build and operate the computing environment needed to manage and analyse the data coming from the detectors.
  • Phase 1 (2002-05): prepare and deploy a prototype of the environment for LHC computing.
  • Phase 2 (2006-08): acquire, build and operate the LHC computing service. (Slide credit: matthias.kasemann@fnal.gov)

  13. Modes of Use
  Connectivity requirements are subdivided by usage pattern:
  • "Buffered real-time" for the T0 to T1 raw data transfer.
  • "Peer services" between T1-T1 and T1-T2 for the background distribution of data products.
  • "Chaotic" for the submission of analysis jobs to T1 and T2 centres and for "on-demand" data transfer. (A toy classifier along these lines is sketched below.)
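As an illustration only: the three categories come from the slide, but the classification rules, the tier-number encoding and the function name below are invented for the example.

    # Toy classifier for the three usage patterns named on the slide.
    # The tier encoding, the rules and the function name are illustrative only.
    from enum import Enum

    class Mode(Enum):
        BUFFERED_REAL_TIME = "buffered real-time"  # T0 -> T1 raw data
        PEER_SERVICES = "peer services"            # T1 <-> T1, T1 -> T2 products
        CHAOTIC = "chaotic"                        # analysis jobs, on-demand data

    def classify(src_tier: int, dst_tier: int, on_demand: bool) -> Mode:
        if src_tier == 0 and dst_tier == 1:
            return Mode.BUFFERED_REAL_TIME
        if not on_demand and {src_tier, dst_tier} <= {1, 2}:
            return Mode.PEER_SERVICES
        return Mode.CHAOTIC

    print(classify(0, 1, on_demand=False))  # Mode.BUFFERED_REAL_TIME
    print(classify(1, 2, on_demand=False))  # Mode.PEER_SERVICES
    print(classify(2, 2, on_demand=True))   # Mode.CHAOTIC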

  14. T0-T1 Buffered Real Time Estimates (the estimates themselves were shown in a table on the slide and are not reproduced in this transcript).
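Since the slide's numbers are not in the transcript, the sketch below only illustrates how such a per-link estimate can be put together; the raw rate, the number of Tier-1 centres and the overhead factor are all assumptions, not figures from the talk.

    # Illustration only: how a T0 -> T1 bandwidth estimate might be derived.
    # Every number below is an assumption made for the sake of the example.
    RAW_RATE_GBPS = 8.0   # assumed aggregate raw-data rate out of Tier 0 (gigabit/s)
    N_TIER1 = 10          # assumed number of Tier-1 centres sharing the data
    OVERHEAD = 1.5        # assumed headroom for protocol overhead and catch-up

    per_t1_gbps = RAW_RATE_GBPS / N_TIER1 * OVERHEAD
    print(f"~{per_t1_gbps:.1f} Gbit/s sustained per Tier-1 link (illustrative)")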

  15. Peer Services
  • Will be largely bulk data transfers: scheduled data "redistribution".
  • Need a very good, reliable, efficient file transfer service; much work going on with GridFTP.
  • Maybe a candidate for a non-IP service (Fibre Channel over SONET).
  • Could be provided by a switched infrastructure: circuit-based optical switching, on demand or static.
  • "Well known" and "trusted" peer end points (hardware and software), with the opportunity to bypass firewall issues. (A toy retry-queue sketch of such a transfer service follows.)
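To show what "reliable, efficient file transfer" means operationally, here is a minimal, hypothetical sketch of a retry queue wrapped around an arbitrary transfer step. It is not GridFTP or any actual LCG service; copy_file is a stand-in for whatever transfer tool is used.

    # Illustrative sketch of a scheduled bulk-transfer queue with retries.
    # copy_file is a placeholder for a real transfer tool (e.g. a GridFTP client).
    import time
    from collections import deque

    def copy_file(source: str, destination: str) -> bool:
        """Placeholder transfer; a real service would invoke an external tool."""
        print(f"copying {source} -> {destination}")
        return True  # pretend the transfer succeeded

    def run_transfers(jobs: list[tuple[str, str]], max_retries: int = 3) -> None:
        queue = deque((src, dst, 0) for src, dst in jobs)
        while queue:
            src, dst, attempts = queue.popleft()
            if copy_file(src, dst):
                continue
            if attempts + 1 < max_retries:
                time.sleep(2 ** attempts)          # back off before retrying
                queue.append((src, dst, attempts + 1))
            else:
                print(f"giving up on {src} after {max_retries} attempts")

    run_transfers([("t1-a:/data/run42.raw", "t2-b:/data/run42.raw")])

The design point is that reliability lives in the queue and retry policy, independent of which underlying transfer protocol is plugged in.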

  16. Some Challenges
  • Realistic bandwidth estimates, given the chaotic nature of the requirements.
  • End-to-end performance, given the whole chain involved (disk-bus-memory-bus-network-bus-memory-bus-disk).
  • Provisioning over complex network infrastructures (GEANT, NRENs, etc.).
  • Cost model for the options (packet + SLAs, circuit switched, etc.).
  • Consistent performance (dealing with firewalls).
  • Merging leading-edge research with production networking. (The bottleneck arithmetic behind the end-to-end point is sketched below.)
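The end-to-end point is essentially a bottleneck argument: the achievable transfer rate is the minimum over every stage in the chain, so a fast network alone is not enough. A small illustrative calculation with made-up per-stage throughputs (not measurements from the talk):

    # Illustration of the end-to-end bottleneck: the transfer can go no faster
    # than the slowest stage in the chain. The MB/s figures are made up.
    chain = {
        "source disk": 60,
        "source host bus/memory": 400,
        "network path": 120,
        "destination host bus/memory": 400,
        "destination disk": 50,
    }

    bottleneck = min(chain, key=chain.get)
    print(f"end-to-end limit ~{chain[bottleneck]} MB/s, set by the {bottleneck}")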

  17. Thank You! David Foster CERN IT-CS
