PIC, the Spanish LHC Tier-1, ready for data taking. EGEE09, Barcelona, 21-Sep-2009. Gonzalo Merino, merino@pic.es, http://lhcatpic.blogspot.com
The LHC
World's largest scientific machine: a 27 km ring proton accelerator, 100 m underground, with superconducting magnets at −271 °C.
Basic science: it tries to answer fundamental questions such as: What are things made of? What did the universe look like 1 nanosecond after the Big Bang?
Four detectors located at the p-p collision points. Extremely complex devices: collaborations of O(1000) people, >100 million sensors, generating Petabytes/s of data.
WLCG
The LHC Computing Grid project started in 2002.
• Phase 1 (2002–2005): tests and development, build a service prototype.
• Phase 2 (2006–2008): deploy the initial LHC computing service.
Purpose: "to provide the computing resources needed to process and analyse the data gathered by the LHC Experiments".
Enormous data volumes and computing capacity:
• 15 PB/yr of RAW data, >50 PB/year overall.
• Over a lifetime of 10–15 years this reaches the Exabyte scale (see the back-of-envelope check below).
A scalability challenge: it was clear that a distributed infrastructure was needed.
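A back-of-envelope check of the Exabyte-scale claim, using only the figures quoted on this slide (decimal units assumed, 1 EB = 1000 PB):

```python
# Back-of-envelope check of the data volumes quoted on this slide
# (>50 PB/year overall, lifetime 10-15 years).
PB_PER_YEAR_TOTAL = 50   # lower bound quoted on the slide

for years in (10, 15):
    total_pb = PB_PER_YEAR_TOTAL * years
    print(f"{years} years -> >= {total_pb} PB  (~{total_pb / 1000:.2f} EB)")
# 10 years -> >= 500 PB (~0.50 EB); 15 years -> >= 750 PB (~0.75 EB)
```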
Grid Infrastructure
Since the early 2000s, large international projects have been funded to deploy production Grid infrastructures for scientific research. Luckily for WLCG, these projects carried much of the load of building the infrastructure: WLCG is built on big multi-science production Grid infrastructures, EGEE and OSG.
LHC: a big EGEE user
Monthly CPU walltime usage per scientific discipline, from the EGEE Accounting Portal, shows that the LHC is the biggest user of EGEE. There are many ways one can use EGEE: how is the LHC using the Grid?
Tiered Structure
The tiered model comes from the early days (1998, MONARC), when it was mainly motivated by the limited network connectivity among sites. Today the network is not the issue, but the tiered model is still used to organise work and data flows.
Tier-0 at CERN:
• DAQ and prompt reconstruction.
• Long-term data curation.
Tier-1 (11 centres), online to the DAQ 24x7:
• Massive data reconstruction.
• Long-term storage of a copy of the RAW data.
Tier-2 (>150 centres):
• End-user analysis and simulation.
Computing models: the 4 LHC experiments do not all use the tiered structure in the same way. The roles above are sketched below.
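Purely as an illustration, the tier roles listed on this slide written down as a small data structure (this is not any experiment's actual computing model, just the slide's content restated):

```python
# Illustrative summary of the tiered roles listed on this slide.
WLCG_TIERS = {
    "Tier-0": {
        "sites": 1,          # CERN
        "roles": ["DAQ and prompt reconstruction", "long-term data curation"],
    },
    "Tier-1": {
        "sites": 11,
        "roles": ["massive data reconstruction",
                  "long-term storage of a RAW data copy",
                  "online to the DAQ, 24x7"],
    },
    "Tier-2": {
        "sites": 150,        # ">150 centres"
        "roles": ["end-user analysis", "simulation"],
    },
}

for tier, info in WLCG_TIERS.items():
    print(f"{tier} ({info['sites']}+ sites): {'; '.join(info['roles'])}")
```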
Distribution of resources
Experiment computing requirements for the 2009–2010 run at the different WLCG Tiers: more than 80% of the resources are outside CERN. The Grid MUST work.
The operations challenge: Scalability, Reliability, Performance.
Scalability
The computing and storage capacity needs of WLCG are enormous: ~100,000 cores today, and once the LHC starts, the growth rate will be impressive.
Reliability
Setting up and deploying a robust operational model is crucial for building reliable services on the Grid. One of the key tools for WLCG comes from EGEE: the Service Availability Monitor (SAM). The kind of metric derived from it is sketched below.
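As an illustration of the metrics SAM test results feed into, here is a minimal sketch assuming the commonly quoted definitions (availability = time OK / total time; reliability = time OK / (total time minus scheduled downtime)); the production SAM algorithms may differ in detail:

```python
# Sketch of availability/reliability metrics derived from SAM-style tests.
# Definitions assumed here; the real algorithm may differ.

def site_metrics(samples):
    """samples: list of 'ok' | 'fail' | 'sched_down' for equal time bins."""
    total = len(samples)
    ok = sum(1 for s in samples if s == "ok")
    sched = sum(1 for s in samples if s == "sched_down")
    availability = ok / total
    reliability = ok / (total - sched) if total > sched else 1.0
    return availability, reliability

# Example: 30 days of daily results, 2 days scheduled downtime, 1 failure.
samples = ["ok"] * 27 + ["fail"] + ["sched_down"] * 2
a, r = site_metrics(samples)
print(f"availability = {a:.1%}, reliability = {r:.1%}")  # 90.0%, 96.4%
```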
Improving reliability Track site status with time…
Improving reliability ... publish rankings
Improving reliability
... and see how site reliability goes up! An increasing number of more realistic sensors, plus a powerful monitoring framework that ensures peer pressure, guarantees that the reliability of the WLCG service will keep improving.
Performance: data volumes 2008 2009 150 TB/day CMS has been transferring 100 – 200 TB per day on the Grid since more than 2 years Last June ATLAS added 4 PB in 11 days to their total of 12 PB on the Grid
http://www.pic.es
Port d'Informació Científica (PIC), created in June 2003 under a collaboration agreement between 4 partners: DEiU, CIEMAT, UAB, IFAE. A data centre supporting scientific research that involves the analysis of massive sets of distributed data.
http://www.pic.es
WLCG Tier-1 since Dec 2003, supporting ATLAS, CMS and LHCb:
• Targeting to provide ~5% of the total Tier-1 capacity.
Computing services for other applications besides the LHC:
• Astroparticles (MAGIC datacenter)
• Cosmology (DES, PAU)
• Medical imaging (neuroradiology)
The Tier-1 represents >80% of current resources. Goal: technology transfer from the LHC Tier-1 to other scientific areas facing similar data challenges.
Is PIC delivering as a WLCG Tier-1? Scalability, Reliability, Performance.
Scalability
In the last 3 years PIC has followed the capacity ramp-ups pledged in the WLCG MoU: 5-fold in CPU, more than 10-fold in disk and tape.
• CPU cluster: Torque/Maui. Public-tender purchase model, so integrating different hardware technologies is a challenge.
• Disk: dCache, on DDN S2A (SAN) and Sun 4500 (DAS) servers.
• Tape: Castor to Enstore migration.
PIC Tier-1 Reliability
Tier-1 reliability targets have been met in most months.
Performance: T0/T1 transfers
Data import from CERN and transfers with the other Tier-1s have been successfully tested above targets.
[Plots: CMS data imported from and exported to the T1s, against combined ATLAS+CMS+LHCb targets of ~100 MB/s and ~210 MB/s; ATLAS and CMS daily CERN→PIC rates in June 2009, against targets of 76 MB/s and 60 MB/s.]
A rough conversion of these targets into daily volumes is sketched below.
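For scale, the quoted MB/s targets converted into daily volumes, assuming decimal units (1 MB = 1e6 bytes):

```python
# Rough conversion of sustained-rate targets into daily volumes.
for rate_mb_s in (60, 76, 100, 210):
    tb_per_day = rate_mb_s * 86400 / 1e6
    print(f"{rate_mb_s:3d} MB/s sustained  ~ {tb_per_day:5.1f} TB/day")
# 60 MB/s ~ 5.2 TB/day, 76 ~ 6.6, 100 ~ 8.6, 210 ~ 18.1
```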
Challenges ahead
Tape is difficult (especially reading):
• Basic use cases were tested in June, but it is still tricky.
• The organisation of files into tape groups has a big impact on performance.
• Drives and libraries are shared between experiments.
Access to data from jobs:
• Very different jobs must be supported: reconstruction (1 GB / 2 h) vs user analysis (5 GB / 10 min); see the sketch below.
• Optimising for all of them is not possible; a compromise is needed.
• Remote open of files vs copy to the local disk.
• Read-ahead buffer tuning, disk contention in the WNs.
Real users (chasing real data):
• Simulations of "user analysis" load have been done, with good results.
• The system has never been tested with 1000s of real users simultaneously accessing data with realistic (random?) patterns.
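The reconstruction vs user-analysis mismatch can be made concrete with the per-job figures quoted above; a minimal sketch (averages only, ignoring bursty access patterns):

```python
# Why one I/O tuning cannot fit all jobs: average read bandwidth per job.
jobs = {
    "reconstruction": {"input_gb": 1, "duration_s": 2 * 3600},   # 1 GB / 2 h
    "user analysis":  {"input_gb": 5, "duration_s": 10 * 60},    # 5 GB / 10 min
}
for name, j in jobs.items():
    mb_s = j["input_gb"] * 1000 / j["duration_s"]
    print(f"{name:15s} ~ {mb_s:4.1f} MB/s per job")
# reconstruction ~ 0.1 MB/s, user analysis ~ 8.3 MB/s: a factor of ~60,
# hence the read-ahead / local-copy compromise.
```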
Challenges ahead
The Cloud:
• A new buzzword?
• Several demos of running scientific applications on commercial clouds.
• We can surely learn a lot from the associated technologies, e.g. encapsulating jobs as VMs to decouple the hardware from the applications.
Multi-science support environments:
• Many of the WLCG sites also support other EGEE VOs.
• Requirements can be very different (access control to data, ...).
• Maintaining a high QoS in these heterogeneous environments will be a challenge... but that is precisely the point.
Thank you Gonzalo Merino, merino@pic.es
Data Analysis
The original vision: a thin application layer interacting with a powerful middleware layer (a super-WMS to which the user submits input dataset queries plus algorithms, and which spits the result out).
The reality today: the LHC experiments have built increasingly sophisticated software stacks to interact with the Grid, on top of the basic services (CE, SE, FTS, LFC):
• Workload management: pilot jobs, late scheduling, VO-steered prioritisation (DIRAC, AliEn, PanDA, ...).
• Data management: topology-aware higher-level tools, capable of managing complex data flows (PhEDEx, DDM, ...).
• User analysis: a single interface for the whole analysis cycle that hides the complexity of the Grid (Ganga, CRAB, DIRAC, AliEn, ...).
Using the Grid at such a large scale is not an easy business! The pilot-job idea is sketched below.
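As a rough illustration of the pilot-job / late-scheduling idea behind frameworks such as DIRAC, AliEn and PanDA, here is a minimal sketch; the ToyTaskQueue class and its methods are hypothetical stand-ins, not any framework's real API:

```python
# Minimal pilot-job sketch: the pilot lands on a worker node first and only
# then pulls real work from the VO's central task queue (late scheduling).
import time
from collections import deque

class ToyTaskQueue:
    """Stand-in for a VO task queue (purely illustrative)."""
    def __init__(self, payloads):
        self.pending = deque(payloads)
    def fetch_matching_task(self):
        # Late scheduling: the VO decides *now* which payload this slot runs.
        return self.pending.popleft() if self.pending else None
    def report(self, payload, result):
        print(f"reported: {payload!r} -> {result!r}")

def run_pilot(queue, idle_timeout_s=60, poll_s=1):
    """Pilot job loop: pull payloads until the queue stays empty too long."""
    idle_since = time.time()
    while True:
        payload = queue.fetch_matching_task()
        if payload is None:
            if time.time() - idle_since > idle_timeout_s:
                return                      # nothing left to do: free the slot
            time.sleep(poll_s)
            continue
        idle_since = time.time()
        result = f"done({payload})"         # run the payload (placeholder)
        queue.report(payload, result)

run_pilot(ToyTaskQueue(["reco job 1", "user analysis 2"]), idle_timeout_s=2)
```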
Performance: CPU usage
~50,000 simultaneously busy cores; ~100,000 kSI2k·months of CPU time.
PIC data centre
150 m² machine room, 200 kVA IT UPS (+ diesel generator), ~1500 CPU cores in the batch farm, ~1 Petabyte of disk, ~2 Petabytes of tape (STK-5500 and IBM-3584 tape libraries).
End-to-end Throughput
Besides growing capacity, one of the challenges for sites is to sustain high-throughput data rates between components: 1 Petabyte of tape, 1 Petabyte of disk, a 1500-core cluster, and 10 Gbps connectivity.
• 2.5 GBytes/s peak rates between WNs and disk during June-09.
• >250 MB/s read+write tape bandwidth demonstrated in June-09.
What these rates mean per core is sketched below.
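To put those numbers side by side, a small sketch assuming decimal units and an even spread of the load over the farm (both assumptions are simplifications):

```python
# What the quoted peak rates mean relative to the farm size.
peak_disk_mb_s = 2500          # 2.5 GB/s WN<->disk peak, June-09
tape_mb_s = 250                # >250 MB/s read+write to tape, June-09
cores = 1500

print(f"disk: ~{peak_disk_mb_s / cores:.1f} MB/s per core at peak")      # ~1.7 MB/s
print(f"tape: ~{tape_mb_s / peak_disk_mb_s:.0%} of the peak disk rate")  # ~10%
```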
Data Transfers to Tier-2s
• Reconstructed data is sent to the T2s for analysis; the flow is bursty by nature.
• Experiment requirements are very fuzzy for this dataflow ("as fast as possible").
• Links to all Spanish and Portuguese (SP/PT) Tier-2s certified at 20–100 MB/s sustained.
• CMS Computing Model: sustained transfers to >43 T2s worldwide.
Reliability: the experiments' view
• Seeing site reliability improve, the experiments were motivated to make the sensors they use to measure it more realistic.
• An increasing number of more realistic sensors, plus a powerful monitoring framework that ensures peer pressure, guarantees that the reliability of the WLCG service will keep improving.