From Zettabytes to Knowledge
IET visits, 15 & 19 April 2010
Wolfgang von Rüden, CERN IT Department, Head of CERN openlab
From the International System of Units*
kilo (10³) – mega (10⁶) – giga (10⁹) – tera (10¹²) – peta (10¹⁵) – exa (10¹⁸) – zetta (10²¹) – yotta (10²⁴)
* http://en.wikipedia.org/wiki/Yotta-
CERN’s Tools
• The world’s most powerful accelerator: LHC
  • A 27 km long tunnel filled with high-tech instruments
  • Equipped with thousands of superconducting magnets
  • Accelerates particles to energies never before obtained
  • Produces particle collisions creating microscopic “big bangs”
• Very large, sophisticated detectors
  • Four experiments, each the size of a cathedral
  • A hundred million measurement channels each
  • Data acquisition systems treating petabytes per second
• Top-level computing to distribute and analyse the data
  • A computing grid linking ~200 computer centres around the globe
  • Sufficient computing power and storage to handle the data, making them available to thousands of physicists for analysis
LHC experiments
The “ATLAS” experiment during construction
7000 tons, 150 million sensors, >1 petabyte/s
CMS Closed & Ready for First Beam, 3 September 2008
About zettabytes of raw data… Massive on-line data reduction is required to bring the rates down to an acceptable level before the data are stored on disk and tape.
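The scale of that on-line reduction can be sketched with a back-of-the-envelope calculation. The 40 MHz collision rate is quoted later in the talk; the per-event size and the rate of events kept by the trigger below are illustrative assumptions, not figures from the talk:

```python
# Back-of-the-envelope trigger reduction at the LHC.
collision_rate_hz = 40e6      # bunch crossings per second (from the talk)
event_size_bytes = 1.5e6      # ~1.5 MB per event (illustrative assumption)
stored_rate_hz = 200          # events kept by the trigger (illustrative assumption)

raw_bytes_per_s = collision_rate_hz * event_size_bytes     # 6e13 B/s = 60 TB/s
stored_bytes_per_s = stored_rate_hz * event_size_bytes     # 3e8 B/s = 300 MB/s
reduction_factor = raw_bytes_per_s / stored_bytes_per_s    # 200,000x

print(f"raw: {raw_bytes_per_s / 1e12:.0f} TB/s, "
      f"stored: {stored_bytes_per_s / 1e6:.0f} MB/s, "
      f"reduction: {reduction_factor:,.0f}x")
```

Even with these rough numbers, the trigger chain has to discard all but roughly one event in every 200,000 before anything reaches permanent storage.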
The LHC Computing Challenge
• Signal/noise ratio: 10⁻⁹
• Data volume
  • High rate × large number of channels × 4 experiments
  • 15 petabytes of new data each year
• Compute power
  • Event complexity × number of events × thousands of users
  • 100k of (today’s) fastest CPUs
  • 45 PB of disk storage
• Worldwide analysis & funding
  • Computing funded locally in major regions & countries
  • Efficient analysis everywhere
  • Grid technology
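To put the 15 PB/year figure in perspective, averaging it over a calendar year gives a sustained rate of roughly half a gigabyte per second. This is a simple average, ignoring that data-taking is actually concentrated in run periods:

```python
# Average sustained rate implied by 15 PB of new data per year.
new_data_bytes = 15e15                # 15 PB/year (from the talk), 1 PB = 1e15 B
seconds_per_year = 365 * 24 * 3600    # 31,536,000 s

avg_rate_mb_s = new_data_bytes / seconds_per_year / 1e6
print(f"average rate: {avg_rate_mb_s:.0f} MB/s")  # ~476 MB/s
```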
Data Handling and Computation for Physics Analysis
[Diagram, credit les.robertson@cern.ch] Data flow: the detector feeds an event filter (selection & reconstruction), which writes raw data; reconstruction turns raw data into event summary data (processed data); batch physics analysis extracts analysis objects by physics topic; interactive physics analysis works on those objects; event simulation produces additional raw data, and event reprocessing feeds data back through reconstruction.
Proton acceleration and collision
• Protons are accelerated by several machines up to their final energy (7 + 7 TeV*)
• Head-on collisions are produced right in the centre of a detector, which records the new particles being produced
• Such collisions take place 40 million times per second, day and night, for about 150 days per year
* In 2010–11, only 3.5 + 3.5 TeV
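The yearly collision count implied by these numbers is easy to check, and comes out at roughly 5 × 10¹⁴ bunch crossings per year:

```python
# Bunch crossings per year at the quoted rate and duty cycle.
crossings_per_second = 40e6        # 40 million per second (from the talk)
run_seconds = 150 * 24 * 3600      # ~150 days of running per year

crossings_per_year = crossings_per_second * run_seconds
print(f"{crossings_per_year:.2e} crossings/year")  # 5.18e+14
```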
Tier 0 at CERN: Acquisition, First-Pass Processing, Storage & Distribution
Tier 0 – Tier 1 – Tier 2
• Tier-0 (CERN):
  • Data recording
  • Initial data reconstruction
  • Data distribution
• Tier-1 (11 centres):
  • Permanent storage
  • Re-processing
  • Analysis
• Tier-2 (~130 centres):
  • Simulation
  • End-user analysis
Data transfer
• The full experiment rate needed is 650 MB/s
• Desired capability to sustain twice that, to allow Tier 1 sites to shut down and recover
• Rates far in excess of that have been demonstrated
• All experiments exceeded the required rates for extended periods, and simultaneously
• All Tier 1s have exceeded their target acceptance rates
[Transfer-rate plot annotation: fibre cut near Basel]
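For scale, sustaining the required 650 MB/s around the clock corresponds to roughly 56 TB leaving CERN per day, and the desired 2× headroom to about 112 TB per day (a simple multiplication, using decimal units, 1 MB = 10⁶ bytes):

```python
# Daily transfer volume implied by the required sustained rate.
required_mb_s = 650            # MB/s (from the talk)
seconds_per_day = 24 * 3600

daily_tb = required_mb_s * 1e6 * seconds_per_day / 1e12
print(f"{daily_tb:.1f} TB/day required, {2 * daily_tb:.1f} TB/day with 2x headroom")
```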
The Worldwide LHC Computing Grid
• The LHC Grid Service is a worldwide collaboration between:
  • the 4 LHC experiments,
  • ~200 computer centres that contribute resources, and
  • international grid projects providing software and services
• The collaboration is brought together by an MoU that:
  • commits resources for the coming years
  • agrees a certain level of service availability and reliability
• As of today, 33 countries have signed the MoU:
  • CERN (Tier 0) + 11 large Tier 1 sites
  • 132 Tier 2 sites in 64 “federations”
  • Other sites are expected to participate, but without formal commitment
The very first beam-splash event from the LHC, seen in ATLAS at 10:19 on 10 September 2008
Capacity of CERN’s data centre (Tier 0)
• Compute nodes:
  • ~7,000 systems
  • 41,000 cores
• Disk storage:
  • 14 petabytes (>20 soon)
  • 60,000 disk drives
• Tape storage:
  • Capacity: 48 petabytes
  • In use: 24 petabytes
• Corresponds to ~15% of the total capacity in WLCG
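A quick sanity check on these capacity figures (using decimal units, 1 PB = 10¹⁵ bytes): the average drive size comes out around 233 GB, consistent with drives of that era, and the tape store is half full:

```python
# Sanity check on the Tier 0 capacity numbers.
disk_bytes = 14e15        # 14 PB of disk (from the talk)
drive_count = 60_000      # 60,000 disk drives (from the talk)

avg_drive_gb = disk_bytes / drive_count / 1e9
tape_in_use_fraction = 24 / 48     # 24 of 48 PB tape capacity in use

print(f"average drive size: {avg_drive_gb:.0f} GB")        # ~233 GB
print(f"tape capacity used: {tape_in_use_fraction:.0%}")   # 50%
```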