Exploiting the Grid to Simulate and Design the LHCb Experiment

K Harrison (1), N Brook (2), G Patrick (3), E van Herwijnen (4), on behalf of the LHCb Grid Group and GridPP
(1) University of Cambridge, (2) University of Bristol, (3) Rutherford Appleton Laboratory, (4) CERN

LHC
- Proton beams colliding at an energy of 14 TeV
- 2835 bunches per beam
- 10^11 protons/bunch
- 40 MHz crossing rate

LHCb Experiment
- Weight: 4270 tonnes (magnet: 1500 tonnes)
- Dimensions: 18 m x 12 m x 12 m
- Number of electronic channels: 1.2 million
- Located 100 m underground at the Large Hadron Collider (LHC)
- Due to start taking data in 2007

[Figure: cut-away view of the LHCb detector, showing the Vertex detector, RICH-1, Tracker, Coil, Yoke, Shielding, RICH-2, Calorimeters and Muon Detector]

LHCb is a particle-physics experiment that will study subtle differences between matter and antimatter. The design and construction of the experiment are being undertaken by some 500 scientists, from 50 institutes, in 14 countries around the world. The experiment will be located 100 m underground at the Large Hadron Collider (27 km circumference) being built at CERN in Geneva. The decays of more than 10^9 short-lived particles, known as B0 and anti-B0 mesons, will be studied at LHCb each year. To optimise the detector design, and to understand the physics, many millions of particle interactions must be simulated. The first data are expected in early 2007.

[Figure: example of decays of B0 and anti-B0 mesons, with a 10 mm scale indicated]

Grid – A Single Resource

[Figure: the Grid as a unified approach to distributed resources: many millions of events, various conditions, many samples; many thousands of computers required; worldwide collaboration; heterogeneous operating systems]

Grid technology is being deployed to make use of globally distributed computing resources to satisfy the LHCb requirements. A prototype system, based on the existing software, is already operational. This system handles job submission and execution, data transfer, bookkeeping, and the monitoring of data quality. DataGrid middleware is being incorporated as it becomes available. In this way, LHCb is able both to produce the simulated datasets needed for the detector studies and to feed experience and ideas back into the design of the Grid.

Computing centres on the Grid are being integrated into the LHCb production system as they come online. These currently include CERN, IN2P3 (Lyon), CNAF (Bologna), NIKHEF (Amsterdam) and the EU DataGrid Testbed, which together provide petabytes of data storage. The UK participates through the university institutes of Bristol, Cambridge, Edinburgh, Glasgow, Imperial College, Liverpool and Oxford, and the Rutherford Appleton Laboratory.

Job definition and submission are handled through a web page. A central web server can submit to individual farms using AFS, or to the DataGrid Testbed using Grid middleware. A Java servlet generates the required job scripts, and specifies the random-number seeds and job options (a sketch of such a servlet appears below).

Job submission and monitoring in the LHCb distributed environment are currently performed using PVSS II, a commercial Supervisory Control and Data Acquisition (SCADA) system developed by ETM. The same system has been adopted by the LHC experiments for detector monitoring and control during data taking.

A typical simulation job processes 500 events and produces a dataset with a size of the order of gigabytes. These data are transferred directly to the CASTOR mass-storage facility at CERN using the bbftp system. Copies of datasets are stored locally at the larger computing centres, which will also accept data from external sites.
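As an illustration of this transfer step, the sketch below hands a finished dataset to the bbftp client as an external command from Java, the language already used elsewhere in the production tools. The host name, account, remote path and bbftp options are placeholders for illustration, not the actual LHCb production settings.

```java
import java.io.File;
import java.io.IOException;

/**
 * Minimal sketch of shipping a simulation output file to the mass store
 * with the bbftp client.  The command-line options, account name and
 * target host are illustrative placeholders only.
 */
public class DatasetTransfer {

    /** Run bbftp as an external process and return true on success. */
    public static boolean sendToMassStore(File dataset, String remoteDir)
            throws IOException, InterruptedException {
        // bbftp is assumed here to accept a "put local remote" command
        // through its -e argument, as in common bbftp usage.
        String remoteFile = remoteDir + "/" + dataset.getName();
        ProcessBuilder pb = new ProcessBuilder(
                "bbftp",
                "-e", "put " + dataset.getPath() + " " + remoteFile,
                "-u", "lhcbprod",                  // placeholder account name
                "castor-gateway.example.org");     // placeholder transfer host
        pb.inheritIO();                            // show bbftp output in our log
        Process p = pb.start();
        return p.waitFor() == 0;                   // exit code 0 means success
    }

    public static void main(String[] args) throws Exception {
        boolean ok = sendToMassStore(new File("mcdata_job00042.sim"),
                                     "/castor/cern.ch/lhcb/mc");
        System.out.println(ok ? "Transfer completed" : "Transfer failed");
    }
}
```

Checking the exit code lets the calling script decide whether the transfer should be retried or the dataset reported to the bookkeeping step.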
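The web-based job definition described earlier can be pictured with the following sketch of a Java servlet that turns submitted form fields (number of events, random-number seed, job options) into a batch script. The parameter names, script contents and executable name are invented for illustration and are not the actual LHCb submission code.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Illustrative servlet that generates a simulation job script from the
 * values entered on a submission web page.  All names are placeholders.
 */
public class JobScriptServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Values supplied by the submission form (hypothetical field names).
        String nEvents = req.getParameter("events");   // e.g. "500"
        String seed    = req.getParameter("seed");     // random-number seed
        String options = req.getParameter("options");  // job-options file

        // Return the generated script so it can be shipped to a farm
        // (via AFS) or wrapped in a Grid job description.
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        out.println("#!/bin/sh");
        out.println("# Simulation job generated by the submission page");
        out.println("export RANDOM_SEED=" + seed);
        out.println("simulate --events " + nEvents + " --options " + options);
    }
}
```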
Bookkeeping is performed using Java classes that interact with a central Oracle database at CERN via a servlet. The central database will also hold information on simulated datasets stored outside CERN.

[Workflow diagram: submit jobs remotely via the Web; execute on a farm; transfer data to the mass store; update the bookkeeping database; check data quality; analysis]
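To make the bookkeeping step concrete, here is a minimal client-side sketch, assuming the central servlet accepts dataset records as an HTTP form post; the servlet URL and field names are hypothetical, and the servlet itself would perform the actual Oracle update.

```java
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

/**
 * Illustrative bookkeeping client: posts metadata about a finished
 * dataset to a central servlet, which records it in the Oracle database.
 * The servlet URL and parameter names are placeholders.
 */
public class BookkeepingClient {

    private static final String SERVLET_URL =
            "http://lhcb-bookkeeping.example.cern.ch/addDataset"; // placeholder

    /** Register one dataset; returns the HTTP status code. */
    public static int registerDataset(String jobId, String fileName,
                                      int nEvents, String site) throws Exception {
        String form = "job=" + URLEncoder.encode(jobId, "UTF-8")
                    + "&file=" + URLEncoder.encode(fileName, "UTF-8")
                    + "&events=" + nEvents
                    + "&site=" + URLEncoder.encode(site, "UTF-8");

        HttpURLConnection conn =
                (HttpURLConnection) new URL(SERVLET_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type",
                                "application/x-www-form-urlencoded");
        try (Writer out = new OutputStreamWriter(conn.getOutputStream(), "UTF-8")) {
            out.write(form);   // send the bookkeeping record as form fields
        }
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        int status = registerDataset("job-00042", "mcdata_job00042.sim",
                                     500, "CERN");
        System.out.println("Bookkeeping servlet replied: " + status);
    }
}
```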