LHCb is Beautiful? Glenn Patrick GridPP19, 29 August 2007
In the beginning… LHCb – GridPP1 Era (May 2002) Empty!
LHCb – GridPP2 Era (Mar 2005) Not Beautiful!
LHCb December 2006 Getting Pretty! [Detector diagram, with the two proton beams marked: VELO, RICH1, Trackers, Magnet, RICH2, Calorimeters and Muon system.]
2008 Suddenly Beautiful! [Diagram: B0 and B̄0 mesons, built from b and d quarks – 1000 million B mesons/year.] Summer 2008 – Beauty at Last?
Origins of Grid for LHCb… GridPP at NeSC Opening – 25 April 2002. …and so it is with the Grid? [Diagram: a job on the CERN testbed Compute Element copies its data from local disk to the Storage Element (globus-url-copy), registers it (register-local-file) and publishes it to the Replica Catalogue; jobs at NIKHEF – Amsterdam and across the rest of the Grid then fetch it with replica-get, staging from mass storage (mss) as needed.]
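A minimal Python sketch of that early data flow, assuming a gsiftp-capable Storage Element: globus-url-copy is the real Globus transfer tool, but the catalogue here is a plain dict standing in for the Replica Catalogue service, and register_local_file/replica_get are hypothetical helpers named after the diagram labels:

    import subprocess

    replica_catalogue = {}  # stand-in for the Replica Catalogue service

    def register_local_file(lfn, surl):
        # Publish the logical-file -> physical-replica mapping.
        replica_catalogue.setdefault(lfn, []).append(surl)

    def upload_and_publish(local_path, lfn, se_url):
        # Copy job output from local disk to the Storage Element...
        subprocess.run(["globus-url-copy", "file://" + local_path, se_url],
                       check=True)
        # ...then publish it so jobs on the rest of the Grid can find it.
        register_local_file(lfn, se_url)

    def replica_get(lfn):
        # A job elsewhere (e.g. at NIKHEF) looks up a replica to fetch.
        return replica_catalogue[lfn][0]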
DIRAC WMS Evolution (2006) – the Pilot Agent. [Architecture diagram: the user's job JDL arrives at the Job Receivers, is checked (checkData, checkJob) and optimised (Data Optimizer, getReplicas via the LFC), and waits in the JobDB/Task Queue with its input sandbox; the Agent Director submits Pilot Jobs through the LCG Resource Brokers (RBs) to the site CEs; on the worker node (WN) the Pilot Agent calls back to the Matcher (checkPilot, getProxy, getSandbox), forks a Job Wrapper and executes the user application under glexec, uploading output data to the SE and requests to the VO-box (putRequest); the Job Monitor and Agent Monitor watch throughout.]
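The heart of this redesign is the pilot pattern: a pilot secures a worker-node slot first, and a user job is bound to that slot only afterwards, so a dying node costs a pilot rather than a user job. A minimal illustrative sketch, assuming a queue of JDL-like job dicts (this is not DIRAC's actual API; matcher and pilot_agent are named after the diagram, nothing more):

    import queue
    import subprocess

    task_queue = queue.Queue()  # central queue of waiting user jobs

    def matcher(capabilities):
        # Late binding: choose a job only once a pilot has a live slot.
        # (Real DIRAC matches job requirements against slot capabilities.)
        return task_queue.get_nowait()

    def pilot_agent(capabilities):
        # Runs on the worker node once the Agent Director's pilot lands there.
        try:
            job = matcher(capabilities)
        except queue.Empty:
            return  # no waiting work: the pilot exits and the slot is freed
        subprocess.run(job["command"], check=True)  # execute the user payload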
DIRAC Production & Analysis. DIRAC1: started 19.12.2002. DIRAC3 (data ready): due 2007. [Architecture diagram: user interfaces (GANGA UI, user CLI, job monitor, production manager, BK query web page, FileCatalog browser) talk to the DIRAC services (Job Management Service, BookkeepingSvc, FileCatalogSvc, JobMonitorSvc, InformationSvc, MonitoringSvc, JobAccountingSvc with its AccountingDB); Agents drive the DIRAC resources – DIRAC Storage (DiskFile via gridftp, bbftp, rfio), the DIRAC sites with their CEs (CE 1, CE 2, CE 3) and the LCG Resource Broker.] GridPP: Gennady Kuznetsov (RAL) – DIRAC Production Tools.
GANGA: Gaudi ANd Grid Alliance – first ideas, 2001. Pere Mato: LHCb Workshop, Bologna, 15 June 2001. [Diagram: GANGA sits between the GUI and the collective & resource Grid services, feeding JobOptions and Algorithms to the GAUDI program and collecting histograms, monitoring and results.] GridPP: Alexander Soroko (Oxford), Karl Harrison (Cambridge), Ulrik Egede (Imperial), Alvin Tan (Birmingham).
Ganga Evolution: 2001–2007. Now experiment neutral. [Diagram: applications – the generic Executable, ATLAS AthenaMC (production) and Athena (simulation/digitisation/reconstruction/analysis), and LHCb Gauss/Boole/Brunel/DaVinci (simulation/digitisation/reconstruction/analysis) – run on backends Local, PBS, LSF, OSG, PANDA, NorduGrid, the LHCb WMS and the US-ATLAS WMS.]
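For the user, the payoff is that moving a job from a local test to the Grid is a one-line change. A minimal sketch from the ganga Python shell (hedged: based on Ganga's public interface, in which Job, Executable, DaVinci, Local and Dirac are all exported by Ganga itself):

    j = Job()
    j.application = Executable(exe="/bin/echo", args=["hello"])
    j.backend = Local()          # quick test on the local machine
    j.submit()

    j2 = j.copy()
    j2.application = DaVinci()   # the same job as an LHCb analysis...
    j2.backend = Dirac()         # ...now routed through the LHCb WMS
    j2.submit()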
Ganga 2007: Elegant Beauty? [Screenshot of the Ganga GUI: job builder, scriptor, logical folders, job details, job monitoring and log window panels.] CERN, September 2005. Cambridge, January 2006. Edinburgh, January 2007.
Ganga Users – 2007. [Pie chart: ATLAS, LHCb and other users.] 806 unique users since 1 Jan 2007, of which LHCb = 162 unique users.
Ganga by Domain – 2007. [Pie chart: CERN versus other domains.]
LHCb “Grid” – circa 2001: the initial LHCb-UK “Testbed”. [Map of existing and planned sites: CERN (pcrd25.cern.ch, lxplus009.cern.ch); RAL CSF (120 Linux CPUs, IBM 3494 tape robot); RAL DataGrid Testbed; RAL (PPD); Liverpool MAP (300 Linux CPUs); Glasgow/Edinburgh “Proto-Tier 2”; institutes at Bristol, Imperial College, Oxford and Cambridge.]
LHCb Computing Model. [Trigger diagram: Level-0 hardware takes the 40 MHz collision rate down to 1 MHz; Level-1 software takes 1 MHz down to 40 kHz; the HLT software selects 2 kHz @ 30 kB/event, i.e. 60 MB/s to storage.]
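The 60 MB/s figure is simply the HLT output rate multiplied by the event size; a one-line Python check of the arithmetic:

    event_rate_hz = 2_000        # 2 kHz out of the HLT
    event_size_kb = 30           # 30 kB per event
    print(event_rate_hz * event_size_kb / 1_000)  # 60.0 MB/s to be sustained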
Monte Carlo Simulation 2007. 700M events simulated since May 2006. 1.5M jobs submitted. Record of 9,715 simultaneous jobs over 70+ sites on 28 Feb 2007. Raja Nandakumar (RAL).
Reconstruction & Stripping – 2007. [Plot of running jobs at CERN, RAL, NIKHEF, CNAF and IN2P3.] …but it is not so often that we get all Tier 1 centres working together. Peak of 439 jobs.
Data Management – 2007 • Production jobs upload output to the associated Tier 1 SE (i.e. RAL in the UK). • Multiple “failover” SEs and multiple VO boxes are used in case of failure (see the sketch below). • Replication is done via FTS and a centralised Transfer DB. eScience PhD: Andrew Smith (Edinburgh)
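A minimal sketch of the failover idea, with the SE names and the copy_to/put_request helpers invented purely for illustration (the real logic lives in DIRAC's data management agents and the central Transfer DB):

    PRIMARY_SE = "RAL-SE"                              # hypothetical Tier 1 SE
    FAILOVER_SES = ["CERN-FAILOVER", "CNAF-FAILOVER"]  # hypothetical failovers

    def upload_output(local_file, copy_to, put_request):
        # Try the associated Tier 1 SE first, then each failover SE in turn.
        for se in [PRIMARY_SE] + FAILOVER_SES:
            try:
                copy_to(local_file, se)
            except IOError:
                continue  # this SE is down: fall through to the next one
            if se != PRIMARY_SE:
                # Lodge a request in the central Transfer DB so FTS later
                # moves the file to where it should have gone.
                put_request(local_file, source=se, destination=PRIMARY_SE)
            return se
        raise RuntimeError("all storage elements failed")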
Data Transfer – 2007 • RAW data replicated from Tier 0 to one of six Tier 1 sites. • gLite FTS used for T0–T1 replication. • Transfers trigger automated job submission for reconstruction (see the sketch below). • Sustained total rate of 40 MB/s required (and achieved). Further DAQ–T0–T1 throughput tests at a 42 MB/s aggregate rate are scheduled for later in 2007.
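The third bullet is an event-driven chain: each completed replication kicks off reconstruction at the receiving site. Schematically (a hypothetical hook; the real trigger sits inside the DIRAC production system):

    def on_transfer_complete(lfn, tier1_site, submit_job):
        # Called when FTS reports a RAW file safely replicated to a Tier 1:
        # queue a reconstruction job over the new file at that site.
        submit_job(application="Brunel", input_data=[lfn], site=tier1_site)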
Bookkeeping (2007). [Architecture diagram on volhcb01: AMGA clients issue BookkeepingQuery reads to the BookkeepingSvc/BK Service, and a web browser reads via a Tomcat servlet; the AMGA server reads and writes the underlying Oracle DB.] GridPP: Carmine Cioffi (Oxford).
LHCb CPU Use 2005–2007. [Pie chart by country: CERN, UK, Italy, France, Germany, Spain, Switzerland.] Many thanks to: Birmingham, Bristol, Brunel, Cambridge, Durham, Edinburgh, Glasgow, Imperial, Lancaster, Liverpool, Manchester, Oxford, QMUL, RAL, RHUL, Sheffield and all others.
UKI Evolution for LHCb. [Maps: in 2004, the Tier 1 alone; by 2007, the Tier 1 plus the London, NorthGrid, SouthGrid and ScotGrid Tier 2 federations.]
GridPP3: Final Crucial Step(s). [Timeline: 2001–2004, 2004–2007 and 2007–2008, leading to GridPP3 in 2008–2011 – Beauty!]
Some 2007–2008 Milestones • Sustain DAQ–T0–T1 throughput tests at 40+ MB/s. • Reprocessing (second pass) of data at Tier 1 centres. • Prioritisation of analysis, reconstruction and stripping jobs (all at Tier 1 for LHCb); CASTOR has to work reliably for all service classes! • Ramp-up of hardware resources in the UK. • Alignment: Monte Carlo is done with perfectly positioned detectors… reality will be different! • Calibration: Monte Carlo is done with “well understood” detectors… reality will be different! The distributed Conditions Database plays a vital role. • Analysis: an increasing load of individual users.
The End (and the Start): GridPP3. [Lyn Evans, EPS Conference on High Energy Physics, Manchester, 23 July 2007.]