Overview of UK HEP Grid
S.L. Lloyd, Grid/Globus Meeting, Oxford, 21/2/01
• Historical Overview
• Current Facilities
• Proposed Evolution
• Proposal to PPARC
Historical Perspective
Historical development of computing facilities for HEP in the UK:
• For many years: a central facility at RAL (funded via CNAP) plus group facilities at the institutes (funded by grants).
• Recently supplemented by large facilities at some universities for specific experiments (funded by large awards from other funding schemes: JIF, JREI etc.).
Current Facilities
Central facilities at RAL:
• 20 dual 450 MHz + 40 dual 600 MHz CPUs + (March) 30-40 dual 1 GHz
• About to install a 330 TB capacity tape robot, initially using 30-40 TB
• 2 TB of disk space
• Supports all UK HEP experiments (to varying degrees)
Liverpool (MAP):
• 300 x 350 MHz CPUs + 4 TB disk
• Monte Carlo production for ATLAS, LHCb, DELPHI, …
BaBar
At RAL:
• 1 x 6-CPU Sun E6500 + 5 TB disk
• 1 x 4-CPU Sun E4500 + 4 x 4-CPU Sun 450
At Edinburgh:
• 1 x 2-CPU Sun E420 + 0.5 TB disk
• 1 x Sun Ultra 50 + 4 x 4-CPU Sun Ultra 80
At IC, Liverpool, Manchester, Bristol:
• 1 x 3/4-CPU Sun E450 + 1 TB disk
• 1 x 80-CPU Linux farm
At Birmingham, QMW, RHUL:
• 1 x 2-CPU Sun E420 + 0.5 TB disk
• 1 x 80-CPU Linux farm
At Brunel:
• 1 x 2-CPU Sun E420 + 0.5 TB disk
Fermilab Experiments
CDF/MINOS:
• At Oxford, Glasgow, UCL: 1 x 8-CPU system + 1 TB disk
• At RAL: 2 x 8-CPU systems + 5 TB disk
D0:
• At Lancaster: 200 x 733 MHz CPUs + 2.5 TB disk
• Tape robot: 600 TB capacity, 30 TB loaded
LHC
In Scotland:
• 128 CPUs at Glasgow
• 5 TB datastore + server at Edinburgh
ATLAS/LHCb:
• At IC: 80 CPUs + 1 TB disk
CMS:
• At Birmingham: 13 CPUs + 1 TB disk
ALICE:
• …
Current Summary
• In summary, we have shared central facilities at RAL and several distributed facilities for specific experiments.
• In general these are not yet Grid-aware.
• In addition, many groups have fledgling Grid nodes: a few PCs acting as gateways, CPU servers and disk servers running Globus (see the sketch below).
• The aim is to integrate all these facilities into one 'Grid for Particle Physics'.
• Prepare middleware and testbeds for the LHC.
• Stress-test using LHC mock data challenges and real analysis of current experiments.
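As a rough illustration of what "Grid-aware" means for these fledgling nodes, the sketch below submits a trivial job to a Globus gatekeeper running on one of the gateway PCs. It is a minimal sketch only: the gatekeeper hostname and job manager are hypothetical, and it assumes the Globus Toolkit client tools (globus-job-run) are installed and a valid Grid proxy has already been created.

import subprocess

# Hypothetical gatekeeper contact string for one of the gateway PCs;
# a real site would substitute its own host and job manager name.
GATEKEEPER = "gw01.hep.example.ac.uk/jobmanager-pbs"

def run_on_grid(executable, *args):
    """Submit a simple job through globus-job-run and return its stdout.
    Assumes the Globus Toolkit client tools are installed and a valid
    Grid proxy exists (e.g. created beforehand with grid-proxy-init)."""
    cmd = ["globus-job-run", GATEKEEPER, executable, *args]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # Trivial test job: ask the remote node to report its own hostname.
    print(run_on_grid("/bin/hostname"))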
Evolving Model
• Prototype Tier-1 regional centre at RAL (old numbers, need updating):
  • 10% of 2001 on order.
• ~4 regional Tier-2 centres:
  • Scotland, North West, 'Midlands', London.
• ~16 Tier-3 centres, one at each institute.
Grid Collaboration
• A Collaboration Board (one member per institute) has been formed to bid for PPARC's (£26M) e-Science money (chair: SLL).
• A Proposal Board will write the proposal: experiment representatives + work group contacts (chair: PWJ).
• Expect to form a number of work groups to develop the required tools (middleware etc.), probably based on the DataGrid work packages (but not necessarily).
Proposal
Aim to include:
• All UK HEP computing activities
• DataGrid contributions
• Collaboration with CERN
• Collaboration with the NSF in the US (ex DataGrid)
• Cross-disciplinary activities: computer science, industry etc.
• The timescale is very short: submit by 2nd April.
• It is expected that most of the money will go on manpower + Tier-1 hardware.
• Important to get as much Tier-2/3 hardware as possible from SRIF, JREI etc.