Learn about GridPP's deployment progress, storage group development, and strategic goals for providing SRM interfaces in the UK. Explore short- and long-term deployment strategies, collaboration initiatives, and an overview of key storage deployments in Manchester and Edinburgh.
Storage at RAL. Service Challenge Meeting, 27 Jan 2005
What was GridPP1? • A team that built a working prototype grid of significant scale: >2,000 (9,000) CPUs, >1,000 (5,000) TB of available storage, >1,000 (6,000) simultaneous jobs • A complex project in which 88% of the milestones were completed and all metrics were within specification
GridPP Deployment Status (9/1/05) • GridPP deployment is part of LCG (currently the largest Grid in the world) • The future Grid in the UK is dependent upon LCG releases • Three Grids of global scale in HEP with similar functionality (sites / CPUs): LCG (GridPP) 90 (16) sites, 9000 (2029) CPUs; Grid3 [USA] 29 sites, 2800 CPUs; NorduGrid 30 sites, 3200 CPUs
UK Tier-2 Centres • ScotGrid: Durham, Edinburgh, Glasgow • NorthGrid: Daresbury, Lancaster, Liverpool, Manchester, Sheffield • SouthGrid: Birmingham, Bristol, Cambridge, Oxford, RAL PPD, Warwick • LondonGrid: Brunel, Imperial, QMUL, RHUL, UCL
Multiple Experiments • ATLAS • LHCb • CMS • SAMGrid (Fermilab) • BaBar (SLAC) • QCDGrid • PhenoGrid
Middleware Development • Network Monitoring • Configuration Management • Grid Data Management • Storage Interfaces • Information Services • Security
Storage Group • GridPP Storage Group • Development and support • Data and Storage • RAL, Edinburgh, Glasgow
Overall Goals – GridPP2 • Provide SRM interfaces to: • the Atlas Petabyte Storage facility (ADS) at RAL • disk (for Tier 1 and Tier 2 in the UK) • disk pools (for Tier 1 and Tier 2 in the UK) • Package and support these solutions
GridPP in the UK • Tier 1: RAL • Tier 2: ScotGrid, NorthGrid, SouthGrid, London • Each T2 consists of several sites • Support tape at T1 • Disks and disk pools at T1 and T2
Options • RAL-SRM: an SRM interface to the ADS, developed from the EDG-SE • dCache: from DESY/FNAL, with an SRM interface • DRM: from LBNL • DPM: from EGEE SA1 Deployment
(Short Term) Timeline • Provide a release of SRM to disk and disk pool by end of January 2005 • Was planned to coincide with the EGEE gLite “release” • Was planned to match the path toward the full gLite release • Now focusing more on production…
(Short Term) Strategy • Focus on dCache • Andrew reported on the Tier 1 work • SRM to ADS: Storage Element • SRM to disk: dCache + dCache-SRM • SRM to disk pool: dCache + dCache-SRM • dCache has seen more testing
Longer Term Strategy • Possibly a dual solution: • SRM to ADS: Storage Element • SRM to disk: dCache + dCache-SRM • SRM to disk pool: dCache + dCache-SRM
Optimising SE • Now: write via the SE disk (GridFTP from the client to the SE, then VTP from the SE to the ADS) • Alternative: pipe via the SE (GridFTP in, VTP out, without staging on SE disk) • Alternative: write directly to the tapestore (protocol ???), but this needs a data protocol supported by both ends
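To make the trade-off concrete, the sketch below drives the two main transfer routes from a small Python wrapper around globus-url-copy, the standard GridFTP client. The hostnames and paths are hypothetical, and whether the tapestore itself speaks GridFTP is exactly the open question above; this illustrates the data paths only, not the actual RAL setup.

```python
# Illustrative sketch only: hostnames and paths are hypothetical.
import subprocess

SE_HOST = "se.example.ac.uk"     # hypothetical Storage Element front end
ADS_HOST = "ads.example.ac.uk"   # hypothetical tapestore gateway

def gridftp_copy(src, dst):
    """Run one GridFTP copy with the standard globus-url-copy client."""
    subprocess.check_call(["globus-url-copy", src, dst])

# Route 1 (current): write via the SE disk; the SE later migrates the
# file to the ADS over VTP.
gridftp_copy("file:///data/run123.dat",
             "gsiftp://" + SE_HOST + "/storage/run123.dat")

# Route 2 (direct): write straight to the tapestore; only possible if
# the tapestore exposes a data protocol the client also speaks.
gridftp_copy("file:///data/run123.dat",
             "gsiftp://" + ADS_HOST + "/ads/run123.dat")
```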
Longer Term Strategy • Actively interworking with other SRMs • Cross testing, development • SRM Collaboration • GSM within GGF
Acceptance tests • SRM tests – the SRM interface must work with: srmcp (the dCache SRM client), GFAL, and gLite I/O • Disk pool test – must work with dccp (dCache-specific), plus the SRM interface on top
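A minimal sketch of how the first and last of these checks might be scripted, assuming srmcp and dccp are installed and on the PATH; the SRM endpoint and namespace paths are hypothetical, and the GFAL and gLite I/O checks (which go through their own client libraries) are not shown.

```python
# Sketch of the acceptance checks: SRM copy in/out via srmcp, then a
# direct read with the dCache-specific dccp client. Paths are hypothetical.
import subprocess

SRM_URL = "srm://dcache.example.ac.uk:8443/pnfs/example.ac.uk/data/test.dat"
PNFS_PATH = "/pnfs/example.ac.uk/data/test.dat"   # dCache namespace path

def run(cmd):
    print("running: " + " ".join(cmd))
    subprocess.check_call(cmd)

# SRM interface test: copy a local file in and back out with srmcp.
run(["srmcp", "file:////tmp/test.dat", SRM_URL])
run(["srmcp", SRM_URL, "file:////tmp/test.back"])

# Disk pool test: the same file must also be readable with dccp,
# bypassing the SRM layer.
run(["dccp", PNFS_PATH, "/tmp/test.dccp"])
```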
Other Deployments: Manchester
Existing Edinburgh System • LCFG server (glenellen), SE server (glenmorangie), CE server (glenlivet), worker nodes and disk server (glenkinchie), linked by NFS mounts, indirect connections and direct network connections • Glenkinchie: 24 TB RAID using 24 × 1 TB partitions, limited to the private network • Glenmorangie: dual PIII 1 GHz, 2 GB RAM; /storage NFS-mounted from glenkinchie partition(s)
Proposed Edinburgh System • Classic SE server (glenmorangie), dCache SE server (se) and disk server / dCache pool node (glenkinchie), linked by NFS mounts • Glenkinchie: patched and connected to the Internet; dCache pool software and GridFTP installed; Classic SE supported until the existing data has been migrated; partitions classed as individual pools, giving 24 pools maximum
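As a rough illustration of the "one pool per partition, 24 maximum" layout, the sketch below checks that each partition on glenkinchie is mounted and proposes a pool name for it. The mount points and pool names are made up, and no real dCache configuration syntax is generated.

```python
# Illustrative only: mount points and pool names are hypothetical, and
# this does not emit real dCache pool configuration.
import os

mount_points = ["/raid/partition%02d" % i for i in range(1, 25)]

pools = []
for idx, mp in enumerate(mount_points, start=1):
    if os.path.ismount(mp):
        pools.append(("glenkinchie_pool%02d" % idx, mp))
    else:
        print("warning: %s is not mounted" % mp)

for name, path in pools:
    print("would map pool %s to %s" % (name, path))
```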
Summary • UK Storage Group working with GridPP • Supporting SRM solutions in the UK • For Tier 1 and Tier 2, and anyone else • Most other communities are being steered towards SRB as part of a data management framework