
ScotGRID Report: Prototype for Tier-2 Centre for LHC

Akram Khan, on behalf of the ScotGRID Team (http://www.scotgrid.ac.uk)


Presentation Transcript


  1. ScotGRID Report: Prototype for Tier-2 Centre for LHC — Akram Khan, on behalf of the ScotGRID Team (http://www.scotgrid.ac.uk)

  2. Overview of Talk
  • What are we hoping to do?
  • Hardware / Operation
  • Future Plans
  • Misc Bits
  • Summary & Outlook

  3. 2000: JREI Bid — The LHC Computing Challenge for Scotland
  Never forget the spirit of the project: the JREI funds will make it possible to commission and fully exercise a prototype LHC computing centre in Scotland.
  The Centre would provide:
  • Technical services for the Grid (GIIS, VO services, …)
  • A DataStore to handle samples of data for particle-physics analysis
  • Significant simulation production capability
  • Excellent network connections to RAL and regional sites
  • Support for Grid middleware development with CERN and RAL
  • Support for core software development within LHCb and ATLAS
  • Support for user applications in other scientific areas
  This will enable us to answer:
  • Is the Grid a viable solution to the LHC computing challenge?
  • Can a two-site Tier-2 centre be set up and operated effectively?
  • How should the network topology between Edinburgh, Glasgow, RAL and CERN be organised?

  4. ScotGRID: Glasgow / Edinburgh
  • 59 x330 dual PIII 1 GHz / 2 GB compute nodes
  • 2 x340 dual PIII 1 GHz / 2 GB head nodes
  • 3 x340 dual PIII 1 GHz / 2 GB storage nodes, each with 11 x 34 GB in RAID 5
  • 1 x340 dual PIII 1 GHz / 0.5 GB masternode
  • 1 xSeries quad Pentium Xeon 700 MHz / 16 GB server
  • 1 FAStT 500 controller
  • 7 disk arrays of 10 x 73 GB disks
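A quick sanity check of the capacities implied by the hardware list above. The disk counts are taken straight from the slide; the assumption that one disk per 11-disk RAID-5 array goes to parity is ours, not stated in the talk.

```python
# Rough capacity estimates for the ScotGRID hardware listed above.
# Disk counts come from the slide; the RAID-5 overhead model (one
# parity disk per array) is an assumption for illustration.

def raid5_usable(n_disks, disk_gb):
    """Usable capacity of a RAID-5 array: one disk's worth goes to parity."""
    return (n_disks - 1) * disk_gb

# Glasgow: 3 storage nodes, each 11 x 34 GB in RAID 5
glasgow_gb = 3 * raid5_usable(11, 34)

# Edinburgh: 7 disk arrays of 10 x 73 GB (raw, ignoring RAID overhead)
edinburgh_gb = 7 * 10 * 73

print(f"Glasgow usable storage : ~{glasgow_gb} GB")
print(f"Edinburgh raw storage  : ~{edinburgh_gb} GB (~{edinburgh_gb / 1000:.1f} TB)")
```

The Edinburgh figure of ~5.1 TB raw is consistent with the "4.6 TB" usable and "5 TBytes" quoted on later slides.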

  5. ScotGRID - Glasgow

  6. ScotGRID: Glasgow - Schematic
  [Diagram: masternode, head nodes and storage nodes on a private VLAN (10.0.0.0); compute nodes on a 1000 Mbps VLAN; 100 Mbps campus-backbone bottleneck to the Internet]

  7. ScotGRID: Edinburgh - Schematic
  [Diagram: server (4 x Pentium Xeon, 16 GB RAM) connected via a FAStT 500 storage controller to disk arrays (total 4.6 TB), linked to the SRIF network]

  8. Towards a Prototype Tier-2
  [Timeline chart, by quarter, 2001-2005; JREI proposal: 2000; ScotGRID delivery of kit: Dec 2001]
  Glasgow (MC farm): xCAT tutorial and first attempt on the masternode; ScotGRID room handed over to builders; building work complete; xCAT reinstall; user registration and trial production; group disk (re)organisation to match projects.
  Edinburgh (DataStore): configuring disk arrays; installation of software; reconfiguring kernel drivers for the FAStT storage controller; upgrade of the storage controller; user registration; group disk (re)organisation to match projects.

  9. ScotGRID 1st Year Review — ScotGrid Meeting at IBM Briefing Centre (Greenock), Friday 10th Jan
  09:45 Arrive, coffee
  10:00-10:15 Welcome (Freddie Moran)
  10:15-10:35 ScotGrid Introduction (Tony Doyle)
  10:35-10:50 Technical Status Overview (Akram Khan)
  10:50-11:05 Cluster Operations (David Martin)
  11:05-11:30 Coffee
  11:30-11:50 ScotGrid Upgrade Plans (Steve Playfer)
  11:50-13:00 IBM IT Briefing Discussion
  13:00-14:00 Lunch
  14:00-14:30 IBM IT Briefing Discussion
  14:40-14:55 Grid Data Management simulations (David Cameron)
  15:10-15:30 Tea
  Particle Physics Applications:
  15:30-15:45 ATLAS (John Kennedy)
  15:45-16:00 LHCb (Akram Khan)
  16:00-16:15 BABAR (Steve Playfer)
  16:15-16:30 CDF (Rick St Denis)
  A complete success, as you will see!

  10. ScotGRID Statistics
  Storage space in ScotGRID used by each group: Glasgow (600 GB), Edinburgh (5 TB).

  11. ScotGRID: CPU Usage, 24/6/02 - 6/1/2003
  Percentage use by each group over the previous weeks. Visible features: the startup phase, the Christmas period, and different applications.

  12. Forward Look: Introduction
  • The ScotGrid JREI project includes a mid-term hardware upgrade.
  • As part of GridPP planning, we need to upgrade from Prototype to Production Tier-2 status by 2004.
  • JREI funding left to be spent by June 2003: Edinburgh £220k, Glasgow £30k (total £250k).

  13. Forward Look: Possible Upgrade Plan?
  • Edinburgh kit: IBM eServer xSeries 440, 8 x Xeon (1.9 GHz), scalable configuration
  • Glasgow: dual FAStT700 + 20-32 TB

  14. Forward Look: Front-End Grid Servers
  We would like to install Grid software on dedicated (modest-sized) servers, decoupling the Grid software from the compute and storage hardware.
  • Front end for an EDG-style compute engine (LCFG)
  • Front end for an EDG-style storage engine
  • An overall ScotGrid front end to arbitrate the Grid services being requested?
  Will there be a standard configuration for Grid access to Tier-2 sites? (RLS / SlashGrid)

  15. Towards a Production Tier-2 & Beyond
  [Timeline chart, by quarter, 2001-2005: delivery of more kit; end of JREI funding; start of ScotGRID-II; start of GridPP-II; production Tier-2 site; links to other applications; future upgrades?]

  16. Technical Support Group
  Core members of the group, plus others invited to discuss wider issues:
  Core: Akram Khan (chair, Edinburgh); David Martin (sysadmin, Glasgow); Roy de Ruiter-Koelemeiger (sysadmin, Edinburgh); Gavin McCance (EDG, Glasgow); RA post (EDG, Edinburgh)
  Invited: Paul Mitchell (sysadmin, Edinburgh); Alan J. Flavell (networking, Glasgow); Steve Traylen (EDG, RAL); IBM team
  See the "technical group" page at http://www.scotgrid.ac.uk/
  Support is a real issue: we are just about OK now, but what about a production Tier-2?

  17. Networking
  [Diagram: Glasgow and Edinburgh campuses at 1 Gb/s each, 2.5 Gb/s backbone; packet filtering; all University traffic]
  Traceroute (Glasgow to Edinburgh):
  194.36.1.1 (194.36.1.1)  1.479 ms  0.743 ms  0.558 ms
  130.209.2.1 (130.209.2.1)  2.343 ms  0.678 ms  0.577 ms
  130.209.2.118 (130.209.2.118)  0.577 ms  0.322 ms  0.454 ms
  glasgow-bar.ja.net (146.97.40.105)  0.564 ms  0.305 ms  0.341 ms
  po9-0.glas-scr.ja.net (146.97.35.53)  0.546 ms  0.544 ms  0.465 ms
  po3-0.edin-scr.ja.net (146.97.33.62)  1.644 ms  1.471 ms  1.634 ms
  po0-0.edinburgh-bar.ja.net (146.97.35.62)  1.509 ms  1.474 ms  1.400 ms
  146.97.40.62 (146.97.40.62)  1.622 ms  1.493 ms  1.518 ms
  vlan686.kb5-msfc.net.ed.ac.uk (194.81.56.58)  2.084 ms  2.528 ms  1.869 ms
  129.215.255.242 (129.215.255.242)  1.851 ms  1.828 ms  1.624 ms
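The hop lines above follow the standard traceroute format (hostname, IP in parentheses, three RTT probes). As a small illustration, a parser for that format can be sketched as follows; the sample line is taken from the trace above.

```python
import re

# Parse traceroute-style hop lines into host, IP, and average RTT.
# The regex matches the "host (ip)  t1 ms  t2 ms  t3 ms" layout shown above.
HOP_RE = re.compile(r"^\s*(\S+)\s+\(([\d.]+)\)((?:\s+[\d.]+\s+ms){3})")

def parse_hop(line):
    m = HOP_RE.match(line)
    if not m:
        return None
    host, ip, rtts = m.group(1), m.group(2), m.group(3)
    times = [float(t) for t in re.findall(r"([\d.]+)\s+ms", rtts)]
    return {"host": host, "ip": ip, "avg_ms": sum(times) / len(times)}

hop = parse_hop("po9-0.glas-scr.ja.net (146.97.35.53)  0.546 ms  0.544 ms  0.465 ms")
print(hop)
```

Run over all ten hops, this recovers the sub-millisecond latencies within each campus and the ~1 ms step across the Glasgow-Edinburgh backbone.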

  18. EDG Middleware: Replica Optimiser Simulation
  • Using ScotGrid for large-scale simulation runs
  • Uses ~15 MB memory for ~60 threads
  • 2-12 hours per simulation
  • Results to appear in IJHPCA 2003
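The slide above mentions running ~60 threads per simulation job. As a toy illustration only, many independent runs can be driven from a thread pool like this; the `simulate()` body and all its parameters are invented stand-ins, not the actual EDG optimiser code.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of a many-threaded simulation campaign (assumed structure,
# not the real replica optimiser). Each seeded run counts "local" file
# accesses under an invented 70% replica-hit probability.

def simulate(seed):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(1000) if rng.random() < 0.7)
    return hits / 1000.0

# ~60 workers mirrors the thread count quoted on the slide.
with ThreadPoolExecutor(max_workers=60) as pool:
    hit_rates = list(pool.map(simulate, range(8)))

print([round(r, 2) for r in hit_rates])
```

Seeding each run makes the campaign reproducible, which matters when individual simulations take 2-12 hours.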

  19. BaBar: Monte Carlo Production (SP4)
  • ScotGrid (= Edinburgh): 8 million events in 3 weeks
  • Expect to import some streams/skims to Edinburgh in 2003
  • After the upgrade to ~30 TB there may be interest in using ScotGrid to add to the storage available at the RAL Tier-A site

  20. LHCb: Production Centres (events produced)
  • CERN (932k) and Bologna (857k)
  • RAL (471k)
  • Imperial College and Karlsruhe (437k)
  • Lyon (202k)
  • ScotGrid (194k)
  • Cambridge (100k)
  • Bristol (92k)
  • Moscow (87k)
  • Liverpool (70k)
  • Barcelona (56k)
  • Rio (32k)
  • CESGA (28k)
  • Oxford (25k)
  We can be confident about the TDR production: with the current configuration we can produce 10 million events in 56 days (March-April 2003). Included in a draft of the LHCC document: B0 → J/ψ K0s.
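A quick check of the numbers on this slide: ScotGrid's share of the listed production, and the sustained rate the 10-million-events-in-56-days target implies. All figures come from the slide itself.

```python
# Event counts (in thousands) as listed on the slide.
centres = {
    "CERN": 932, "Bologna": 857, "RAL": 471, "IC+Karlsruhe": 437,
    "Lyon": 202, "ScotGrid": 194, "Cambridge": 100, "Bristol": 92,
    "Moscow": 87, "Liverpool": 70, "Barcelona": 56, "Rio": 32,
    "CESGA": 28, "Oxford": 25,
}

total_k = sum(centres.values())
share = centres["ScotGrid"] / total_k

# TDR target quoted on the slide: 10 million events in 56 days.
events_per_day = 10_000_000 / 56

print(f"Total production: {total_k}k events; ScotGrid share: {share:.1%}")
print(f"Required rate for the TDR target: ~{events_per_day:,.0f} events/day")
```

This puts ScotGrid around 5% of the listed production, the sixth-largest centre behind the Tier-1-scale sites.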

  21. Summary and Outlook
  An exciting time for ScotGRID: a lot of effort has gone into getting ScotGRID up and operational over the past year, and we have learnt many tricks!
  Operational prototype centre:
  • We have an operational centre
  • Meeting the short-term needs of the applications with modest resources (HEP + middleware + non-PP)
  • Proof of principle for Tier-2 operation (pre-grid)
  There is still a lot to be done:
  • A full production system (24x7) (opt-grid)
  • Prototype various architectural solutions for a Tier-2
  • Look towards upgrades with a view to the LHC timetable
  Support and resources are a real issue for the near-term future (Q1 2004).

  22. RLS Architecture
  A Replica Location Service (RLS) is a system that maintains and provides access to information about the physical locations of copies of data items.
  • Local Replica Catalogues (LRCs) sit on the Storage Elements at Glasgow, Edinburgh and CERN
  • Replica Location Indices (RLIs) index the LRCs
  • An RLI may index the full namespace (all LRCs) or only a subset of LRCs
  • An LRC may be indexed by a single RLI, or multiply indexed for higher availability
  People: Gavin McCance, Alasdair Earl, Akram Khan (starting Feb)
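The LRC/RLI split described above can be sketched in a few lines. This is an illustrative data-structure sketch only: the class and method names, and the `gsiftp://` hostnames, are invented for the example and are not the actual EDG middleware API.

```python
# Minimal sketch of the RLS structure: LRCs map logical file names to
# physical replicas at one site; an RLI indexes a set of LRCs.
# All names and URLs below are illustrative assumptions.

class LRC:
    """Local Replica Catalogue for one Storage Element."""
    def __init__(self, site):
        self.site = site
        self.replicas = {}  # logical file name -> set of physical locations

    def register(self, lfn, pfn):
        self.replicas.setdefault(lfn, set()).add(pfn)

class RLI:
    """Replica Location Index over a set of LRCs (full or subset namespace)."""
    def __init__(self, lrcs):
        self.lrcs = lrcs

    def locate(self, lfn):
        # Query every indexed LRC; a real RLI holds a summarised index
        # instead of asking each catalogue directly.
        return {lrc.site: sorted(lrc.replicas[lfn])
                for lrc in self.lrcs if lfn in lrc.replicas}

glasgow, edinburgh = LRC("Glasgow"), LRC("Edinburgh")
glasgow.register("lfn:bbbar.dst", "gsiftp://se.gla.scotgrid.ac.uk/data/bbbar.dst")
edinburgh.register("lfn:bbbar.dst", "gsiftp://se.ed.scotgrid.ac.uk/store/bbbar.dst")

rli = RLI([glasgow, edinburgh])
print(rli.locate("lfn:bbbar.dst"))
```

Registering the same LRC with more than one RLI is what the slide means by "multiply indexed for higher availability": any surviving RLI can still resolve the logical name.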
