User Board
Glenn Patrick, GridPP20, 11 March 2008
Tier 1: Non-Grid Access
• Classical PBS/qsub access to Tier 1 restricted on 21 Feb. Access to UI also restricted.
• List of exclusions agreed through UB (identifiers per experiment):
  ATLAS 4, BABAR 9, CALICE 2, CMS 7, Dteam 1, LHCb 4, MINOS 24 (reduces to 8 three months after a working Castor), TOTAL 51
• Includes some AFS accounts.
Tier 1 Squeeze – 2008/Q1
CPU
• March CPU capacity = 1439 KSI2K; March requests = 2640 KSI2K.
• CPU over-allocated for 2008/Q1. Pain spread by fairshare system.
[Pie chart: 2008/Q1 CPU allocation, with slices for ATLAS, ALICE and headroom]
DISK
• March disk capacity = 922 TB; March disk requests = 920.8 TB; March headroom = 1.2 TB.
• ATLAS/CMS/LHCb got 70% of their request.
• BaBar reduced from 100 TB to ~41 TB.
• All other experiments frozen until March.
• "Special measures" taken through the quarter.
• Living dangerously!
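The over-allocation above means the fairshare system has to scale every experiment below its request. A minimal sketch (in Python) of proportional fairshare scaling, assuming a made-up per-experiment split that only sums to the quoted 2640 KSI2K; this is an illustration, not the actual RAL batch configuration:

# Illustrative proportional fairshare scaling. Only the totals
# (2640 KSI2K requested vs 1439 KSI2K capacity) come from the slide;
# the per-experiment split below is hypothetical.
CAPACITY_KSI2K = 1439.0

requests_ksi2k = {
    "ATLAS": 900.0,
    "CMS": 800.0,
    "LHCb": 450.0,
    "BaBar": 300.0,
    "ALICE": 90.0,
    "others": 100.0,
}

total_requested = sum(requests_ksi2k.values())      # 2640 KSI2K
scale = min(1.0, CAPACITY_KSI2K / total_requested)  # ~0.55 for 2008/Q1

for exp, req in requests_ksi2k.items():
    share = req * scale
    print(f"{exp:>6}: requested {req:6.1f}, fairshare {share:6.1f} KSI2K ({scale:.0%})")

In a typical fairshare scheduler these shares act as long-run usage targets rather than hard per-job caps, which is how the "pain" is spread across experiments.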
Tier 1 Disk Squeeze
[Pie chart of disk allocations: ATLAS 291 TB, CMS 242 TB, LHCb 116 TB, BaBar 49 TB, ALICE 5.9 TB]
Tier 1 CPU Fairshares
[Two pie charts: allocated shares (ALICE, ATLAS, BaBar, CMS, LHCb) versus reality (ATLAS, BaBar, CMS, LHCb)]
LHC approaches!
[Press cutting dated Friday 7 March 2008]
• CMS Plenary, 25 Feb: machine cold by 1 June? Protons could be injected by mid-June.
HEALTH WARNING! Plus others…
Weighing things up…
• Need to get to robust running of LHC experiments (plus others).
• Assume CCRC08 covered in other talks, but still some way to go on this.
[Plot: Tier 1 jobs yesterday, by experiment (ATLAS, BaBar, CMS)]
Tier 1 - 2008 Ramp Up
[Chart: Current Allocated vs Total Dec 2008 Request (CPU 67%, Tape 54%, Disk 39% allocated)]
• Latest procurement should satisfy all experiment 2008 requests if they don't change.
• Need to worry about 2009 now.
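A rough illustration of what those fractions imply for the remaining 2008 procurement. The request totals in this sketch are placeholders, not GridPP figures; only the 67%/54%/39% allocated fractions come from the slide:

# Placeholder 2008 request totals; only the allocated fractions are from the talk.
requests_2008 = {
    "CPU (KSI2K)": 4000.0,   # hypothetical
    "Tape (TB)": 3000.0,     # hypothetical
    "Disk (TB)": 2500.0,     # hypothetical
}
allocated_fraction = {"CPU (KSI2K)": 0.67, "Tape (TB)": 0.54, "Disk (TB)": 0.39}

for resource, request in requests_2008.items():
    gap = request * (1.0 - allocated_fraction[resource])  # still to be procured
    print(f"{resource}: {allocated_fraction[resource]:.0%} allocated, "
          f"remaining ~{gap:.0f} to reach the 2008 request")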
dCache – Castor2 Migration
Timeline
• 20 June: at the UB meeting it was agreed that 6 months' notice be given for dCache termination.
• 26 November: proposal to terminate dCache by end of May.
Castor Data Going to be Tight…
• LHCb – all disk data migrated and 60% of tape data.
• ATLAS – disk and tape migration ongoing (?). On 20 Feb, 12 TB trimmed from CMS allocation to help ATLAS migration (270 TB allocation + 20 TB).
• ALICE – updated request received 25 Jan; allocated one server on 6 February. Need xrootD plug-in.
• MINOS – agreed 3 month period from the date of a working Castor instance.
User Support Posts - GridPP3
Janusz Martyniak (Imperial) = 50% FTE
• Ex-portal post. Technical assistance with Grid-related software, interfacing experiments to middleware, development of tools, etc.
• First priority is to help smaller non-LHC experiments get established on the Grid. LHC projects, generic PP tools (e.g. Ganga) and KE with non-HEP VOs are also possible work areas.
• Experiments bid through the UB chair for support. MICE (Ganga and LFC) already approved and some SuperNemo work (LFC).
Stephen Burke (RAL) = 50% FTE
• Documentation post. Focussed on immediate and short-term issues. For example, helping answer technical enquiries (outside the ticket system), trouble-shooting user/VO problems, locating suitable documentation, etc.
The End (and The Start)
GridPP2 → GridPP3