US LHC Tier-2s, on behalf of US ATLAS, US CMS, OSG. Ruth Pordes, Fermilab, Nov 17th, 2007. Supported by the Department of Energy Office of Science SciDAC-2 program from the High Energy Physics, Nuclear Physics and Advanced Software and Computing Research programs, and the National Science Foundation Math and Physical Sciences, Office of CyberInfrastructure and Office of International Science and Engineering Directorates.
Issues
• The fast ramp-up stresses the purchasing and operational teams.
• ATLAS targets for 2010 and 2011 are not met by current plans.
All US LHC Tier-2s are part of OSG
• Experiments are responsible for their end-to-end systems.
• The Operations Center dispatches and "owns" problems until they are solved.
• OSG Activities provide common forums for software technical and operational issues.
• ATLAS and CMS software and services are installed on sites through Grid interfaces.
• Batch queue configurations ensure priority for ATLAS and CMS jobs (a configuration sketch follows below).
[OSG software stack diagram. Applications: user science codes and interfaces. VO Middleware: Astrophysics (data replication etc.), Biology (portals, databases etc.), HEP (data and workflow management etc.). Infrastructure: OSG Release Cache (OSG-specific configurations, utilities etc.) and the Virtual Data Toolkit (VDT), core Grid technologies plus stakeholder needs (Condor, Globus, MyProxy, accounting, authz, monitoring, VOMS and others), shared with and supporting EGEE and TeraGrid. Resource: existing operating systems, batch systems and utilities.]
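To make the batch-priority point concrete, the sketch below shows one plausible way to do it with Condor group quotas. It is illustrative only: the group names, quota numbers, and config-file path are assumptions, not the actual Tier-2 settings.

```
# Sketch: reserve batch slots for ATLAS and CMS via Condor group quotas.
# All names, numbers, and paths here are hypothetical.
cat >> /etc/condor/condor_config.local <<'EOF'
GROUP_NAMES = group_atlas, group_cms
GROUP_QUOTA_group_atlas = 400    # slots guaranteed to ATLAS jobs
GROUP_QUOTA_group_cms = 400      # slots guaranteed to CMS jobs
GROUP_AUTOREGROUP = True         # idle slots may be claimed by other groups
EOF
condor_reconfig  # make the running daemons re-read the configuration
```

Jobs would then tag themselves in the submit file (e.g. +AccountingGroup = "group_atlas.someuser"); with GROUP_AUTOREGROUP enabled, each collaboration can soak up the other's idle capacity, consistent with the resource sharing noted in the summary.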
US LHC Tier-2s are fully integrated into the experiments
• All sites are funded through the US NSF research program, except for DOE support for SLAC.
• They provide Monte Carlo processing, host analysis and MC data, and supply CPU.
• They distribute data to/from the Tier-1s and provide analysis centers for Tier-3-and-beyond physicists.
• All Tier-2s are successfully reporting to the WLCG Tier-2 accounting reports.
• US ATLAS and US CMS Tier-2s are contributing at least their share to the ATLAS analysis challenge and the CMS Challenge for Software and Analysis (CSA07).
ATLAS report - Michael Ernst, Rob Gardner
• Robust data distribution supported by the BNL Tier-1.
• Support for the PanDA pilot-job infrastructure, with DQ2 servers hosted locally or remotely.
• The Athena analysis framework is available locally.
• The Facility Integration program provides a forum for Tier-2 administrators to communicate and adopt common solutions.
• Computing Integration and Operations meetings and mailing lists are effective forums.
• Semi-annual Tier-2 workshops help newer Tier-2s get up to speed more quickly.
• A mix of dCache- and xrootd-based storage elements.
ATLAS Concerns
• Performance and scalability of dCache for analysis I/O needs.
• End-to-end performance of the data distribution and management tools.
The ATLAS Tier-1 distributes data to the Tier-2s
• Data distribution is driven by Tier-2 processing and analysis needs, e.g. BNL to University of Chicago data distribution (a transfer sketch follows below).
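For concreteness, one such transfer could be issued with the dCache SRM client as sketched below. The endpoints and paths are hypothetical placeholders; in practice the DQ2 subscription machinery drives these transfers automatically rather than hand-typed commands.

```
# Hypothetical third-party SRM copy from the BNL Tier-1 storage to a
# Tier-2 dCache; hostnames and PNFS paths are placeholders only.
srmcp \
  "srm://srm.bnl.example.gov:8443/pnfs/bnl.example.gov/atlas/aod/file.root" \
  "srm://srm.uchicago.example.edu:8443/pnfs/uchicago.example.edu/atlas/aod/file.root"
```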
[Charts: ATLAS jobs by region, US share 33%; walltime efficiency, UTA 96%.]
US CMS - report from Ken Bloom, Tier-2 coordinator
• Funding of $500K/site provides 2 FTE/site for support.
• Sites have site-specific configurations (e.g. different batch systems) but share common approaches.
• All use dCache to manage the storage; 1 FTE (of the 2) per site is needed for this support (see the sketch below).
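As a flavor of what that dCache FTE supports, a simple user-level read through a site's dcap door is sketched below; the door host and namespace path are hypothetical, and 22125 is just the conventional dcap port.

```
# Hypothetical read of a CMS file from a Tier-2 dCache via the dcap door;
# host and /pnfs path are placeholders, not a real site's namespace.
dccp dcap://dcache-door.example.edu:22125/pnfs/example.edu/cms/store/file.root \
     /tmp/file.root
```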
CMS Concerns
• Robustness and performance of the Tier-1 sites hosting data: all Tier-1s serve data to the US Tier-2s, and the majority of CMS data will live across an ocean, so reliability is crucial.
• Will the grid be sufficiently robust when we get to analysis? Can user jobs get to the sites, and can the output get back?
• Are we being challenged enough in advance of 2,000 users showing up?
Summary
• US LHC Tier-2s are full participants in the US ATLAS, US CMS and OSG organizations.
• Communication between the collaborations and the projects is good.
• The two collaborations use each other's resources when capacity is available.
• Mechanisms are in place so that priorities don't get inverted.
• The Tier-2s are ready to contribute to CCRC and data commissioning.