CMS-HI Offline Computing
Charles F. Maguire, Vanderbilt University
For the CMS-HI US Institutions
Draft Version 3, May 12, 1:05 PM CDT
CMS-HI at the LHC for High Density QCD with Heavy Ions
• LHC: New Energy Frontier for Relativistic Heavy Ion Physics
  • Quantum Chromodynamics at extreme conditions (density, temperature, …)
  • Pb+Pb collisions at 5.5 TeV, a collision energy nearly thirty times that of Au+Au at RHIC
  • Expecting a longer-lived Quark Gluon Plasma state accompanied by much enhanced yields of hard probes with high mass and/or transverse momentum
• CMS: Excels as a Heavy Ion Collisions Detector at the LHC
  • Sophisticated high level trigger for recording rare, important events at a rapid rate
  • Best momentum resolution and tracking granularity
  • Large acceptance tracking and calorimetry --> proven jet finding in HI events
CMS-HI in the US
• 10 Participating Institutions
  • Colorado, Iowa, Kansas, LANL, Maryland, Minnesota, MIT, UC Davis, UIC, Vanderbilt
  • Projected to contribute ~60 FTEs (Ph.D. scientists and students) as of 2012
  • MIT as lead US institution, with Boleslaw Wyslouch as Project Manager
• US-CMS-HI Tasks (Research Management Plan in 2007)
  • Completion of the HLT CPU farm upgrade for HI events at CMS (MIT)
  • Construction of the Zero Degree Calorimeter at CMS (Kansas and Iowa)
  • Development of the CMS-HI compute center in the US (task force established)
    • Task force recommended that the Vanderbilt University group lead the proposal composition
    • Also want to retain the expertise developed by the MIT HI group at their CMS Tier2 site
• CMS-HI Compute Proposal to DOE is Due This Month
  • Will be reviewed by ESnet managers, and also by external consultants
Basis of Compute Model for CMS-HI
• Heavy Ion Data Operations for CMS at the LHC
  • Heavy ion collisions expected in 2009, the second year of running for the LHC
  • Heavy ion running takes place during a 1-month period (10^6 seconds)
  • At the designed DAQ bandwidth the CMS detector will write 225 TBytes of raw data per heavy ion running period, plus ~75 TBytes of support files (see the rate sketch after this slide)
  • Raw data will likely stay resident on CERN Tier0 disks for a few days at most, while transfers take place to the CMS-HI compute center in the US
    • Possibility of having 300 TBytes of disk dedicated to HI data (NSF REDDnet project)
  • Raw data will not be reconstructed at the Tier0, but will be written to a write-only (emergency archive) tape system before deletion from Tier0 disks
• Projected Data Volumes 2009 - 2011 (optimistic scenario)
  • 50 - 100 TBytes in the first year of HI operations
  • 100 - 200 TBytes in the second year of HI operations
  • 300 TBytes nominal size achieved in the third year of HI operations
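A back-of-envelope sketch of what these volumes imply for the network, assuming decimal terabytes and the nominal 10^6 s running period quoted above (illustrative arithmetic only, not part of the proposal):

# Illustrative only: average rate needed to move one heavy-ion run's output
# (225 TB raw + ~75 TB support files) off the CERN Tier0 within the ~10^6 s
# running period itself. Decimal terabytes are assumed.

RUN_SECONDS = 1.0e6          # one heavy-ion running period (~1 month)
RAW_TB = 225.0               # raw data per running period
SUPPORT_TB = 75.0            # support files per running period

def average_gbps(volume_tb, seconds):
    """Average rate in Gbps needed to move volume_tb terabytes in 'seconds' seconds."""
    bits = volume_tb * 1e12 * 8      # decimal TB -> bits
    return bits / seconds / 1e9      # bits/s -> Gbps

total_tb = RAW_TB + SUPPORT_TB
print("%.0f TB in 1e6 s -> %.1f Gbps average" % (total_tb, average_gbps(total_tb, RUN_SECONDS)))
# ~2.4 Gbps sustained: well within a 10 Gbps link, but only with steady utilization

Sustained throughput at this level is what allows the few-day Tier0 residency quoted above; the transport options on the following slides address how to provide it.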
Data Transport Options for CMS-HI: Following ESnet-NP 2008 Workshop Recommendations
• CMS-HEP Raw Data Transport from CERN to FNAL Tier1
  • Uses LHCnet to cross the Atlantic
  • LHCnet terminates at the Starlight hub
  • ESnet transport from Starlight into the FNAL Tier1 center
  • Links are rated at 10 Gbps
• CMS-HI Raw Data Transport from CERN to the US Compute Center
  • Network topology has not been established at this time
  • Vanderbilt is establishing a 10 Gbps path to SOX-Atlanta by the end of 2008
• Network Options (DOE requires a non-LHCnet backup plan; a rough rate comparison follows this slide)
  • Use LHCnet to Starlight during the one month when HI beams are being collided; transport data from Starlight to the Vanderbilt compute center via ESnet/Internet2; transfer links will still be rated at 10 Gbps to transfer the data within ~1 month
  • Do not use LHCnet, but use other trans-Atlantic links supported by NSF, with links rated at 10 Gbps such that the data are transferred over ~1 month
  • Install a 300 TByte disk buffer at CERN and use non-LHCnet trans-Atlantic links to transfer the data over 4 months (US-ALICE model) at ~2.5 Gbps
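A minimal sketch comparing the transfer windows above, assuming the 300 TByte nominal yearly volume and decimal terabytes; the 10 Gbps and ~2.5 Gbps figures in the slide are link ratings, so the gap between them and these computed averages is effectively the margin for protocol overhead, retransfers, and competing traffic:

# Illustrative only: minimum sustained rate to move a 300 TB heavy-ion dataset
# within a ~1 month window (LHCnet or NSF trans-Atlantic options) versus a
# ~4 month window (US-ALICE style disk-buffer option). Decimal TB assumed.

SECONDS_PER_MONTH = 30 * 24 * 3600   # ~2.6e6 s

def min_average_gbps(volume_tb, months):
    """Minimum average rate (Gbps) to move volume_tb within 'months' months."""
    return volume_tb * 1e12 * 8 / (months * SECONDS_PER_MONTH) / 1e9

for label, months in (("~1 month window", 1), ("~4 month window", 4)):
    print("%s: %.2f Gbps average for 300 TB" % (label, min_average_gbps(300, months)))
# ~0.93 Gbps averaged over one month, ~0.23 Gbps averaged over four months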
Data Transport Options for CMS-HI: Following ESnet-NP 2008 Workshop Recommendations
• Figures for Network Topology into Vanderbilt via SOX-Atlanta
  • To be available on Tuesday May 13 at the latest
Data Transport Issues for CMS-HI: Following ESnet-NP 2008 Workshop Recommendations
• To Use LHCnet or Not To Use LHCnet
  • Use of LHCnet to the US, following the CMS-HEP path, is the simplest approach
  • A separate trans-Atlantic link will require dedicated CMS-HI certifications
  • DOE requires that a non-LHCnet plan be discussed in the CMS-HI compute proposal
• Issues With the Use of LHCnet by CMS-HI (a rough headroom sketch follows this slide)
  • ESnet officials believe that LHCnet is already fully subscribed by FNAL
  • The HI month was supposed to be used for transferring the final sets of HEP data
  • FNAL was quoted as having only 5% “headroom” left in its use of LHCnet
  • The HI data volume is 10% of the HEP data volume
• Issues With the Non-Use of LHCnet by CMS-HI
  • Non-use of LHCnet would mean a new, unverified path for data out of the LHC
  • CERN computer management would have to approve (same for US-ALICE)
  • Installing a 300 TByte disk buffer system at CERN (ESnet recommendation) would also have to be approved and integrated into the CERN Tier0 operations
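One way to see why the 5% headroom figure matters, as a rough sketch only: if the headroom is read as spare bandwidth on a 10 Gbps LHCnet link (both the link rating and that reading are assumptions here, not statements from FNAL or ESnet), it can be compared with the average rate CMS-HI would need during the heavy-ion month for the nominal 300 TByte volume:

# Hypothetical headroom check. Assumes "5% headroom" means spare bandwidth on a
# 10 Gbps LHCnet link and uses the nominal 300 TB yearly heavy-ion volume.

LHCNET_GBPS = 10.0                   # assumed LHCnet link rating
HEADROOM_FRACTION = 0.05             # "5% headroom" quoted for FNAL
HI_VOLUME_TB = 300.0                 # nominal yearly heavy-ion volume
MONTH_SECONDS = 30 * 24 * 3600       # ~2.6e6 s transfer window

spare_gbps = LHCNET_GBPS * HEADROOM_FRACTION
needed_gbps = HI_VOLUME_TB * 1e12 * 8 / MONTH_SECONDS / 1e9

print("spare ~%.1f Gbps vs ~%.1f Gbps needed on average" % (spare_gbps, needed_gbps))
# ~0.5 Gbps spare vs ~0.9 Gbps needed: tight unless HEP transfers really do wind
# down during the heavy-ion month, which is the crux of the LHCnet question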
Tape Archive for CMS-HI? Raw Data and Processed Data for Each Year
• US Raw Data Tape Archive for CMS-HI: FNAL or Vanderbilt?
  • Original suggestion was that the raw data archive be at Vanderbilt
    • Vanderbilt already has a tape robot system, which can be expanded by a further 1.5 PBytes
  • Possibly more economical to archive the raw (and processed) data at FNAL?
    • Savings would result from not needing a person at VU dedicated to tape service
    • Cost savings translate into more CPUs for the CMS-HI compute center
• Issues for the Tape Archive Decision
  • Is there really a cost savings as anticipated [will be quoting actual $ number]?
  • What specific new burdens would be assumed at FNAL?
    • Serving raw data twice per year to Vanderbilt for reconstruction (see next slide)
    • Receiving processed data annually for archiving into the tape system
  • Is the tape archive decision coupled with the use/non-use of LHCnet?
Data Processing for CMS-HI
• Data Processing Scenario
  • [Physics event choices, using Gunther’s tables]
  • [Reconstruction and analysis pass schedules]
  • [Role and access of other CMS-HI institutions]
Implementation of CMS-HI Compute Center
• Total Hardware Resources Ultimately Required
  • [Number of CPUs (SpecInt units)]
  • [Disk space and models of use]
  • [Tape space]
• Construction Scenario
  • [Plan to reach the totals above in 5 years]
  • [Compatibility with the expected data accumulation]
  • [Operations plan]