
CMS @ GridKa Summer 2010 T. Kress, G.Quast, A. Scheurer






Presentation Transcript


  1. CMS @ GridKa Summer 2010 – T. Kress, G. Quast, A. Scheurer

  GridKa took its share of CMS T1 computing (2 PB):
  • data import from T0 (250 TB of 2000 TB total)
  • MC import from T2 (200 TB of 2000 TB total)
  • data processing (typically 10% of jobs)
  • data export to T1 and T2 (900 TB of 8000 TB total)
  Since Aug. 2010 also MC production at the T1s, including GridKa.
  Some general ("cooling incident") and CMS-specific problems (dCache head node, re-configuration of VOMS roles) caused efficiency losses.
  Overall: the first LHC data-taking period was successfully mastered at GridKa! Keeping efficiency high requires careful and steady monitoring; support and fast reaction by the GridKa staff are essential.

  Status highlights:
  • Migration of data from the old to the new dCache instance finished on Nov. 23rd; almost 500,000 files (600 TB) copied
  • Data consistency check showed only minor problems
  • Import of the first LHC collision data without any problems
  • Smooth running over Christmas – low job activity, but ~30 TB import of MC from Tier-2s
  • Marian Zvada started his job as CMS admin on Feb. 1st, co-financed by GridKa and EKP (BMBF)
  • Frequent "GridKa monitoring shifts" taken up during the initial phase of LHC data taking
  • GridKa was ready for the first 7 TeV collision data; the first re-processing and data distribution campaign was very successful
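The quoted data volumes directly give GridKa's fractional share of each CMS-wide flow. A minimal sketch using only the numbers on the slide (the flow labels are illustrative shorthand, not CMS terminology):

```python
# GridKa's share of CMS-wide 2010 data flows, from the totals on the slide.
# Each entry: (GridKa volume in TB, CMS-wide total in TB).
flows = {
    "T0 data import": (250, 2000),
    "T2 MC import": (200, 2000),
    "export to T1/T2": (900, 8000),
}

for name, (gridka_tb, total_tb) in flows.items():
    share = 100.0 * gridka_tb / total_tb
    print(f"{name}: {gridka_tb} TB of {total_tb} TB = {share:.1f}%")
```

So each individual flow works out to roughly 10–13% of the CMS-wide volume, consistent with GridKa also running about 10% of the processing jobs.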

  2. D-CMS @ GridKa Summer 2010

  Jobs by German CMS users (D-CMS) play an increasingly significant role at GridKa:
  • Dedicated job slots for the VOMS role DCMS had long been in use
  • The German CMS computing team proposed and implemented a concept to use national resources @ GridKa
  • New:
  – tape and disk storage made available by GridKa
  – a TWiki for German CMS users set up
  – improved monitoring: data sets available on disk at GridKa are published
  – disk-only space to be used as temporary storage for local and remote job output for user analysis
  – tape storage to be accessed by German data admins only, to archive analysis data sets
  ~20% of CMS jobs in September were D-CMS (figure: national jobs @ GridKa in September)
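The dedicated slots hinge on the VOMS role: each job's grid proxy carries VOMS FQAN attributes, and a site can route jobs whose FQAN belongs to the DCMS group/role onto the national resources. A minimal sketch, assuming an illustrative `/cms/dcms` FQAN layout (the exact group/role structure of the CMS VO is not stated on the slide):

```python
# Hypothetical sketch: selecting jobs for the dedicated D-CMS slots by
# inspecting the VOMS FQAN from each job's proxy. FQANs are illustrative.
def is_dcms(fqan: str) -> bool:
    # Treat any FQAN under the (assumed) /cms/dcms group as a D-CMS job.
    return fqan == "/cms/dcms" or fqan.startswith("/cms/dcms/")

job_fqans = [
    "/cms/Role=NULL/Capability=NULL",       # ordinary CMS job
    "/cms/dcms/Role=NULL/Capability=NULL",  # D-CMS job -> dedicated slot
]
dcms_jobs = [f for f in job_fqans if is_dcms(f)]
```

In practice this mapping lives in the batch-system or CE configuration rather than in user code; the point is that the DCMS role, not the user's identity, selects the national job slots.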
