CMS Computing at TIFR (T2_IN_TIFR)
Gobinda Majumder, Kajari Mazumdar, Brij Kishore Jashal, Puneet Patel
We are in a New Era in Fundamental Science
The Large Hadron Collider (LHC), one of the largest and truly global scientific projects ever, is a turning point in particle physics.
[Figure: the LHC ring, 27 km in circumference, with the four main experiments ALICE, ATLAS, CMS and LHCb marked.]
Collisions at the Large Hadron Collider
• Beam energy: 6.5×10^12 eV (6.5 TeV per proton beam)
• Luminosity: 2.0×10^34 cm^-2 s^-1
• Bunches per beam: 3564 (2556 filled)
• Protons per bunch: 1.7×10^11
• Beam size at the collision point: ~5.5 cm longitudinal, ~15 μm transverse
Approximate rates:
• Bunch crossings: ~2.5×10^7 Hz
• Proton-proton collisions: ~2×10^9 Hz
• Parton-parton collisions: much rarer
• New-particle production (Higgs, SUSY, ...): << 1 Hz
[Event diagram: partons from the colliding protons produce heavy states (Z, H, SUSY particles) decaying to muons.]
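The ~2×10^9 Hz proton-collision rate above follows from the luminosity. As a cross-check, here is a minimal estimate, assuming an inelastic pp cross section of about 80 mb at 13 TeV (a typical value, not stated on the slide):

\[
R_{pp} = \mathcal{L}\,\sigma_{\mathrm{inel}}
\approx \left(2.0\times10^{34}\ \mathrm{cm^{-2}\,s^{-1}}\right)\times\left(8\times10^{-26}\ \mathrm{cm^{2}}\right)
\approx 1.6\times10^{9}\ \mathrm{Hz}.
\]

Dividing by the ~2.5×10^7 Hz bunch-crossing rate gives of order 50-60 overlapping pp collisions (pile-up) per crossing at this luminosity.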
Complexity of LHC experiments
When two very-high-energy protons collide at the LHC, many particles are produced in multiple parton-parton interactions. About 80 million electrical signals have to be recorded in a tiny fraction of a second, repeatedly for a long time (about 10 years). Using computers, a digital image is created for each such instance: in effect, a camera taking a picture every 25 ns, each picture reading out ~80 million channels. The image size is about 2 MB on average, but varies considerably. Most of these pictures are not interesting, though; good things are always rare! The outcome is ~15 PB of data in a year.
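The ~15 PB per year figure can be cross-checked with a rough estimate, assuming a trigger output rate of order 1 kHz and roughly 10^7 seconds of data taking per year (both ballpark assumptions, not taken from the slide):

\[
10^{3}\ \mathrm{events/s}\;\times\;2\ \mathrm{MB/event}\;\times\;10^{7}\ \mathrm{s/year}
\;\approx\; 2\times10^{10}\ \mathrm{MB} \;=\; 20\ \mathrm{PB/year},
\]

the same order of magnitude as the quoted ~15 PB (the exact figure depends on the trigger menu and the live time).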
The grid hierarchical model (1998)
• MONARC describes a hierarchy of sites and roles:
• Tier-0: where the data comes from and is first reconstructed
• Tier-1: national centres, meant for running simulation and for real-data reprocessing
• Tier-2: regional centres, meant for analysis
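The tier roles above can be summarised in a short, purely illustrative snippet; the Tier-1 example sites are well-known CMS centres added here for illustration and are not taken from the slide.

# Illustrative summary of the MONARC tier roles described above (Python).
MONARC_TIERS = {
    "Tier-0": {"example_sites": ["CERN"],
               "role": "where the data comes from and is first reconstructed"},
    "Tier-1": {"example_sites": ["FNAL", "KIT", "RAL"],
               "role": "national centres: simulation and real-data reprocessing"},
    "Tier-2": {"example_sites": ["T2_IN_TIFR"],
               "role": "regional centres: physics analysis"},
}

for tier, info in MONARC_TIERS.items():
    print(f"{tier}: {info['role']} (e.g. {', '.join(info['example_sites'])})")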
India-CMS Tier2 LHC grid computing center at TIFR
• Activities started in ~2005, but…
• By mid-2009, T2_IN_TIFR had commissioned links to all CMS T1s. During June-July 2009, the CMS production team used this T2 for a substantial amount of MC production in the Summer09 series. CMS management agreed to credit the TIFR groups for the T2 service, which is accounted against the mandatory service work. By early 2009, physics data started being hosted at T2_IN_TIFR, followed by physics analysis jobs run at our T2 on those data. Final analysis (e.g., presentable plots) is done at the T3 setup at TIFR. Physicists from other institutes are also given storage areas and user accounts on the T3.
Evolution of the GRID computing centre at TIFR
(Numbers in brackets are fractions of the total CMS resources.)
T2_IN_TIFR
• 14 server racks
• 100 kVA UPS (30 min backup) + isolation transformer
• Fire system
• Cooling
• Networking: 10G + 10G WAN links
India-CMS Grid computing center at TIFR: resources at present (2018)
• T2_IN_TIFR
  - Torque/PBS/CREAM-CE
  - DPM (Disk Pool Manager) storage
• T3_IN_TIFRCloud (dynamic-resources site)
  - HTCondor
  - MS Azure via GAHP (Grid ASCII Helper Protocol)
  - Combining other clusters (IISER, SINP, ……)
• Local T3 cluster
  - 200 cores, HTCondor
  - 200 TB dedicated user storage on NFS
• Pledged resources for 2019
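To illustrate how users interact with the local T3 cluster listed above, here is a minimal job-submission sketch using the HTCondor Python bindings, assuming bindings version 9 or later are installed on the login node; the wrapper script name and the resource requests are hypothetical.

import htcondor

# Describe a single analysis job for the local T3 HTCondor pool.
# "run_analysis.sh" is a hypothetical user wrapper script.
sub = htcondor.Submit({
    "executable": "run_analysis.sh",
    "arguments": "config.py",
    "output": "job.$(ClusterId).$(ProcId).out",
    "error":  "job.$(ClusterId).$(ProcId).err",
    "log":    "job.$(ClusterId).log",
    "request_cpus":   "1",
    "request_memory": "2GB",
})

schedd = htcondor.Schedd()            # scheduler of the local T3 pool
result = schedd.submit(sub, count=1)  # queue one job
print("Submitted cluster", result.cluster())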
Monte Carlo event generation + analysis jobs
• 9 billion events processed from Jan 2018 to Oct 2018 at T2_IN_TIFR + T3_IN_TIFRCloud.
• Run 2017: ~1 billion events processed by good jobs from 06-05-2017 to 06-06-2017.
Grid computing at TIFR: evolution of the network
• Major force behind the development of NKN and the Indian R&E network.
• 1G dedicated P2P link from TIFR to CERN (2009).
• Upgraded to 2G in 2012 and to 4G in 2014.
• Implemented a fall-back path using the 10G shared TEIN link to Amsterdam (2015).
• CERN P2P link upgraded to 8G (2015).
• Implemented LHCONE peering and L3VRF over NKN for all collaborating Indian institutes (2015-2016).
• Upgraded to a full 10G dedicated circuit to CERN (2017).
• NKN implemented a CERN PoP with a 10G link (2018).
• At present, 10G + 10G active links to the LHC network.
• TIFR was the first institute to have a 10G end point.
• Dedicated L3 peering to the US West Coast via Singapore and Amsterdam.
• Network for Run 3 => ~40G international circuit.
India-LHC L3 VPN on NKN
~100 active user accounts from collaborating Indian institutes.
• Collaborating Indian institutes connected on NKN:
• TIFR, Mumbai (CMS WLCG site)
• VECC, Kolkata (ALICE WLCG site)
• BARC, Mumbai
• Delhi University, New Delhi
• SINP, Kolkata
• Punjab University, Chandigarh
• IIT Bombay, Mumbai
• IIT Madras, Chennai
• RRCAT, Indore
• IIT Bhubaneswar
• IPR, Ahmedabad
• NISER, Bhubaneswar
• IOP, Bhubaneswar
• Visva-Bharati University (Santiniketan, WB)
• IISER, Pune
R&D: TIFR HEP Cloud
• Dynamic resources for WLCG, commissioned in May 2017.
• Collaboration with Microsoft Azure: Azure infrastructure provided as a grant of resources.
• MS cloud datacentres: three in India (Mumbai, Chennai, Pune).
• Development of tools and technologies for interfacing the WLCG grid with Azure (Grid ASCII Helper Protocol and condor_annex).
• Successfully processed 1 billion physics events in a 30-day run.
• TIFR earned additional service credits from CMS.
• Resources seamlessly integrated with WLCG.
• Adding 0 to 10K cores to the global pool in under 10 minutes (see the scaling sketch after this list).
• TIFR-Caltech bilateral collaboration on joint operations and various R&D projects.
• TIFR-ATCF (Asia Tier Centre Forum): improving network connectivity and building a support community in Asia.
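The "0 to 10K cores in under 10 minutes" point is the kind of demand-driven scale-out that the GAHP/condor_annex tooling enables. The sketch below shows the general idea only, assuming the HTCondor command-line tools and the Azure CLI (az) are configured; the resource-group and scale-set names, the cores-per-VM figure, and the scaling policy are hypothetical.

import json
import subprocess

RESOURCE_GROUP = "tifr-hepcloud-rg"   # hypothetical Azure resource group
SCALE_SET = "t3-cloud-workers"        # hypothetical VM scale set running HTCondor workers
CORES_PER_VM = 8                      # assumed VM size
MAX_VMS = 1250                        # cap at ~10K cores

def idle_jobs():
    """Count idle jobs in the local HTCondor queue (JobStatus == 1 means Idle)."""
    out = subprocess.check_output(
        ["condor_q", "-constraint", "JobStatus == 1", "-json"])
    return len(json.loads(out)) if out.strip() else 0

def scale_workers(n_vms):
    """Resize the Azure VM scale set to n_vms instances via the Azure CLI."""
    subprocess.check_call(
        ["az", "vmss", "scale",
         "--resource-group", RESOURCE_GROUP,
         "--name", SCALE_SET,
         "--new-capacity", str(n_vms)])

if __name__ == "__main__":
    demand = idle_jobs()
    vms = min(MAX_VMS, (demand + CORES_PER_VM - 1) // CORES_PER_VM)
    scale_workers(vms)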
LHC upgrade and upgrade of the CMS computing system
CMS recorded 150.5 fb^-1 in Run 2, with an overall efficiency of 92.5%.
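Reading the 92.5% as the ratio of recorded to delivered integrated luminosity (an assumption; the slide does not define the efficiency), the implied LHC-delivered luminosity is

\[
\mathcal{L}_{\mathrm{delivered}} \approx \frac{150.5\ \mathrm{fb^{-1}}}{0.925} \approx 163\ \mathrm{fb^{-1}},
\]

consistent with the proton-proton luminosity delivered to CMS over Run 2.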
Conclusion
• The TIFR GRID computing system is one of the major T2 centres of CMS.
• The strict hierarchical ordering of the original grid model has been diluted, and T2_IN_TIFR now connects directly to CERN and to other T1/T2 centres.
• In parallel, we support all Indian students by storing their output and providing an analysis platform.
• We will increase its capabilities over the next few years to cope with the growing demands of CMS.
• We manage our system reasonably well, and this computing center could also host the data of any other large-scale experiment.