Tier 3 and Computing facility @ Delhi
Satyaki Bhattacharya, Kirti Ranjan
CDRST, University of Delhi
HPC facility in the department
• 32-node HP blade-based cluster
• Class C enclosure, BL460 blades
• Intel E5450 processor (2 x quad core), 3 GHz, 80 Watts
• 32 GB RAM
• 12 x 450 GB storage elements
• Gigabit Ethernet + InfiniBand connectivity
• We can use a good part of it
• MOAB and Torque for cluster management, job scheduling, and resource management (see the sketch below)
• Not connected to the 10 Mbps MPLS link
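As a rough illustration of how jobs would be submitted on a Torque/MOAB-managed cluster like this one, here is a minimal sketch. The job name, queue resource limits, and the `run_analysis` executable are illustrative assumptions, not the actual configuration of the department cluster.

```python
import subprocess

# Minimal PBS/Torque job script. The resource requests (1 node,
# 8 cores, 2 h walltime) and the analysis executable are
# illustrative assumptions only.
job_script = """#!/bin/bash
#PBS -N cms_test_job
#PBS -l nodes=1:ppn=8
#PBS -l walltime=02:00:00
cd $PBS_O_WORKDIR
./run_analysis
"""

# qsub accepts the job script on stdin and prints the assigned job ID.
result = subprocess.run(["qsub"], input=job_script,
                        capture_output=True, text=True)
print("Submitted job:", result.stdout.strip())
```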
Tier 3 status
• We have tendered for very similar systems
• Rack-mount 1U servers instead of blades
• Same processor configuration as the department cluster
• From Sun or HP (DL160 G5 or X4150)
• A few nodes, but dedicated ones, will be connected to the MPLS link
• Similar amount of storage
• In an advanced stage of purchase
• In installation and operation we will gain from our experience with the existing cluster
GRID connectivity status
• The existing 2 Mbps direct link was upgraded to 10 Mbps in December '08
• Dr. Kirti Ranjan asked for a demonstration of the bandwidth through real data transfer
• In the 2nd week of March, ERNET demonstrated up to 4 Mbps link speed by connecting to CDAC, Mumbai (using InfoVista); Kirti/Sushil ran the tests on the DU side
• Mr. Dhekne has commented that while the link gives us the possibility of a pipe (or VPN) up to the ERNET PoP, the actual transfer rate can depend on server speed, packet route (no. of hops), and overall backbone capacity (see the rough estimates below)
• Mr. Dhekne also pointed out that until February '09 the TIFR–CERN link had no "GÉANT peering", which meant long packet routes; ERNET says there is no bottleneck
• We would like to know about any other test results from other institutes
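For context, a back-of-the-envelope sketch of what these link speeds mean for real data transfers. The 1 GB file size and the 80% link-efficiency factor are illustrative assumptions, not measured values.

```python
# Rough transfer-time estimates for the quoted link speeds.
# Assumes 1 GB = 8 * 1024 Mbit and ~80% effective throughput
# after protocol overhead (an assumption, not a measurement).
def transfer_time_minutes(file_gb, link_mbps, efficiency=0.8):
    effective_mbps = link_mbps * efficiency
    return (file_gb * 8 * 1024) / effective_mbps / 60

for mbps in (2, 4, 10):
    t = transfer_time_minutes(1.0, mbps)
    print(f"1 GB over a {mbps} Mbps link: ~{t:.0f} minutes")
```

At the demonstrated 4 Mbps, a single 1 GB file already takes on the order of 40 minutes, which is why the actual achievable rate (not just the nominal pipe size) matters for Tier 3 operations.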
Co-location of people in CMS Centres
• A CMS Centre @ My Institute is a highly-visible local CMS focal point
• Status and monitoring displays to follow CMS operations
• Computing consoles for students, postdocs and faculty to work together
• Physical co-location of people
• Video links to CERN and other institutes
• Virtual co-location of people
• Outreach displays
[Photos: CMS Centre @ DESY; LHC @ FNAL]
Lucas Taylor, CHEP 2009, Prague
CMS Centres Worldwide: A New Collaborative Infrastructure
Lucas Taylor, Northeastern University
Erik Gottschalk, Fermilab
CHEP 2009, Prague