News from Alberto et al.
• Fibers document separated from the rest of the computing resources:
  • https://edms.cern.ch/document/1155044/1
  • https://edms.cern.ch/document/1158953/1
• Documents finalized (end of August)
• Power requirements: 100 kW total
• Racks:
  • Racks on the Jura side: 800 mm depth (standard CERN racks)
  • Racks on the Saleve side: 1000 mm depth
  • Dimensions of the racks' base plates are needed (for the supports)
Power
• Computing nodes: 9 kW/rack, 2 to 5 racks
• Storage nodes: 5 kW/rack, 3 racks
• Support: 1/2 rack, 4 kW
• DCS, DSS: 4 kW, 2 racks
• Network: 4 kW
• GTK: 3 kW/rack, 3 racks
• Cooling doors: 1 kW/door, 10 kW
• Total: 68 kW (95 kW with the 50% safety margin included); see the sketch below
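For reference, a minimal Python sketch of the budget arithmetic above. The computing-node rack count is still open (2 to 5 racks), so both ends of the range are shown; note that the itemized loads sum to 64–91 kW, bracketing the slide's quoted 68 kW.

```python
# A sketch of the power-budget arithmetic above. Per-rack loads
# are taken from the slide; the computing-node rack count is
# still open, so both ends of the 2-to-5 range are shown.
fixed_kw = {
    "storage nodes": 5 * 3,   # 5 kW/rack, 3 racks
    "support":       4,       # half rack
    "DCS/DSS":       4,       # 2 racks
    "network":       4,
    "GTK":           3 * 3,   # 3 kW/rack, 3 racks
    "cooling doors": 1 * 10,  # 1 kW/door, 10 doors
}

for n_computing_racks in (2, 5):
    total = sum(fixed_kw.values()) + 9 * n_computing_racks  # 9 kW/rack
    print(f"{n_computing_racks} computing racks: {total} kW "
          f"({total * 1.5:.0f} kW with a 50% margin)")
```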
Computing nodes
• How much total computing power for 2012?
• Specify AMD or Intel? How many processors? How many cores?
• Intel Westmere architecture (32 nm): Xeon 5600, 6 cores
  • X series, up to 3.33 GHz
  • E series, up to 2.66 GHz
  • L series, low consumption, up to 2.26 GHz
• CERN Openlab tested system:
  • 2 × X5670 @ 2.93 GHz, 12 cores (2 × 6)
  • 2 × 6 GB RAM, 450 W power draw at full load
  • 238 HEPSPEC06 (24 processes) ≈ 20/core
• In one 9 kW rack: 20 systems, 4760 HEPSPEC06, approximately 1250 kSI2k (see the sketch below)
• How much RAM? How much local disk space?
• How many 10 Gb and 1 Gb ports per machine? (How many switches?)
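The per-rack figures follow directly from the Openlab test numbers; a minimal sketch of that arithmetic, using the rack budget and per-system figures quoted above:

```python
# Per-rack capacity estimate from the Openlab test system quoted
# above: dual X5670, 450 W at full load, 238 HEPSPEC06 per system.
rack_budget_w    = 9000  # 9 kW rack
system_power_w   = 450   # full-load draw per dual-X5670 system
hepspec_per_sys  = 238   # 24 processes on 12 cores (with HT)

systems_per_rack = rack_budget_w // system_power_w
print(systems_per_rack)                    # 20 systems per rack
print(systems_per_rack * hepspec_per_sys)  # 4760 HEPSPEC06 per rack
print(hepspec_per_sys / 12)                # ~20 HEPSPEC06 per core
```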
Storage nodes
• How much disk for 2012?
• Availability of CMS hardware (raw capacity sketched below):
  • 12 disk arrays with 12 disks each and redundant Fibre Channel interfaces:
    • 120 WD Raptor 300 GB disks (bought at the beginning of the year)
    • 24 1 TB disks
  • 2 × 32-port Fibre Channel switches
  • 10 Dell 2950 servers with redundant Fibre Channel cards
• Questions:
  • More technical details requested from our CMS colleagues
  • Cost?
  • Maintenance?
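As a quick cross-check, a minimal sketch of the raw capacity implied by the disk counts above (before any RAID or filesystem overhead, so usable space would be lower):

```python
# Raw capacity of the CMS hardware listed above: 144 disks in
# total across the 12 arrays, before RAID overhead.
raptor_tb = 120 * 0.3  # 120 WD Raptor disks, 300 GB each
large_tb  = 24 * 1.0   # 24 disks, 1 TB each
print(f"raw capacity: {raptor_tb + large_tb:.0f} TB")  # 60 TB
```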
Procurement
• Racks and cooling doors:
  • Difficult/not possible to join the CERN-IT tender
• Network equipment:
  • Handled by CERN-IT; installation, management, and maintenance included
• Computing & storage nodes:
  • How to handle the purchase/acquisition? (CERN, Mainz?...)
  • Time plan for purchase and delivery?
Manpower
• Who is going to do/follow/check the following, and when:
  • Installation
  • Commissioning
  • Operations
• More:
  • What about UPS system purchase/installation?
  • Remote monitoring of cooling doors (discussion ongoing with the DCS central team)