
IT-INFN-CNAF Status Update

This update describes the completed expansion of the computing center infrastructure at INFN CNAF, the main computing facility of INFN, with details on the computing resources, storage capacity, and network connectivity.

Presentation Transcript


  1. IT-INFN-CNAF Status Update
  LHC-OPN Meeting, INFN CNAF, 10-11 December 2009
  Stefano Zani, INFN CNAF (TIER1 Staff)

  2. INFN CNAF
  • CNAF is the main computing facility of INFN.
  • Core business: TIER1 for all the LHC experiments (ATLAS, CMS, ALICE and LHCb).
  • Other activities: providing computing resources for several other experiments in which INFN is involved (CDF, BaBar, Virgo, SuperB, ...).
  • R&D on GRID middleware development.

  3. The Computing Facility Infrastructure
  The computing center infrastructure expansion was completed in spring 2009.
  • More than 130 racks are in place, and the cooling system can dissipate the heat produced by 2 MW of installed computing devices.
  • The total electrical distribution available for the center is about 5 MW.
  • The total "protected electrical power" is about 3.4 MW (2 rotary UPS units + 2 diesel engines).

  4. Computing Resources "at a Glance"
  COMPUTING NODES
  • More than 2800 cores (6.6 MSpecINT2K, or about 23000 HEP-SPEC); a quick sanity check on these numbers follows the list.
  • 1U "standard" servers (1 motherboard with 2 quad-core CPUs: 8 cores in 1U).
  • 1U twin servers (2 motherboards, each with 2 quad-core CPUs: 16 cores in 1U).
  • Blade chassis (16 servers in 10U, i.e. 12.8 cores per U, with integrated I/O modules and management system).
  STORAGE
  • 2.5 PB of disk capacity.
  • SAN (EMC2 + SUN) served by 200 disk servers (StoRM + GPFS).
  • 5 PB of tape capacity (expandable up to 10 PB), served by 20 drives.
  NETWORK
  • 3 core switch/routers.
  • 60 gigabit aggregation switches.
  • 20 Gb/s WAN connectivity.
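
  As a quick sanity check on the figures above, here is a minimal Python sketch using only the slide's round numbers; all inputs are taken from the slide and the results are approximate:

```python
# Back-of-the-envelope check on the slide's round numbers (all figures taken from the slide).
cores = 2800                 # "more than 2800 cores"
hep_spec_total = 23_000      # quoted HEP-SPEC for the whole farm
si2k_total = 6.6e6           # quoted 6.6 MSpecINT2K

print(f"~{hep_spec_total / cores:.1f} HEP-SPEC per core")   # roughly 8.2
print(f"~{si2k_total / cores:.0f} SpecINT2K per core")      # roughly 2357

# Core density (cores per rack unit) of the three form factors listed above.
form_factors = {
    "1U standard (8 cores in 1U)": 8 / 1,
    "1U twin (16 cores in 1U)": 16 / 1,
    "blade chassis (16 servers x 8 cores in 10U)": 16 * 8 / 10,
}
for name, density in form_factors.items():
    print(f"{name}: {density:.1f} cores/U")
```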

  5. INFN CNAF TIER1 Network
  (The slide shows the TIER1 network diagram; the recoverable points are summarised below.)
  • LHC-OPN dedicated 10 Gb/s link: T0-T1 (CERN) and T1-T1 (PIC, RAL, TRIUMF).
  • LHC-OPN cross-border links: CNAF-KIT, CNAF-IN2P3, CNAF-SARA, also acting as 10 Gb/s T0-T1 backup.
  • CNAF general-purpose link: remaining T1-T1 traffic (BNL, FNAL, TW-ASGC, NDGF) and T1-T2 traffic.
  • WAN access via GARR (Cisco 7600) at 10 Gb/s towards the core switches, an Extreme BD10808 and an Extreme BD8810 interconnected at 4x10 Gb/s.
  • Storage servers, disk servers and Castor stagers reach the storage devices over a Fibre Channel SAN, with 2x1 Gb/s and 2x10 Gb/s Ethernet uplinks towards the core.
  • Worker nodes sit behind Extreme Summit 400/450 aggregation switches with 4x1 Gb/s uplinks; in case of network congestion the uplinks will be upgraded from 4x1 Gb/s to 10 Gb/s or 2x10 Gb/s (see the sketch after this list).
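
  The congestion-driven uplink upgrade mentioned in the last bullet is essentially a utilisation-threshold decision. Below is a minimal Python sketch of that check; the function name and the 80% threshold are illustrative assumptions, not CNAF policy:

```python
# Minimal sketch of the worker-node uplink congestion check described on the slide.
# The 80% threshold and the helper below are illustrative assumptions.
UPLINK_GBPS = 4 * 1.0                    # current 4 x 1 Gb/s aggregation uplink
UPGRADE_OPTIONS_GBPS = (10.0, 2 * 10.0)  # upgrade candidates from the slide
CONGESTION_THRESHOLD = 0.8               # assumed utilisation level that triggers an upgrade

def needs_upgrade(peak_traffic_gbps: float) -> bool:
    """Return True if sustained peak traffic saturates the current uplink."""
    return peak_traffic_gbps >= CONGESTION_THRESHOLD * UPLINK_GBPS

if __name__ == "__main__":
    for peak in (2.5, 3.3, 4.0):
        verdict = "upgrade to 10 or 2x10 Gb/s" if needs_upgrade(peak) else "uplink ok"
        print(f"peak {peak:.1f} Gb/s on a {UPLINK_GBPS:.0f} Gb/s uplink -> {verdict}")
```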

  6. Monitoring
  (The slide repeats the WAN portion of the diagram: GARR access through the Cisco 7600 at 10 Gb/s, the Extreme BD10808 and BD8810 core switches, and the MDM servers attached to the core.)
  • The MDM devices have been online and connected to the core switch since summer '09 (the GPS issue was recently fixed with an antenna signal splitter).
  • Internal monitoring is done with the following tools (see the sketch after this list):
    • MRTG (every port of the network)
    • Nagios, managing alerts with automatic SMS on critical events
    • NetFlow/sFlow for traffic accounting
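
  To make the monitoring workflow concrete, here is a minimal Python sketch of the poll-and-alert loop described above (per-port polling as MRTG does, plus an SMS on critical events); port_utilisation(), send_sms() and the 90% threshold are hypothetical placeholders, not the actual CNAF tooling:

```python
# Hypothetical sketch of the poll-and-alert workflow from the slide:
# poll every switch port (MRTG-style) and send an SMS when an event is critical.
from typing import Dict

CRITICAL_UTILISATION = 0.9   # assumed threshold for a "critical" event

def port_utilisation() -> Dict[str, float]:
    """Placeholder for an SNMP poll of every switch port (static example values here)."""
    return {"sw-101-07:ge1": 0.42, "bd10808:te4": 0.95}

def send_sms(message: str) -> None:
    """Placeholder for the SMS gateway used for critical alerts."""
    print(f"SMS: {message}")

def check_ports() -> None:
    for port, load in port_utilisation().items():
        if load >= CRITICAL_UTILISATION:
            send_sms(f"CRITICAL: {port} at {load:.0%} utilisation")

if __name__ == "__main__":
    check_ports()
```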

  7. T1 WAN Connectivity Evolution (Wishes)
  • The wish is a dedicated link carrying all T1-T1 traffic via another T1 or a NAP well connected to the US T1s; for resiliency it would be better not to rely on CERN for this task (diagram: CNAF-IT-T1 reaching all other T1s through a T1 or NAP).
  • However, a second link through CERN looks more feasible in the short term, and GARR is considering starting to test a 100 Gb/s link between CNAF and CERN via GEANT before the end of 2010.

  8. T2 WAN Connectivity Evolution
  (The slide maps the Italian TIER-2 sites to CNAF: GARR-X light paths for the T2-CNAF traffic plus IP connectivity towards the other T1s and T2s.)
  • Italian TIER-2 sites: Bari, Catania, Milan, Naples, Legnaro (PD), Pisa, Rome and Turin, currently connected via GARR with 1 Gb/s links.
  • The Italian T2 WAN interconnections are evolving from 1 Gb/s to 10 Gb/s GARR-X accesses (Q4 2010); the aggregate numbers are sketched after this list.
  • Each T2 still needs an IP connection to reach all the other T1s and T2s via GEANT, and INFN is asking GARR for dedicated light paths for the traffic between INFN CNAF and all the Italian T2s.
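
  To put the planned GARR-X upgrade in perspective, here is a minimal sketch (using only the figures on the slide) of the aggregate access bandwidth the Italian T2s present today and after the upgrade:

```python
# Aggregate Italian T2 access bandwidth, before and after GARR-X (figures from the slide).
T2_SITES = ["Bari", "Catania", "Milan", "Naples", "Legnaro", "Pisa", "Rome", "Turin"]
CURRENT_GBPS_PER_SITE = 1    # current GARR access
GARRX_GBPS_PER_SITE = 10     # planned GARR-X access (Q4 2010)

print(f"{len(T2_SITES)} Italian T2 sites")
print(f"aggregate access today:        {len(T2_SITES) * CURRENT_GBPS_PER_SITE} Gb/s")
print(f"aggregate access after GARR-X: {len(T2_SITES) * GARRX_GBPS_PER_SITE} Gb/s")
```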

  9. Next Steps on the LAN Side
  • Installation of a new Cisco Nexus 7000 as core switch for the TIER1 resources (end of January).
  • Direct 10 Gb/s connection to the core for the GridFTP servers and the SE.
  • Virtual nodes on demand, and their interconnection issues, still to be handled.

  10. That’s it. Questions?

  11. Backup Slides

  12. Network Layout
  (Backup slide with the detailed network layout diagram: Internet and LHC-OPN connectivity via the Cisco 7606 and Cisco 6500 routers, the BD10K and BD8810 core switches, and the rack aggregation and farm switches (sw-101-xx, sw-102-xx, sw-103-xx, sw-104-xx, farmswf/farmswg) attached with 2 x 1 Gb/s, 4 x 1 Gb/s or 1 x 10 Gb/s uplinks.)

  13. Stefano Zani INFN CNAF (TIER1 Staff)
