LCG Service Challenge at INFN Tier1
CNAF, 9 March 2005
T. Ferrari, L. dell'Agnello, G. Lo Re, S. Zani (INFN-CNAF)
CNAF Network Setup
[Network diagram: farm switches (FarmSW1, FarmSW2 (Dell), FarmSW3 (IBM), FarmSW4 (IBM3), FarmSW12, FarmSWG1, FarmSWG3, FarmSWtest, LHCBSW1, HP Babar, ServSW2, Catalyst 3550 units) and rack switches SW-xx-xx; BD ER16 core with n x 10 Gb/s uplinks; Cisco 7600 towards the GARR Italian Research Network, with a 1 Gb/s production link and a 1 Gb/s link dedicated to the Service Challenge (Summit 400); 1 Gb/s back door for CNAF internal services.]
LCG SC Dedicated Network
• WAN
  • Dedicated 1 Gb/s link to CERN, currently connected via GARR + GÉANT (tests with CERN have started); a rough link-budget estimate follows this slide
  • 10 Gb/s link available in September 2005
• LAN
  • Extreme Summit 400 (48 x GE + 2 x 10 GE) dedicated to the Service Challenge
  • 11 servers with internal HDs
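A back-of-the-envelope check, not taken from the slides, of how the 100 MB/s disk-to-disk target quoted later in the talk fits on the dedicated 1 Gb/s link; the 6% protocol-overhead figure is an assumption.

```python
# Rough link budget for the dedicated CNAF-CERN link (assumed 6% TCP/IP +
# Ethernet framing overhead; 100 MB/s is the SC2 target stated later).
link_gbps = 1.0                         # dedicated 1 Gb/s WAN link
raw_MBps = link_gbps * 1000 / 8         # 1 Gb/s = 125 MB/s of raw capacity
usable_MBps = raw_MBps * (1 - 0.06)     # ~118 MB/s after assumed overhead
target_MBps = 100.0                     # SC2 disk-to-disk goal
print("usable ~%.0f MB/s, target %.0f MB/s -> ~%.0f%% of the link"
      % (usable_MBps, target_MBps, 100 * target_MBps / usable_MBps))
```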
LCG Transfer cluster (1)
• Delivered (with 1 month of delay): 11 Sun Fire V20 dual-Opteron (2.2 GHz) servers
  • 2 x 73 GB U320 SCSI HDs
  • 2 x Gbit Ethernet interfaces
  • 2 x PCI-X slots
• OS: SLC 3.0.4 (x86_64), kernel 2.4.21-27; installation completed
  • bonnie++/IOzone tests on the local disks: ~60 MB/s read and write
  • Netperf/Iperf tests on the LAN: ~950 Mb/s (a minimal stand-in sketch follows this slide)
• Globus (GridFTP) v2.4.3 installed on all cluster nodes
  • All certificates arrived on 8 March 2005; first functionality tests ASAP
• CASTOR SRM v1.2.12 and CASTOR Stager v1.7.1.5: installation and tests within the first half of March
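For illustration only, a minimal memory-to-memory TCP throughput probe in the spirit of the Netperf/Iperf LAN tests above; it is not the tool that was actually used, and the host, port, buffer size and duration are arbitrary assumptions.

```python
#!/usr/bin/env python
"""Minimal TCP throughput probe: a receiver thread drains a socket while the
sender pushes zero-filled buffers for a fixed time, then reports Mb/s."""
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5201      # placeholder endpoint (loopback)
CHUNK = 64 * 1024                   # 64 KiB send buffer
DURATION = 10                       # seconds of sustained sending

def receiver(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    ready.set()                     # tell the sender it can connect
    conn, _ = srv.accept()
    while conn.recv(CHUNK):         # drain until the sender closes
        pass
    conn.close()
    srv.close()

def sender():
    sock = socket.create_connection((HOST, PORT))
    payload = b"\0" * CHUNK
    sent, start = 0, time.time()
    while time.time() - start < DURATION:
        sock.sendall(payload)
        sent += CHUNK
    elapsed = time.time() - start
    sock.close()
    print("throughput: %.1f Mb/s" % (sent * 8 / elapsed / 1e6))

if __name__ == "__main__":
    ready = threading.Event()
    threading.Thread(target=receiver, args=(ready,)).start()
    ready.wait()
    sender()
```

Run on a single machine it only exercises the local TCP stack; pointing HOST at another cluster node over the GE interfaces would approximate the ~950 Mb/s LAN measurement quoted above.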
LCG Transfer cluster (2)
• Ten of these machines will be used as GridFTP/SRM servers, one as CASTOR Stager / SRM repository / Radiant software host; preliminary WAN Iperf/GridFTP tests within the 2nd week of March.
• Load balancing will be implemented by assigning the IP addresses of all 10 servers to a single DNS alias (CNAME) and relying on the DNS round-robin algorithm (see the sketch after this slide).
• Real sustained transfers in the 2nd half of March; our minimal target is a disk-to-disk throughput of 100 MB/s between CNAF and CERN.
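A sketch of how a client sees the round-robin pool described above: it resolves the load-balanced alias and lists the A records behind it. The alias name is a placeholder, not the real CNAF hostname.

```python
#!/usr/bin/env python
"""Resolve a round-robin DNS alias and print the address pool behind it."""
import socket

ALIAS = "sc-gridftp.example.cnaf.infn.it"   # hypothetical alias name

def show_pool(alias):
    # gethostbyname_ex follows the CNAME and returns every A record;
    # with round-robin DNS, repeated lookups rotate the order of the
    # addresses, spreading GridFTP clients over the 10 transfer servers.
    canonical, aliases, addresses = socket.gethostbyname_ex(alias)
    print("alias     :", alias)
    print("canonical :", canonical)
    print("addresses :", ", ".join(addresses))

if __name__ == "__main__":
    show_pool(ALIAS)
```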
LCG Storage
• For SC2 we plan to use the internal disks (are 70 x 10 = 700 GB enough? See the rough estimate after this slide).
• We can also provide different kinds of high-performance Fibre Channel SAN-attached devices (Infortrend EonStor, IBM FAStT900, ...).
• For SC3 we can add CASTOR tape servers with IBM LTO-2 or STK 9940B drives.
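A back-of-the-envelope estimate, not taken from the slides, of how long the 700 GB of internal disk lasts when data arrives at the 100 MB/s SC2 target, which is the relevant question for whether it is "enough" as a disk buffer.

```python
# How long does 10 x 70 GB of internal disk last at the 100 MB/s target?
servers = 10
disk_per_server_GB = 70           # roughly one 73 GB HD usable per server
total_GB = servers * disk_per_server_GB
rate_MBps = 100.0                 # sustained CNAF-CERN disk-to-disk target
hours_to_fill = total_GB * 1000 / rate_MBps / 3600
print("%d GB in total, filled in ~%.1f hours at %.0f MB/s"
      % (total_GB, hours_to_fill, rate_MBps))   # prints ~1.9 hours
```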