This report provides an update on the collaborative R&D studies between CC-IN2P3 and ICEPP for the WLCG. It focuses on the network connection and data transfer challenges in the ATLAS experiment and highlights the progress made in 2006 and the plans for 2007.
Status report on LHC_2: ATLAS computing
Tetsuro Mashimo, International Center for Elementary Particle Physics (ICEPP), The University of Tokyo, on behalf of the LHC_2 project team
Workshop FJPPL’07, May 9, 2007 @ KEK, Japan
LHC_2 in the year 2006
• Collaboration between the IN2P3 Computing Center in Lyon (CC-IN2P3) and the ‘Regional Center’ at ICEPP, the University of Tokyo
• Purpose: various R&D studies for the WLCG (Worldwide LHC Computing Grid)
• In the WLCG, CC-IN2P3 is a ‘Tier-1’ center and ICEPP a ‘Tier-2’ center
• The Tier-2 center at ICEPP serves the ATLAS experiment only
• Especially important: establishing a high-bandwidth network connection and efficient data transfer, so that physics results can be produced quickly
• Fully exploiting the available bandwidth is challenging because of the large latency of such a long-distance connection (Round Trip Time (RTT) ~ 280 ms); a back-of-the-envelope estimate follows below
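As a rough illustration (not part of the original slides) of why the 280 ms RTT makes it hard to fill the link, the sketch below computes the bandwidth-delay product of the Lyon–Tokyo path using the numbers quoted above.

```python
# Back-of-the-envelope bandwidth-delay product (BDP) for the Lyon-Tokyo path.
# Numbers are taken from the slide above; this is an illustration only.
link_bps = 1e9     # available bandwidth: 1 Gbps
rtt_s = 0.280      # round-trip time: ~280 ms

bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP = {bdp_bytes / 1e6:.0f} MB")   # -> BDP = 35 MB
# A single TCP stream needs a window of roughly this size to keep the link
# full, far beyond default kernel buffer sizes -- hence the TCP tuning and
# multi-stream transfers discussed on the following slides.
```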
ATLAS Detector Construction & Installation
• A few PB of raw data each year
• Diameter: 25 m
• Barrel toroid length: 26 m
• End-cap end-wall chamber span: 46 m
• Overall weight: 7,000 tons
• Detector sensors: 110M channels
LCG-France sites
• Supported LHC experiments indicated per site
• All sites also support other virtual organizations
LCG-France sites
• Tier-1: CC-IN2P3 (Lyon), which also hosts the Analysis Facility (AF)
• Tier-2: GRIF, Ile de France (CEA/DAPNIA, LAL, LLR, LPNHE, IPNO); Subatech (Nantes); LPC (Clermont-Ferrand)
• Tier-3: IPHC (Strasbourg); IPNL (Lyon); LAPP (Annecy); CPPM (Marseille)
ATLAS FR Cloud
• Foreign sites associated with the French cloud: Tokyo, Beijing, Romania
• Tier-1 / AF: CC-IN2P3 (Lyon)
• Tier-2: GRIF, Ile de France (CEA/DAPNIA, LAL, LPNHE); LPC (Clermont-Ferrand)
• Tier-3: LAPP (Annecy); CPPM (Marseille)
LHC_2 in the year 2006 (cont’d)
• Members of the project team (* leader)
Activities in 2006
• Mainly data-transfer tests
• Within the overall ATLAS framework: ‘SC4’ (Service Challenge 4)
• Also dedicated special tests
• Communication mainly by e-mail (plus visits in February and March 2007)
SC4 (Lyon → Tokyo)
• RTT (Round Trip Time) ~ 280 ms
• Available bandwidth limited to 1 Gbps
• Linux kernel 2.4, no TCP tuning, standard LCG middleware (GridFTP)
• ~ 20 MB/s achieved (15 files in parallel, 10 streams each): not satisfactory, because of packet loss (a rough estimate of the loss penalty follows below)
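A rough way to see why untuned (Reno-style) TCP on kernel 2.4 struggles on this path is the Mathis et al. approximation for loss-limited throughput, rate ≈ C·MSS / (RTT·√p). The sketch below is illustrative only: the packet-loss rates are assumed values, not measurements from the SC4 tests.

```python
# Mathis et al. approximation for loss-limited TCP (Reno) throughput:
#   rate ~= C * MSS / (RTT * sqrt(p)),  with C ~= 1.22
# The loss probabilities p below are assumed, purely for illustration.
from math import sqrt

MSS = 1460     # bytes per segment (typical Ethernet MTU minus headers)
RTT = 0.280    # seconds, Lyon <-> Tokyo
C = 1.22

for p in (1e-4, 1e-3, 1e-2):
    rate = C * MSS / (RTT * sqrt(p))   # bytes/s per stream
    print(f"loss {p:.0e}: {rate / 1e6:.2f} MB/s per stream")
# At loss rates around 1e-3 to 1e-2 the 15 x 10 = 150 GridFTP streams add up
# to only a few tens of MB/s, far below the ~125 MB/s a 1 Gbps path allows --
# consistent with the unsatisfactory ~20 MB/s observed.
```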
Tests with iperf (memory to memory)
• Linux kernel 2.6.17.7
• Congestion control compared: TCP Reno vs. BIC TCP
• PSPacer 2.0.1 (from AIST, Tsukuba) also tried
• Best result: BIC TCP + PSPacer, Tokyo → Lyon: > 800 Mbps with 2 streams (a minimal sketch of such a memory-to-memory probe follows below)
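For illustration, a minimal memory-to-memory throughput probe in the spirit of these iperf runs could look like the sketch below. The real tests used iperf itself; the host name, port, and buffer size here are hypothetical. It assumes Linux with a kernel that provides the ‘bic’ congestion-control algorithm and Python 3.6+ (for socket.TCP_CONGESTION).

```python
# Minimal memory-to-memory sender, in the spirit of the iperf tests above.
# Illustrative sketch only: endpoint and buffer sizes are placeholders.
import socket
import time

HOST, PORT = "receiver.example.org", 5001     # hypothetical receiving host
DURATION = 10                                  # seconds to transmit
CHUNK = b"\x00" * (1 << 20)                    # 1 MiB payload per send()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Pick the congestion-control algorithm (kernel >= 2.6.13); 'bic' was the
# default on the 2.6 kernels above, 'reno' can be set for comparison.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bic")
# Ask for a large send buffer so the window can approach the ~35 MB
# bandwidth-delay product (the kernel caps it at net.core.wmem_max).
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 32 * 1024 * 1024)
s.connect((HOST, PORT))

sent = 0
start = time.time()
while time.time() - start < DURATION:
    sent += s.send(CHUNK)
elapsed = time.time() - start
s.close()
print(f"{sent / elapsed / 1e6:.1f} MB/s over {elapsed:.1f} s")
```

Any sink that accepts the connection and discards the data (for example an iperf server) can act as the receiving end; running two such senders in parallel mirrors the 2-stream configuration quoted above.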
Summary of the iperf results
• SL(C)4 (kernel 2.6 with BIC TCP): much better congestion control than SL3 (kernel 2.4)
• Adding the software pacer (PSPacer by AIST) gives stable, good performance
LHC_2 in the year 2007 (cont’d)
• Members increased (new members in green, * leader)
Year 2007
• Not only purely technical R&D, but also studies of data movement from the physicists’ point of view
• The available network bandwidth will increase (probably this year)
• A new computer system has been installed at ICEPP, and more manpower will soon be available for technical studies
• More intensive R&D this year, towards the LHC start-up
Tape robot, disk arrays, PC servers