CCJ: Computing Center in Japan for spin physics at RHIC
T. Ichihara, Y. Watanabe, S. Yokkaichi, O. Jinnouchi, N. Saito, H. En'yo, M. Ishihara, Y. Goto(1), S. Sawada(2), H. Hamagaki(3)
RIKEN, RBRC(1), KEK(2), CNS(3)
Presented on 17 October 2001 at the First Joint Meeting of the Nuclear Physics Divisions of the APS and JPS (Hawaii 2001)
RIKEN CCJ: Overview
• Scope
  • Center for the analysis of RHIC spin physics
  • Principal site of computing for PHENIX simulation
  • PHENIX CC-J aims to cover most of the simulation tasks of the whole PHENIX experiment
  • Regional computing center in Japan
• Size
  • Data amount: handling 225 TB/year
  • Disk storage: ~20 TB; tape library: ~500 TB
  • CPU performance: 13k SPECint95 (~300 Pentium 3/4 CPUs)
• Schedule
  • R&D for the CC-J started in April 1998 at RBRC at BNL
  • Construction began in April 1999 over a three-year period
  • CCJ started operation in June 2000
  • CCJ will reach full scale in 2002
System Requirements for CCJ
• CPU (SPECint95)
  • Simulation: 8200
  • Sim. reconstruction: 1300
  • Sim. analysis: 170
  • Theor. model: 800
  • Data analysis: 2000
  • Total: 12470 SPECint95 (= 120k SPECint2000)
• Annual data amount
  • DST: 150 TB
  • micro-DST: 45 TB
  • Simulated data: 30 TB
  • Total: 225 TB
• Hierarchical storage system (HPSS)
  • Handles a data amount of 225 TB/year
  • Total I/O bandwidth: 112 MB/s
• Disk storage system
  • 15 TB capacity, all RAID
  • I/O bandwidth: 520 MB/s
• Data accessibility (exchange of 225 TB/year; arithmetic cross-check below)
  • Data duplication facility (tape cartridges)
  • Wide area network (IMnet/APAN/ESnet)
• Software environment
  • AFS, Objectivity/DB, batch queuing system
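The requirement figures above can be cross-checked with simple arithmetic. The sketch below (plain Python, using only numbers quoted on this slide) verifies the CPU total and expresses the 225 TB/year exchange volume as a sustained average rate, which makes the headroom in the 112 MB/s HPSS bandwidth explicit.

```python
# Sanity check of the requirement numbers above (arithmetic only;
# all figures are taken from this slide, nothing else is assumed).
cpu_specint95 = {
    "Simulation": 8200,
    "Sim. reconstruction": 1300,
    "Sim. analysis": 170,
    "Theor. model": 800,
    "Data analysis": 2000,
}
total = sum(cpu_specint95.values())
print("Total CPU requirement: %d SPECint95" % total)  # -> 12470

# 225 TB/year expressed as a sustained average data rate.
seconds_per_year = 365 * 24 * 3600
rate_mb_s = 225e12 / 1e6 / seconds_per_year
print("225 TB/year ~= %.1f MB/s sustained" % rate_mb_s)  # ~7.1 MB/s
# Well below the 112 MB/s total HPSS I/O bandwidth quoted above.
```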
Requirements as a Regional Computing Center
• Software environment
  • The software environment of the CCJ needs to be compatible with the PHENIX software environment at the RHIC Computing Facility (RCF) at BNL
  • AFS (/afs/rhic)
    • Remote access over the network is very slow and unstable (AFS server at BNL)
    • Mirrored daily at CCJ and accessed via NFS from the Linux farms (a minimal mirror-job sketch follows this slide)
  • Objectivity/DB
    • Remote access over the network is very slow and unstable (AMS server at BNL)
    • Local AMS server at CCJ (database updated regularly)
  • Batch queuing system: Load Sharing Facility (LSF 4.1)
  • RedHat Linux 6.1 (kernel 2.2.19 with NFSv3 enabled), gcc-2.95.3, etc.
  • Veritas File System (journaling file system on Solaris): free from fsck for ~TB disks
• Data accessibility
  • Need to exchange 225 TB/year of data with the BNL RCF
  • Most of the data exchange is carried out with SD3 tape cartridges
    • Data duplicating facilities at both sites (~10 MB/s effective rate by air shipment)
  • Some of the data exchange is carried out over the wide area network (WAN)
    • CC-J will use the Asia-Pacific Advanced Network (APAN) for the US-Japan connection
    • http://www.apan.net/
    • APAN currently has ~100 Mbps bandwidth for the Japan-US (STAR TAP) connection
    • 3 MB/s transfer performance was obtained using the bbftp protocol
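The daily AFS mirroring mentioned above could be implemented as a simple scheduled job. The sketch below is a minimal illustration assuming a cron-driven rsync and a hypothetical destination path; the slide only states that /afs/rhic is mirrored daily and exported via NFS, not which tool CCJ actually used.

```python
#!/usr/bin/env python
# Minimal sketch of a daily /afs/rhic mirror job. The rsync-based
# approach and the destination path are assumptions for illustration.
import subprocess
import sys

SRC = "/afs/rhic/"           # AFS tree served from BNL
DST = "/ccj/mirror/rhic/"    # hypothetical local, NFS-exported copy

# -a preserves permissions/timestamps; --delete keeps an exact mirror.
rc = subprocess.call(["rsync", "-a", "--delete", SRC, DST])
if rc != 0:
    sys.exit("AFS mirror failed (rsync exit code %d)" % rc)
```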
File Transfer over WAN
• Route: RIKEN - (IMnet) - APAN (~100 Mbps) - STAR TAP - ESnet - BNL
• Round-trip time (RTT) for RIKEN-BNL: 170 ms
• RFC 1323 (TCP Extensions for High Performance, May 1992) describes how to use a large TCP window size (> 64 KB)
• FTP performance: 290 KB/s (64 KB window size), 640 KB/s (512 KB window)
• A large TCP window size is necessary to obtain a high transfer rate (see the worked calculation after this slide)
• bbftp (http://doc.in2p3.fr/bbftp/)
  • Software designed to transfer files quickly across a wide area network
  • Written for the BaBar experiment in order to transfer big files
  • Transfers data through several parallel TCP streams
• RIKEN-BNL bbftp performance (10 TCP streams per session)
  • 1.5 MB/s (1 session), 3 MB/s (3 sessions)
[Figure: MRTG traffic graph during a bbftp transfer between RIKEN and BNL]
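The window-size numbers above follow directly from the bandwidth-delay product: a single TCP stream cannot move more than one window of data per round trip. The sketch below reproduces the limits using only the RTT and window sizes quoted on this slide; the measured FTP rates fall somewhat below these upper bounds because of protocol overhead and other bottlenecks.

```python
# Single-stream TCP throughput is bounded by window_size / RTT.
RTT = 0.170  # seconds, RIKEN-BNL round-trip time from this slide

for window_kb in (64, 512):
    bound_kb_s = window_kb * 1024 / RTT / 1024
    print("%3d KB window -> <= %4.0f KB/s per stream" % (window_kb, bound_kb_s))
# ->  64 KB window: <= ~376 KB/s  (measured: 290 KB/s)
# -> 512 KB window: <= ~3012 KB/s (measured: 640 KB/s)

# Window needed to fill the ~100 Mbps path (bandwidth-delay product):
bdp_mb = 100e6 / 8 * RTT / 1e6
print("BDP at 100 Mbps: ~%.1f MB" % bdp_mb)  # ~2.1 MB >> 64 KB default
# bbftp sidesteps the per-stream bound by running parallel TCP streams.
```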
Analysis for the first PHENIX experiment with CCJ
• Quark Matter 2001 conference (Jan 2001)
  • 21 of the 33 PHENIX papers used CCJ
• Simulation
  • 100k-event simulation/reconstruction (Hayashi)
  • DC simulation (Jane)
  • Muon Mock Data Challenge (MDC) (Satohiro): 100k Au+Au events (June 2001)
• DST production
  • 1 TB of raw data (Year-1) was transferred via WAN from BNL to RIKEN
  • PHENIX official DST (v03) production (Sawada) (Dec 2000)
  • About 40% of the DSTs were produced at CCJ and transferred to the BNL RCF
• Detector calibration/simulation
  • TOF (Kiyomochi, Chujo, Suzuki)
  • EMCal (Torii, Oyama, Matsumoto)
  • RICH (Akiba, Shigaki, Sakaguchi)
• Physics/simulation
  • Single-electron spectrum (Akiba, Hachiya)
  • Hadron particle ratios/spectra (Ohnishi, Chujo)
  • Photon [π0 spectrum] (Oyama, Sakaguchi)
CCJ Operation
• Operation, maintenance, and development of the CC-J are carried out under the charge of the CCJ Planning and Coordination Office (PCO).
Summary
• Construction of the CCJ started in 1999
• CCJ operation started in June 2000 at 1/3 scale
  • 43 user accounts created
• Hardware upgrades and software improvements:
  • 224 Linux CPUs (90k SPECint2000), 21 TB of disk, 100 TB HPSS tape library
  • Data duplicating facility at the BNL RCF started operation in Dec 2000
• PHENIX Year-1 experiment: summer 2000
• Analysis of the first PHENIX experiment with CCJ
  • Official DST (v03) production
  • Simulations
  • Detector calibration/simulation
  • Physics analysis/simulation
  • Quark Matter 2001 conference (Jan 2001): 21 of the 33 PHENIX papers used CCJ
• PHENIX Year-2 (2001) run started in August 2001
• Spin experiments at RHIC will start in December 2001!