Data Storage, Network, Handling, and Clustering in the CDF Korea Group

Intae Yu*, Junghyun Kim, Ilsung Cho — Sungkyunkwan University (SKKU)
Kihyeon Cho, Youngdo Oh — Center for High Energy Physics (CHEP), Kyungpook National University (KNU)
Bockjoo Kim — Seoul National University (SNU)
Jysoo Lee — KISTI, Supercomputing Center

International Workshop on HEP Data Grid, Nov 9, 2002, KNU
CDF (Collider Detector at Fermilab)
• Proton (1 TeV) – antiproton (1 TeV) collider experiment (~600 members)
• Collision event rate: 2.5 MHz
• Collision runs: 2001–2005 (Run IIa), 2005–2008 (Run IIb)
CDF Run IIa
• Data characteristics
  Event rate to tape: 300 Hz
  Raw (compressed) data size: 250 (~100) KB/event
  Number of events: ~ events/year
• Requirements for data analysis
  700 TB of disk and 5 THz of CPU, assuming 200 simultaneous users
• The upgraded CAF (Central Analysis Facility) may not have enough CPU and disk storage to handle the CDF data
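The quoted rates can be cross-checked with a short calculation. This is only a sketch: the ~10^7 seconds of effective data taking per year is a common HEP rule of thumb, not a figure from the talk.

```python
# Back-of-the-envelope check of the Run IIa raw data volume.
EVENT_RATE_HZ = 300      # events written to tape per second (from the talk)
RAW_EVENT_KB = 250       # raw event size in KB (from the talk)
SECONDS_PER_YEAR = 1e7   # ~10^7 s of effective beam time -- a common HEP
                         # rule of thumb, NOT a number from the talk

raw_rate_mb_s = EVENT_RATE_HZ * RAW_EVENT_KB / 1024.0
events_per_year = EVENT_RATE_HZ * SECONDS_PER_YEAR
raw_tb_per_year = events_per_year * RAW_EVENT_KB / 1024.0**3

print(f"raw data rate : {raw_rate_mb_s:.0f} MB/s")
print(f"events/year   : {events_per_year:.1e}")
print(f"raw data/year : {raw_tb_per_year:.0f} TB")
```

With these assumptions the yearly raw volume lands in the vicinity of the 700 TB disk requirement quoted above, which is why the upgraded CAF alone looks marginal.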
CDF Grid
• In Run IIb, 6–7 times more data are expected (~3 PB/year)
• CDF needs distributed disk storage and computing systems among its collaborators, connected by a fast network
  HEP DataGrid: the CDF Grid working group was formed in March 2002
  Research on SAM (Sequential data Access via Metadata) with the D0 group
• CDF Korea group
  DCAF (DeCentralized Analysis Farm) in Korea: KCAF (Dr. Cho's talk)
  CHEP/KNU: KCAF (Tier 1); SNU, SKKU (Tier 2)
Network
• Links: CHEP/KNU – Fermilab/CDF via KOREN (155 Mbps) and APII (45 Mbps); SNU and SKKU connected to CHEP
• Network tests
  CHEP–SNU, CHEP–SKKU: ~40 Mbps
  CHEP–CDF: ~20 Mbps
• Enough bandwidth for Grid research and tests
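The measured rates translate into concrete transfer times. The sketch below estimates how long a dataset takes over each tested link; the 1 TB sample size is illustrative, not a figure from the talk.

```python
# Rough transfer-time estimate at the measured link rates.
SAMPLE_TB = 1.0                                  # illustrative dataset size
links_mbps = {"CHEP-SNU/SKKU": 40, "CHEP-CDF": 20}  # measured rates (talk)

for name, mbps in links_mbps.items():
    # 1 TB = 8 * 1024^4 bits; "Mbps" here means 10^6 bits per second
    seconds = SAMPLE_TB * 8 * 1024**4 / (mbps * 1e6)
    print(f"{name}: {seconds / 3600:.1f} h per TB")
```

At ~40 Mbps a terabyte-scale sample needs a few days, which is workable for MC production but motivates the larger Run IIb bandwidth target on the Prospects slide.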
Status (KCAF)
• KCAF construction
  6 work nodes (12 CPUs) for initial tests; more work nodes by the end of this year
  1 TB of disk storage; 1 (+4) TB of disk as a network buffer between CHEP and CDF
• KCAF software
  Linux-based CDF software installed and tested
  PBS installed and working successfully
• Main project: CDF Monte Carlo data production
Status
• CHEP/KNU work nodes linked to the SKKU disk storage server over KOREN (~40 Mbps)
• At SKKU, a disk storage server (~0.5 TB) has been constructed
• A second disk storage server (~2 TB) is expected by the end of this month
Status
• Job submission from SKKU to the CHEP work nodes, and output data transfer from CHEP to the SKKU data storage server, are planned using NFS or a Globus storage element (SE)
• Grid software and middleware
  Globus 2.0: 10 nodes (CHEP), 1 node (SNU), 1 node (Fermilab)
  A private CA has been constructed
  GridFTP, Replica Catalog, and Replica Management installed
  Grid testbed tested: CHEP–SNU
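GridFTP transfers under Globus 2.x are driven by the `globus-url-copy` client. The sketch below only constructs such a command without running it; the host names and paths are hypothetical, not from the talk.

```python
# Construct (but do not execute) a GridFTP transfer command of the kind
# used to move output data from CHEP to the SKKU storage server.
# globus-url-copy is the standard Globus 2.x transfer client; the
# host names and file paths below are hypothetical.
src = "gsiftp://chep.knu.ac.kr/data/output.root"      # hypothetical source
dst = "gsiftp://storage.skku.ac.kr/data/output.root"  # hypothetical target
cmd = ["globus-url-copy", src, dst]
print(" ".join(cmd))
```

Authentication for such a transfer relies on Grid certificates, which is where the private CA mentioned above comes in.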
Prospects
• KCAF at CHEP (Tier 1)
  ~10% of CDF Run II data processing and storage (~500 TB and ~200 nodes), for both real and MC data
• SNU/SKKU (Tier 2)
  Disk storage system (~50 TB); local clusters (mini-KCAF, ~60 nodes)
• Network
  CHEP–Fermilab: ~70 Mbps required for Run IIb
  CHEP–SNU/SKKU: ~40 Mbps
• Implementation of CDF data processing in a full Grid environment