August 26-28, 2004
The 3rd International Workshop on HEP Data Grid
HEP Data Grid in Korea
Kihyeon Cho, Center for High Energy Physics, Kyungpook National University
and Hyoungwoo Park, Supercomputing Center, KISTI
(On behalf of the HEP Data Grid Working Group in Korea)
Contents • Goal of Korea HEP Data Grid • Regional Data Center for CMS • Network • Storage • Computing • Application for HEP Data Grid • CDF • Belle • AMS • Conclusions
Goal of Korea HEP Data Grid
[Diagram: the LHC-CMS tiered computing model, courtesy Harvey Newman, Caltech and CERN. The CMS detector (15 m x 15 m x 22 m, 12,500 tons, $700M; a human is 2 m for scale) and online system feed Tier 0+1 at CERN at ~100 MBytes/sec for event simulation and reconstruction, with HPSS mass storage; ~2.5 Gbits/sec links go to Tier 1 regional centers (FermiLab USA, CHEP Korea, and German, French and Italian centers, each with HPSS); ~0.6-2.5 Gbps links go to Tier 2 analysis centers; 100-1000 Mbits/sec links go to Tier 3 institutes (~0.25 TIPS each) with physics data caches; Tier 4 is workstations.]
• CERN/CMS data goes to 6-8 Tier 1 regional centers, and from each of these to 6-10 Tier 2 centers. Physicists work on analysis "channels" at 135 institutes; each institute has ~10 physicists working on one or more channels. 2000 physicists in 31 countries are involved in this 20-year experiment in which CERN/DOE are major players.
Korea • To build a Tier-1 Regional Data Center for the LHC-CMS experiment • Other experiments (CDF, Belle, AMS, PHENIX) may also use it
Korea HEP Data Grid Activities • History • 2001.10, Organized the working group for HEP Data Grid • 2002. 1, Access Grid workshop • 2002.11, 1st International Workshop on HEP Data Grid • 2003. 8, 2nd International Workshop on HEP Data Grid • 2004. 8, 3rd International Workshop on HEP Data Grid • Plan • 2005. 5, Networking, HEP Grid and Digital Divide workshop • Working Group Activities • HEP Data Grid Working Group • Advanced Network Forum HEP Working Group • APII/TEIN Physics Working Group
Experiments for HEP Data Grid Project in Korea
[Diagram: the Korea CHEP Regional Data Center linked to the Space Station (AMS), FNAL, US (CDF), BNL, US (PHENIX), CERN, Europe (CMS), KEK, Japan (Belle), and Korea CHEP at CERN]
This year's Activities 1. Regional Data Center for CMS • Network => see Kihwan Kwon's talk • Storage: SRB connectivity with KISTI and KBSI • Computing: Grid3+, LCG 2. Application for HEP Data Grid • CDF • Belle => see Youngjoon Kwon's talk • AMS => see Guinyun Kim's talk
SRB (Storage Resource Broker) • To provide a homogeneous interface to heterogeneous data resources • To access datasets easily and to provide copy and storage functions • To handle metadata using MCAT (Metadata Catalog) • Connection • KISTI, KBSI and CHEP • To connect KEK and CHEP
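The slide above summarizes what SRB provides; as a rough illustration, a client session with the standard SRB S-commands could look like the sketch below. The collection path and file names are hypothetical (not the actual CHEP/KISTI/KBSI layout), and the session assumes an already-configured ~/.srb/.MdasEnv pointing at the MCAT server.

# Sketch only: illustrative SRB S-command session with hypothetical paths.
Sinit                                             # authenticate to the SRB/MCAT server
Smkdir /home/chep.demo/belle-mdst                 # create a collection in SRB space
Sput mdst_run001.root /home/chep.demo/belle-mdst  # upload a local file; MCAT records its metadata
Sls /home/chep.demo/belle-mdst                    # list the collection to confirm registration
Sget /home/chep.demo/belle-mdst/mdst_run001.root copy.root   # retrieve a replica to a local file
Sexit                                             # end the session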
SRB connection between KISTI and KBSI • SRB for HPSS at KISTI • SRB for the cluster at KNU • Can connect to the KBSI server using MCAT • To install ZoneSRB (SRB 3.1) at cluster38
Grid3+ • 2003: Demo at SC2003 • 28 sites (1 Korea, 27 US, ~2,000 CPUs) • VO: USCMS • 2004: Number of CPUs: 3 -> 84 • Dedicated CPUs: 3 • Dual batch system: CMS (28 CPUs), DCAF (53 CPUs) • Middleware is up to date: VDT upgrade completed on July 15 (VDT 1.11 -> 1.14)
Grid3+ at KNU site
[root@cluster28 root]# condor_q
-- Submitter: cluster28.knu.ac.kr : <155.230.20.58:38142> : cluster28.knu.ac.kr
 ID       OWNER      SUBMITTED    RUN_TIME   ST PRI SIZE CMD
36835.0   uscms02    6/9  03:48  1+10:06:33  R  0   3.7  data
38914.0   uscms02    6/24 03:38  0+17:09:52  R  0   3.7  data
42606.0   btev       7/16 13:47  0+16:10:51  R  0   5.3  data -t 313 12024
44323.0   uscms02    7/22 08:53  0+01:44:15  R  0   3.6  data
44395.0   ivdgl      7/22 09:15  0+01:35:48  R  0   5.3  data -t 159 11053
44405.0   lsc01      7/22 09:17  0+01:28:55  R  0   5.3  data -t 305 11041
44407.0   lsc01      7/22 09:17  0+01:18:13  R  0   5.3  data -t 307 11043
44424.0   ivdgl      7/22 09:18  0+01:31:20  R  0   5.3  data -t 265 12066
44427.0   ivdgl      7/22 09:19  0+01:28:20  R  0   5.3  data -t 268 12069
44429.0   usatlas1   7/22 09:20  0+01:38:06  R  0   5.3  data -t 313 12049
44430.0   usatlas1   7/22 09:20  0+01:34:08  R  0   5.3  data -t 314 12050
44432.0   usatlas1   7/22 09:20  0+01:39:57  R  0   5.3  data -t 291 12052
44433.0   usatlas1   7/22 09:20  0+01:35:28  R  0   5.3  data -t 290 12051
44434.0   usatlas1   7/22 09:20  0+01:31:07  R  0   5.3  data -t 292 12053
44435.0   usatlas1   7/22 09:20  0+01:26:20  R  0   5.3  data -t 293 12054
44436.0   usatlas1   7/22 09:20  0+01:33:07  R  0   5.3  data -t 294 12055
44437.0   usatlas1   7/22 09:20  0+01:28:27  R  0   5.3  data -t 295 12056
44756.0   uscms02    7/22 18:16  0+00:08:59  R  0   0.0  data
44757.0   uscms02    7/22 18:22  0+00:07:20  R  0   0.0  data
44758.0   uscms02    7/22 18:23  0+00:06:58  R  0   0.0  data
20 jobs; 0 idle, 20 running, 0 held
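For context, the jobs listed above reach the KNU pool through Condor-G and the Globus gatekeeper. A minimal, hypothetical submit description is sketched below; the gatekeeper contact string is an assumption, and the executable name and arguments are simply copied from the CMD column above.

# Sketch only: a Condor-G submit file targeting the (assumed) KNU Grid3+ gatekeeper.
cat > grid3_job.sub <<'EOF'
universe        = globus
globusscheduler = cluster28.knu.ac.kr/jobmanager-condor
executable      = data
arguments       = -t 313 12024
output          = job.out
error           = job.err
log             = job.log
queue
EOF
condor_submit grid3_job.sub   # requires a valid Grid proxy for the submitting VO
condor_q                      # the job then appears in a listing like the one above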
LCG (LHC Computing Grid) • To join LCG • Installing an LCG-2 testbed • Establishing a Korean HEP CA (Certification Authority)
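As a rough sketch of the user-side steps once a Korean HEP CA exists, the Globus commands shipped with LCG-2 would be used roughly as below; the CA submission step is an assumption about procedure, not the actual Korean HEP CA workflow.

# Sketch only: obtaining a certificate and proxy for LCG use (procedure assumed).
grid-cert-request        # writes userkey.pem and a certificate request under ~/.globus
# ...send the request to the (future) Korean HEP CA and install the signed usercert.pem...
grid-proxy-init          # create a short-lived proxy from the signed certificate
grid-proxy-info          # check the proxy subject, issuer and remaining lifetime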
LCG Interoperability • Korea CMS is trying to allow transparent use of KNU resources through LCG and Grid3+. • Interoperating grid resources can be approached in a number of ways.
Testbed
[Diagram: the Korea CMS Grid testbed connected to the iVDGL/Grid3+ batch queue and to the LCG-2 CMS Grid, with a Data Challenge path between Fermilab and CHEP and links up to the CERN Tier-0 and the CERN LHC]
Korea CMS Testbed
[Diagram: a Grid3 CE (Condor-G with an LSF batch queue) and an LCG CE (PBS), each dispatching to a pool of worker nodes; configuration copied from the USCMS testbed]
Grid Testbed at CHEP: LCG Testbed • Connecting among KNU, KT, KISTI and SKKU • To use all worker nodes as LCG worker nodes (a job-submission sketch follows)
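As a rough illustration of how the LCG worker nodes at the testbed would receive work, a trivial job can be pushed through the LCG-2 workload management commands as sketched below; the JDL contents are illustrative, and the returned job identifier is shown as a placeholder.

# Sketch only: submitting a trivial job through the LCG-2 user interface.
cat > hello.jdl <<'EOF'
Executable    = "/bin/hostname";
StdOutput     = "hello.out";
StdError      = "hello.err";
OutputSandbox = {"hello.out", "hello.err"};
EOF
edg-job-submit hello.jdl        # prints a job identifier (an https:// URL)
edg-job-status <job_id>         # poll until the job reaches Done
edg-job-get-output <job_id>     # retrieve the output sandbox with hello.out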
CDF Grid Production Farm (Ref. Mark Neubauer)
Why DCAF and CDF Grid? [Diagram: CAF (Central Analysis Farm), DCAF (DeCentralized Analysis Farm) and the CDF SAMGrid] • CDF Grid • Requirement set by the 2005 goal: 200 simultaneous users should analyze 10^7 events in a day • Need ~700 TB of disk and ~5 THz of CPU by the end of FY'05 • DCAF (DeCentralized Analysis Farm) • SAM (Sequential Access through Metadata) to handle real data at Fermilab for DCAFs around the world • Gridification of DCAF via SAMGrid • KorCAF (DeCentralized Analysis Farm in Korea)
Scheme of DCAF [Diagram: Fermilab (CAF) connected through SAM and the Grid to KNU (KorCAF), INFN (Italy, INFN users only) and Taiwan users] • Korea, Taiwan and Italy are already working as user MC farms
User Perspective [Diagram: today only Fermilab uses SAM; with the Grid, sites outside the lab use SAM as well] • In Oct. 2004, JIM will be deployed
User Perspective [Diagram: the CAF GUI and DCAF GUI submitting through the Grid to the FermiCAF and to DCAF sites in Korea, Toronto, Italy, Taiwan and the UK]
CDF Computing Plan [Charts: projected CPU and disk capacity for July 04 and Dec 04, with the FNAL share indicated]
Summary for CDF Grid • CDF can capture more resources using the Grid to achieve its physics mission • DCAF and SAM are working for CDF and will reduce operational loads • We add new features and rely on software supported for or by the LHC
Korea Belle Grid • Globus connection among SKKU, KISTI and KNU • To construct SRB with KEK and KNU • To connect KNU-KEK with Australia
Korea Belle with KEK * KNU-KEK
[Diagram: the KEK Belle computing farm and the KNU & SKKU and KISTI sites running QQ simulation, GEANT simulation, analysis and data storage in the Globus environment, with file control and transfer via SRB (planned)]
• Compared the Grid testbed and the KEK testbed using B0 -> J/ψKs (Sunmin Kim's M.S. thesis)
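As a rough illustration of the Globus-based file control and transfer step in the diagram above, a GridFTP copy of a Belle file from KEK to KNU could look like the sketch below; the host names and paths are hypothetical, and a valid Grid proxy is assumed.

# Sketch only: GridFTP transfer between hypothetical KEK and KNU endpoints.
globus-url-copy \
    gsiftp://bgrid.kek.jp/data/mc/evtgen_run01.mdst \
    gsiftp://cluster28.knu.ac.kr/belle/mc/evtgen_run01.mdst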
Event Display with the Grid Toolkit * GSIM event display in the GT environment [Screenshots 1 and 2] • 1. Access to the GT environment • 2. Interactive executable under the GT environment
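A minimal sketch of what step 2 (an interactive executable under GT) could look like from the command line; the gatekeeper contact and executable path are assumptions, and a valid Grid proxy is assumed.

# Sketch only: launching a remote executable through a GT2 gatekeeper (hypothetical names).
globus-job-run cluster28.knu.ac.kr/jobmanager-fork \
    /belle/bin/gsim_display -run 12345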
Summary for Korea Belle • The Korea Belle Grid system is operating on the KNU, SKKU and KISTI testbed • SC2003 demo: KorBelle • To install SRB between KISTI-KNU-KEK • Korea-KEK-Australia • In Oct. 2003, the PRAGMA Data Grid Working Group discussed a basic understanding of how KorBelle and the Australian Belle Grid can work together • To work with KEK on a Korea-Japan-Australia Data Grid based on the network and SRB • More information in Prof. Youngjoon Kwon's talk
AMS Grid • Using the Grid environment between KNU and KT • Testing file transfer using bbftp between CHEP-CERN and CHEP-Zurich • OpenSSI (Single System Image) => Science Data Center
File Transfer at AMS
{chep18.knu.ac.kr: AMS machine} uploading files to CERN via bbftp:
bbftp -i list_root.*** -u 'jwshin' pcamsf2.cern.ch
bbftp -i list_jou.*** -u 'jwshin' pcamsf2.cern.ch
bbftp -i list_log.*** -u 'jwshin' pcamsf2.cern.ch
{speed: 1~9 MB/s}
{chep18.knu.ac.kr: AMS machine} 2004JUN1-1/ ams02mcscripts.tar.gz, gen_jobs.pl (Perl script)
knu.***1.{PART#}.*.job, knu.***2.{PART#}.*.job, knu.***3.{PART#}.*.job, knu.***4.{PART#}.*.job
run.***.1.sh, run.***.2.sh, run.***.3.sh, run.***.4.sh -> for submitting jobs to the queuing server
list_jou.***, list_log.***, list_root.*** -> control files used by bbftp
After running, gen_jobs.pl checks the files and updates list_jou, list_root and list_log -> CERN DB
More information in Prof. Guinyun Kim's talk
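For readers unfamiliar with bbftp's -i mode, each control file listed above holds one transfer command per line. A minimal sketch of a hypothetical control file and its invocation follows; the file and directory names are illustrative assumptions, not the actual AMS production paths.

# Sketch only: hypothetical contents of a bbftp control file and the matching call.
# Each line in the control file is a bbftp command, e.g. 'put <local file> <remote file>'.
cat > list_root.example <<'EOF'
put knu.2004JUN1-1.001.root /ams/mc/2004JUN1-1/knu.2004JUN1-1.001.root
put knu.2004JUN1-1.002.root /ams/mc/2004JUN1-1/knu.2004JUN1-1.002.root
EOF
bbftp -i list_root.example -u 'jwshin' pcamsf2.cern.ch   # prompts for the password, then runs each command in order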
Conclusions • To participate in the SC2004 Bandwidth Challenge with Caltech • Installing LCG-2 and joining LCG • To apply SRB among KISTI, KBSI, KNU and KEK • Application of the CDF, Belle and AMS Grids -> SC2004