CHEP as an AMS Regional Center
G. N. Kim, J. W. Shin, N. Tasneem, M. W. Lee, D. Son (Center for High Energy Physics, KNU)
H. Park (Supercomputing Center, KISTI)
The Third International Workshop on HEP Data Grid, CHEP, KNU, 2004. 8. 26-28
Outline • AMS Experiment • Data flow from ISS to AMS Remote Center • Data Size for AMS Experiment for 3 years • AMS Science Operating Center • Status of CHEP as an AMS Regional Center • Status of MC Data Production • bbFTP test • Summary
AMS (Alpha Magnetic Spectrometer) Experiment
• A particle physics experiment on the International Space Station for 3 or 4 years
• Will collect ~10¹⁰ cosmic rays in near-Earth orbit from 300 MeV to 3 TeV
PHYSICS GOALS:
• To search for antimatter (anti-He, anti-C) in space with a sensitivity 10³ to 10⁴ times better than current limits.
• To search for dark matter: high-statistics precision measurements of the e± and antiproton spectra.
• To study astrophysics: high-statistics precision measurements of the D, ³He, ⁴He, B, C, ⁹Be, ¹⁰Be spectra.
  - B/C: to understand CR propagation in the Galaxy (parameters of the galactic wind).
  - ¹⁰Be/⁹Be: to determine the CR confinement time in the Galaxy.
International Collaboration: ~200 scientists + dozens of contractors from 14 countries. Spokesperson: Samuel C. C. Ting
Data Flow from ISS to Remote AMS Centers
[Diagram: Space Station → five downlink video feeds & telemetry backup → commercial satellite service → White Sands, NM (0.9 meter dish) → MSFC POIC (telemetry, voice, planning, commanding) and JSC → remote user facilities via the Internet or a dedicated service/circuit, which distribute data locally and conduct science]
ACOP: AMS Crew Operations Post, POIC: Payload Operations Integration Center, GSC: Ground Support Computers
Table. AMS02 Data Volumes

  Origin                      Data Category                        Volume (TB)
  Beam calibrations           Calibration                          0.3
  Preflight tests             Calibration                          0.4
  3 years flight              Scientific                           33 - 45
  3 years flight              Calibration                          0.03
  3 years flight              Housekeeping                         0.18
  Data Summary Files (DST)    Ntuples or ROOT files                165 - 260
  Catalogs                    Flat files, ROOT files, or ORACLE    0.05
  Event tags                  Flat files, ROOT files, or ORACLE    0.2
  TDV files                   Flat files, ROOT files, or ORACLE    0.5

Table. AMS02 data transmitted to the SOC from POIC

  Stream       Bandwidth     Data Category          Volume (TB/year)
  High Rate    3-4 Mbit/s    Scientific             11 - 15
                             Calibration            0.01
  Slow Rate    16 kbit/s     Housekeeping           0.06
                             NASA auxiliary data    0.01

Total AMS02 data volume is about 200 TB.
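As a quick arithmetic check of the quoted total, the per-category volumes in the first table can simply be summed. This is a minimal sketch using the figures as listed; the Scientific and DST entries are ranges, so the total comes out as a range as well.

```python
# Quick arithmetic check of the quoted ~200 TB total AMS-02 data volume,
# using the per-category volumes (in TB) from the table above.
# Range entries are kept as (low, high) tuples.
volumes_tb = {
    "Beam calibrations": (0.3, 0.3),
    "Preflight tests": (0.4, 0.4),
    "3 years flight, scientific": (33, 45),
    "3 years flight, calibration": (0.03, 0.03),
    "3 years flight, housekeeping": (0.18, 0.18),
    "Data Summary Files (DST)": (165, 260),
    "Catalogs": (0.05, 0.05),
    "Event tags": (0.2, 0.2),
    "TDV files": (0.5, 0.5),
}

low = sum(lo for lo, hi in volumes_tb.values())
high = sum(hi for lo, hi in volumes_tb.values())
print(f"Total AMS-02 data volume: {low:.1f} - {high:.1f} TB")
# -> roughly 200 - 307 TB, consistent with the "about 200 TB" quoted above
```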
AMS Science Operations Center (SOC)
• Data processing and science analysis
  - receive the complete copy of data
  - science analysis
  - primary data storage
  - data archiving
  - data distribution to AMS universities and laboratories
  - MC production
The SOC will provide all of these functions and make it possible to process and analyze all data. SOC computing facilities should be sufficient to provide data access and analysis for all members of the collaboration.
Science Operations Center Computing Facilities (CERN/AMS Network)
[Diagram of the SOC computing facilities:
 • Central data services: shared tape servers (tape robots, LTO/DLT tape drives), shared disk servers (25 TB disk, 6 PC-based servers), home directories & registry
 • Production facilities: 40-50 Linux dual-CPU computers (Intel and AMD) for batch data processing
 • Data servers and AMS physics services
 • Analysis facilities (Linux cluster): interactive and batch physics analysis on 10-20 dual-processor PCs and 5 PC servers
 • Engineering cluster: interactive physics analysis on 5 dual-processor PCs, consoles & monitors
 • Links to the AMS Regional Centers]
AMS Science Operations Center Computing Facilities: Production Farm and Analysis Facilities
[Diagram: production farm cells #1-#7, each a group of 3.4+ GHz Linux PCs on a Gigabit switch (1 Gbit/s); PC Linux servers (2 x 3.4+ GHz, RAID 5, 10 TB) acting as disk servers and data servers for AMS data, NASA data, metadata, and simulated data; an AFS server; web, news, production, and DB servers; an MC data server; archiving and staging via CERN CASTOR; analysis facilities. Legend distinguishes tested components with prototypes in production from components not yet tested or prototyped]
AMS Regional Center(s)
• Monte Carlo production
• Data storage and data access for the remote stations
AMS Regional Station(s) and Center(s)
• Access to the SOC data storage
  - for detector verification studies
  - for detector calibration purposes
  - for alignment
  - for event visualization
• Access to the SOC to get the detector and production status
• Access to SOC computing facilities for science analysis
• Science analysis using local computing facilities of universities and laboratories
Connectivity from ISS to CHEP (RC)
[Diagram: ISS → five downlink video feeds & telemetry backup → commercial satellite service → White Sands, NM → NASA networks (NINS, vBNS) → MSFC POIC/POCC (telemetry, voice, planning, commanding) and JSC → MIT LANs (B4207) → vBNS → Chicago NAP → Internet service providers (ISP-1, ISP-2, ISP-3) and a Mugunghwa (Koreasat) satellite link → CHEP (RC) and RS in Korea; CERN (SOC)]
ISP: Internet Service Provider, NAP: Network Access Point, vBNS: very high-speed Backbone Network Service, NINS: NASA Integrated Network Services, POCC: Payload Operations Control Center, POIC: Payload Operations Integration Center, RC: Regional Center, RS: Regional Station
CHEP as an AMS Regional Center
[Diagram: the AMS RC at CHEP, reached over the Internet and a Mugunghwa (Koreasat) satellite link, comprising an Internet server, data storage (20 TB disk storage; tape library / DB server of ~200 TB), a display facility, and an analysis facility of Linux clusters and cluster servers on a Gigabit Ethernet hub; an AMS RS at Ewha]
AMS Analysis Facility at CHEP
1) CHEP
   • AMS cluster server: 1 (CPU: dual AMD Athlon MP 2000+, disk: 80 GB, OS: Red Hat 9.0)
   • Linux clusters: 12 (CPU: dual AMD Athlon MP 2000+, disk: 80 GB x 12 = 960 GB, OS: Red Hat 7.3)
2) KT
   • AMS cluster server: 1 (CPU: dual Intel Xeon 2 GHz, disk: 80 GB, OS: Red Hat 9.0)
   • Linux clusters: 4 (CPU: dual Intel Xeon 2 GHz, disk: 80 GB x 4 = 320 GB, OS: Red Hat 7.3)
[Diagram: Gigabit Ethernet hub connecting the Linux clusters and AMS cluster servers]
Data Storage
• IBM tape library system: 43.7 TB
  - 3494-L12: 8.4 TB
  - 3494-S10: 13.537 TB
  - 3494-L12: 7.36 TB
  - 3494-S10: 14.4 TB
• RAID disks
  - FastT200: 1 TB (RAID 0, striping), shared by the experimental groups CDF, CMS, AMS, Belle, and PHENIX
[Diagram: tape library and FastT200 RAID disks behind an intermediate disk pool and a login machine, with file-system migration and NFS access from the cluster]
Network Configuration
[Diagram: PCs and servers at CHEP connect to Gigabit switches (IBM 8271) and, via Gigabit Ethernet and a Gigabit Ethernet/ATM155 link through the Physics Department, to the campus L3 switch (C6509). Research traffic is carried by KREONET (1 Gbps x 2) and KOREN (2.5 Gbps) to APII/KREONET2 (US, 622 Mbps x 2), APII (Japan, 8 Mbps), and GEANT-TEIN (EU, 34 Mbps), reaching CERN, KEK, and Fermilab; other traffic (145 Mbps in total) goes via KORNET and Boranet]
Connectivity to the Outside from CHEP (KOREN Topology)
[Diagram: CHEP (Regional Center, Daegu) → Daejeon → Seoul on the 2.5 Gbps KOREN backbone, with international links via APII to the USA (Chicago; StarTap, ESNET), APII to Japan (KEK), APII to China (IHEP), and TEIN to CERN]
■ Singapore: SingAREN through APII (2 Mbps)
■ China (preplanned): CSTNET through APII
Connectivity to the Outside from Korea
[Diagram: links from Korea to the US (FNAL via APII-TransPac and StarTap) and to Europe (CERN via TEIN), plus the Hyunhai/Genkai link to Japan]
MC Production Procedure by Remote Client (steps 1-4)
Step 1: Enter the registered account information.
Step 2: Click to proceed.
Step 3: Choose one of the datasets.
Step 4: Choose the appropriate dataset.
MC Production Procedure by Remote Client (steps 5-10)
Step 5: Choose the appropriate CPU type and CPU clock.
Step 6: Set an appropriate CPU time limit.
Step 7: Enter the total number of jobs requested.
Step 8: Enter the 'Total Real Time Required'.
Step 9: Choose the MC production mode.
Step 10: Click 'Submit Request'.
(The request parameters collected in these steps are summarized in the sketch below.)
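For illustration only, the fields collected in steps 1-10 can be viewed as a single request record. The sketch below assembles them into a dictionary and a (commented-out) HTTP POST; the field names, example values, and endpoint URL are assumptions invented for this sketch, not the actual AMS remote-client interface, which is driven through the web form described above.

```python
# Hypothetical sketch of an MC production request assembled from the
# form fields in steps 1-10. Field names, example values, and the endpoint
# URL are illustrative assumptions, not the real AMS production interface.
import urllib.parse
import urllib.request

request_fields = {
    "user": "registered_user",          # step 1: registered account info
    "dataset": "protons",               # steps 3-4: chosen dataset (example name)
    "cpu_type": "AMD Athlon MP 2000+",  # step 5: CPU type
    "cpu_clock_mhz": 1667,              # step 5: CPU clock
    "cpu_time_limit_h": 24,             # step 6: CPU time limit per job
    "n_jobs": 50,                       # step 7: total number of jobs requested
    "total_real_time_h": 120,           # step 8: total real time required
    "mode": "batch",                    # step 9: MC production mode
}

# Step 10: submit the request (hypothetical endpoint, for illustration only).
data = urllib.parse.urlencode(request_fields).encode()
req = urllib.request.Request("http://mc-production.example/submit", data=data)
# urllib.request.urlopen(req)  # not executed here; the real submission is
#                              # done interactively through the web form
```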
MC Production Statistics
• 185 days, 1196 computers
• 8.4 TB produced, 250 PIII 1 GHz computers/day on average
• URL: pcamss0.cern.ch/mm.html
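As a rough back-of-the-envelope check, assuming the 8.4 TB was produced fairly uniformly over the 185-day campaign, the averages work out as follows:

```python
# Rough averages from the MC production figures quoted above,
# assuming output was spread roughly uniformly over the campaign.
total_tb = 8.4
days = 185
computers = 1196

gb_per_day = total_tb * 1024 / days
gb_per_computer = total_tb * 1024 / computers
print(f"~{gb_per_day:.0f} GB of MC data per day")                 # ~46 GB/day
print(f"~{gb_per_computer:.1f} GB per computer over the campaign")  # ~7.2 GB
```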
MC Events Production at CHEP
• Total number of requested jobs: 229
• Completed jobs: 218
• Failed jobs: 1
• Unchecked jobs (may be errors): 10
Data Handling Program: bbFTP
• bbFTP is file transfer software developed by Gilles Farrache at CC-IN2P3 (Lyon)
• Encoded username and password at connection
• SSH and certificate authentication modules
• Multi-stream transfer
• Big windows as defined in RFC 1323
• On-the-fly data compression
• Automatic retry
• Customizable time-outs
• Transfer simulation
• AFS authentication integration
• RFIO interface
http://ccweb.in2p3.fr/bbftp/
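Below is a minimal sketch of driving a bbftp transfer from a script. The client options used (-s, -u, -p, -e) follow the commonly documented bbftp usage, but they, along with the host, user, and paths in the example call, should be treated as assumptions to verify against the man page of the installed bbftp version.

```python
# Minimal sketch of driving a bbftp transfer from Python via subprocess.
# The bbftp options shown (-s for SSH authentication, -u for the remote user,
# -p for the number of parallel TCP streams, -e for the transfer command)
# follow the commonly documented client usage; verify against the bbftp
# man page for the installed version.
import subprocess

def bbftp_put(local_path, remote_path, host, user, streams=4):
    """Copy a local file to a remote bbftpd server using multiple streams."""
    cmd = [
        "bbftp",
        "-s",                                     # SSH-based authentication
        "-u", user,                               # remote username
        "-p", str(streams),                       # number of parallel streams
        "-e", f"put {local_path} {remote_path}",  # transfer command
        host,
    ]
    return subprocess.run(cmd, check=True)

# Example call (hypothetical host, user, and paths):
# bbftp_put("mc_run_001.root", "/ams/mc/mc_run_001.root",
#           host="remote.example.org", user="amsuser", streams=8)
```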
Data Transmission Test using bbFTP
• Using TEIN in Dec. 2002
  - as a function of the number of TCP/IP streams
  - as a function of file size
• Using APII/KREONET2-StarTap in Aug. 2004
  - as a function of the number of TCP/IP streams
  - as a function of file size
(A sketch of how such a scan can be scripted follows below.)
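A scan like the tests above can be scripted by timing one transfer per (file size, stream count) combination. The sketch below is a minimal example of that idea; the host, user, remote directory, file sizes, and stream counts are placeholders rather than the parameters of the 2002/2004 tests, and the bbftp options again follow the commonly documented usage.

```python
# Sketch of a throughput scan like the tests above: time a multi-stream
# bbftp transfer for several stream counts and file sizes and report the
# effective rate. Host, user, remote directory, sizes, and stream counts
# are placeholders, not the actual 2002/2004 test parameters.
import os
import subprocess
import time

HOST, USER, REMOTE_DIR = "remote.example.org", "amsuser", "/test"

def make_test_file(path, size_mb):
    """Create a file of the requested size, written in 1 MB random chunks."""
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(os.urandom(1024 * 1024))

def timed_put(local_path, size_mb, streams):
    """Run one bbftp put and return the effective throughput in Mbit/s."""
    cmd = ["bbftp", "-s", "-u", USER, "-p", str(streams),
           "-e", f"put {local_path} {REMOTE_DIR}/{os.path.basename(local_path)}",
           HOST]
    start = time.time()
    subprocess.run(cmd, check=True)
    return size_mb * 8 / (time.time() - start)

if __name__ == "__main__":
    for size_mb in (10, 100, 1000):        # file-size scan
        make_test_file("testfile.bin", size_mb)
        for streams in (1, 2, 4, 8, 16):   # TCP/IP stream scan
            rate = timed_put("testfile.bin", size_mb, streams)
            print(f"{size_mb:5d} MB, {streams:2d} streams: {rate:7.1f} Mbit/s")
```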
Summary
• CHEP as an AMS Regional Center:
  - Prepared an analysis facility and data storage
  - Producing MC events
  - Making progress in MC production with Grid tools