CHEP as an AMS Remote Data Center
G.N. Kim, H.B. Park, K.H. Cho, S. Ro, Y.D. Oh, D. Son (Kyungpook), J. Yang (Ewha), Jysoo Lee (KISTI)
International HEP DataGrid Workshop, CHEP, KNU, 2002. 11. 8-9
AMS (Alpha Magnetic Spectrometer)
• A high-energy physics experiment on the International Space Station (ISS)
• The AMS detector is to be installed on the ISS in 2005 and run for 3 or 4 years
PHYSICS GOALS:
• To search for antimatter (He, C) in space with a sensitivity 10^3 to 10^4 times better than current limits (< 1.1x10^-6).
• To search for dark matter: high-statistics precision measurements of the e±, γ, and p̄ spectra.
• To study astrophysics: high-statistics precision measurements of the D, 3He, 4He, B, C, 9Be, 10Be spectra.
  - B/C: to understand cosmic-ray propagation in the Galaxy (parameters of the galactic wind).
  - 10Be/9Be: to determine the cosmic-ray confinement time in the Galaxy.
[Figures: AMS-02 on the ISS for 3 years; AMS-02 in the Shuttle cargo bay]
Data Flow from ISS to Remote AMS Centers
[Figure: AMS Crew Operation Post]
[Diagram: data flows from the Space Station through a commercial satellite service (five downlink video feeds & telemetry backup) to White Sands, NM, then to the MSFC POIC (telemetry, voice, planning, commanding) and JSC; a remote user facility receives data over the Internet or a dedicated service/circuit (0.9 m dish) to distribute locally and conduct science.]
[Diagram: the White Sands, NM facility relays AMS real-time, "dump", and LOR playback data to the Payload Operations Control Center (MSFC, AL). The Payload Data Services system forwards real-time and "dump" data, H&S monitoring, science and flight ancillary data, and short/long-term stored data through the GSC to the Science Operations Center, which provides near-real-time file transfer and high-rate frame MUX playback to telescience centers and remote AMS sites.]
AMS Ground Centers
[Diagram: the POCC at POIC@MSFC, AL (HOSC web server and X terminals) issues commands and receives monitoring, H&S, flight ancillary, and selected AMS science data over TReK workstations and "voice" loops, with video distribution via external communications. The Science Operations Center (PC farm) performs near-real-time data processing, primary storage, archiving, distribution, and science analysis, with buffered data retransmitted to the SDC through the GSC. The AMS Remote Center (CHEP) runs a production farm, data server, MC production, data mirror archiving, and analysis facilities, serving AMS remote stations over the Internet.]
AMS Science Data Center (SDC)
• Data processing and science analysis:
  - receives the complete copy of the data
  - science analysis
  - primary data storage
  - data archiving
  - data distribution to AMS universities and laboratories
  - MC production
The SDC will provide all of these functions and make it possible to process and analyze all data. SDC computing facilities should be sufficient to provide data access and analysis for all members of the collaboration.
Data Processing Farm of SDC
A farm of Pentium (AMD) based systems running Linux is proposed. Depending on the processor clock speed, the farm will contain 25 to 30 nodes.
Processing node:
• Processor: dual-CPU 1.5+ GHz or single-CPU 2+ GHz Pentium/AMD
• Memory: 1 GB RAM
• Motherboard chipset: Intel or AMD
• Disk: EIDE 0.5-1.0 TB, 3ware Escalade RAID controller
• Ethernet adapter: 3x100 Mbit/s - 1 Gbit/s
• Linux OS
Server node:
• dual-CPU 1.5+ GHz Pentium/AMD
• 2 GB RAM
• 3 TB of disk space in an external SCSI UW RAID tower
• 3x100 Mbit/s (or 1 Gbit/s) network controllers
• Linux OS
Analysis Chain: Farms
• event filter (selection & reconstruction): raw data → processed data
• event reconstruction: raw data → event summary data (ESD)
• batch physics analysis: ESD → analysis objects (extracted by physics topic)
• interactive physics analysis: works on the analysis objects
• event simulation: produces raw-format data that enters the same chain
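The chain above can be sketched as a minimal pipeline. All function names and event fields below are illustrative, not the actual AMS software:

```python
# Minimal sketch of the analysis chain; events are plain dicts and the
# selection cut, field names, and outputs are hypothetical.

def event_filter(raw_events, threshold=1.0):
    """Selection step: keep events above an energy threshold."""
    return [ev for ev in raw_events if ev["energy"] > threshold]

def reconstruct(selected):
    """Event reconstruction: raw events -> event summary data (ESD)."""
    return [{"id": ev["id"], "energy": ev["energy"], "track": "fitted"}
            for ev in selected]

def batch_analysis(esd):
    """Batch physics analysis: ESD -> analysis objects by topic."""
    return {"spectrum": sorted(ev["energy"] for ev in esd)}

raw = [{"id": 1, "energy": 0.5}, {"id": 2, "energy": 2.0},
       {"id": 3, "energy": 3.5}]
objects = batch_analysis(reconstruct(event_filter(raw)))  # {"spectrum": [2.0, 3.5]}
```

Interactive physics analysis would then work on `objects` rather than re-reading the raw data.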
Table. AMS02 Data Volumes

  Origin                     Data Category                        Volume (TB)
  Beam calibrations          Calibration                          0.3
  Preflight tests            Calibration                          0.4
  3 years flight             Scientific                           33-45
  3 years flight             Calibration                          0.03
  3 years flight             House keeping                        0.18
  Data Summary Files (DST)   Ntuples or ROOT files                165-260
  Catalogs                   Flat files, ROOT files, or ORACLE    0.05
  Event tags                 Flat files, ROOT files, or ORACLE    0.2
  TDV files                  Flat files, ROOT files, or ORACLE    0.5

Table. AMS02 data transmitted to SDC from POIC

  Stream      Bandwidth    Data Category         Volume (TB/year)
  High rate   3-4 Mbit/s   Scientific            11-15
                           Calibration           0.01
  Slow rate   16 kbit/s    House keeping         0.06
                           NASA auxiliary data   0.01

Total AMS02 data volume is about 200 TB.
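The per-stream volumes are consistent with a simple bandwidth-to-volume conversion (taking 1 TB = 10^12 bytes and a continuously transmitting stream):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 s

def tb_per_year(mbit_per_s):
    """Annual data volume in TB for a continuous stream of the given rate."""
    bits_per_year = mbit_per_s * 1e6 * SECONDS_PER_YEAR
    return bits_per_year / 8 / 1e12  # bits -> bytes -> TB

low  = tb_per_year(3)      # ~11.8 TB/year
high = tb_per_year(4)      # ~15.8 TB/year
slow = tb_per_year(0.016)  # 16 kbit/s -> ~0.06 TB/year
```

This reproduces the 11-15 TB/year scientific figure for the 3-4 Mbit/s high-rate stream and the ~0.06 TB/year house-keeping figure for the 16 kbit/s slow stream.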
Data Storage of SDC
Purposes:
- detector verification studies
- calibration
- alignment
- event visualization
- data processing by the general reconstruction program
- data reprocessing
Requirements:
- Tag information for all events during the whole period of data taking must be kept on direct-access disks.
- Raw data taken during the last 9 months and 30% of all ESD should be on direct-access disks (20 TB).
- All taken and reconstructed data must be archived (200 TB).
ORACLE Database Organization
• Organization of the database by machine, server, database, and table (machine A/B → server A/B → database A/B → tasks A, B, C)
  - flexibility in loading, locking, and data volume
  - the loading of machines A and B should be balanced
  - most probably both machines will be Linux Pentiums
• Backup and replication of the database
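The balancing requirement can be illustrated with a greedy placement sketch: each task goes to whichever machine currently carries the smaller load. The function and task names are hypothetical, and real balancing would weigh data volume rather than task counts:

```python
def assign_tasks(tasks, machines=("A", "B")):
    """Greedy balancing sketch: each task is placed on the
    least-loaded machine (load measured as task count here)."""
    load = {m: 0 for m in machines}
    placement = {}
    for task in tasks:
        target = min(machines, key=lambda m: load[m])
        placement[task] = target
        load[target] += 1
    return placement, load

placement, load = assign_tasks(["task_a", "task_b", "task_c", "task_d"])
# load == {"A": 2, "B": 2}
```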
AMS Remote Center(s)
• Monte Carlo production
• Data storage and data access for the remote stations
AMS Remote Station(s) and Center(s)
• Access to the SDC data storage
  - for detector verification studies
  - for detector calibration purposes
  - for alignment
  - for event visualization
• Access to the SDC to get the detector and production status
• Access to SDC computing facilities for science analysis
• Science analysis using the local computing facilities of universities and laboratories
AMS Data Production Flow
[Diagram: servers and producers coordinated through an Oracle RDBMS holding nominal and active tables (hosts, interfaces, producers, servers), a Conditions DB, a Tag DB, and ESD catalogues; raw data is processed into ESD.]
• {I} submit 1st server
• {II} "cold" start
• {III} read "active" tables (available hosts, number of servers, producers, jobs/host)
• {IV} submit servers
• {V} get "run" info (runs to be processed, ESD output path)
• {VI} submit producers (LILO, LIRO, RIRO, ...); notify servers
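The numbered steps can be sketched as a cold-start driver. All table layouts, host names, and paths below are hypothetical stand-ins for the Oracle tables the diagram shows:

```python
# Illustrative sketch of production-flow steps {II}-{VI};
# the real system keeps these tables in an Oracle RDBMS.

def cold_start(nominal_tables):
    """{II} cold start: seed the active tables from the nominal ones."""
    return {"hosts": list(nominal_tables["hosts"]),
            "servers": [], "producers": []}

def submit_servers(active, n_servers=2):
    """{III}/{IV} read available hosts and submit servers on them."""
    for i in range(n_servers):
        host = active["hosts"][i % len(active["hosts"])]
        active["servers"].append({"id": i, "host": host})

def submit_producers(active, runs):
    """{V}/{VI} get run info and submit one producer per run."""
    for run in runs:
        active["producers"].append({"run": run, "output": "esd/" + run})

nominal = {"hosts": ["node1", "node2"]}
active = cold_start(nominal)
submit_servers(active)
submit_producers(active, ["run001", "run002"])
```

The servers would then be notified of the new producers and steer them through the runs listed in the active tables.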
Connectivity from AMS02 on ISS to CHEP
[Diagram: ISS → commercial satellite service (five downlink video feeds & telemetry backup) → White Sands, NM → NISN → MSFC POIC/POCC (telemetry, voice, planning, commanding) and JSC; CERN (SDC) and the MIT LANs connect through vBNS and the Chicago NAP; CHEP connects through ISPs and the Mugunghwa (Koreasat) satellite to its remote center (RC) and remote stations (RS).]
ISP: Internet Service Provider
NAP: Network Access Point
vBNS: very high-speed Backbone Network Service
NISN: NASA Integrated Services Network
POCC: Payload Operations Control Center
POIC: Payload Operations Integration Center
[Diagram: the AMS Remote Center at CHEP: data storage (20-200 TB), a tape library and DB server (~200 TB), disk storage (20 TB), an analysis facility of 200 CPUs built from Linux clusters and cluster servers on Gigabit Ethernet, a display facility, and an Internet server; the AMS Remote Station at Ewha connects via the Mugunghwa (Koreasat) satellite.]
Network Configuration (July-Aug, 2002)
[Diagram: CHEP Gigabit switches and PCs behind an IBM 8271 L3 switch and a Catalyst 6509, linked by Gigabit Ethernet/ATM155 to the Physics Department servers; external connectivity via KORNET, Boranet, KOREN, and KREONET (10 Gbps backbone); international links: APII (Japan) 8 Mbps (2002), APII (US) 45 Mbps (2002), GEANT-TEIN (EU) 10~45 Mbps (2002) toward CERN, KEK, and Fermilab; research traffic 45 Mbps out of a 145 Mbps total.]
Connectivity to the Outside from CHEP
[Diagram: KOREN topology: Seoul ↔ Daejeon ↔ Daegu (CHEP, Regional Center); links to APII China (IHEP), APII USA (Chicago, 1 Gbps; StarTap, ESnet), TEIN (CERN), and APII Japan (KEK).]
■ Singapore (1): SingAREN through APII (2 Mbps)
■ China (1, preplanned): CSTNET through APII
[Diagram: international paths: US (FNAL) via APII-TransPAC, Europe (CERN) via TEIN, and the Hyunhai/Genkai Korea-Japan link.]