Computing and Ground Systems in AMS02
AMS TIM @CERN, July 2003
Alexei Klimentov — Alexei.Klimentov@cern.ch
MIT
Outline
• AMS02 Data Flow
• AMS02 Ground Centers
• Science Operation Center Architecture: choice of HW, cost estimation, implementation plan
• Data Transmission SW
• TReK SW
Alexei Klimentov. AMS TIM. July 2003.
ISS to Remote AMS Centers Data Flow
[Diagram] Science and monitoring data leave the ISS via ACOP and the High Rate Frame MUX and pass through NASA's ground infrastructure: the White Sands, NM facility (real-time and "dump" data, plus playback of stored data and White Sands LOR recordings), the Payload Data Service System, and the Payload Operation and Integration Center (POIC) at Marshall Space Flight Center, AL, which handles external communications with long-term and short-term buffering. The AMS GSC buffers the data before file transfer to the AMS Payload Operations Control Center (commanding, monitoring, online analysis; real-time H&S data and near-real-time/"dump" data) and to the AMS Science Operations Center (event reconstruction, batch and interactive physics analysis, data archiving), which in turn serves the AMS Regional Centers.
Alexei Klimentov. AMS TIM. July 2003.
AMS Ground Centers (Ground Support Computers)
• At Marshall Space Flight Center (MSFC), Huntsville, AL
• Receives monitoring and science data from the NASA Payload Operation and Integration Center (POIC)
• Buffers data until retransmission to the AMS Science Operation Center (SOC) and, if necessary, to the AMS Payload Operations and Control Center (POCC)
• Runs unattended 24h/day, 7 days/week
• Must buffer about 600 GB (2 weeks of data; see the sizing sketch below)
Alexei Klimentov. AMS TIM. July 2003.
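The 600 GB figure is easy to sanity-check. A back-of-the-envelope sketch in Python; the implied ~4 Mbit/s average rate is an inference from the slide's "600 GB / 2 weeks", not a quoted AMS figure:

```python
# Back-of-the-envelope sizing of the GSC buffer, cross-checked against
# the ~600 GB / 2 weeks quoted on this slide. Figures are illustrative.

GB = 1e9                # decimal gigabyte, in bytes
SECONDS_PER_DAY = 86_400

def implied_rate_mbit(buffer_gb: float, days: float) -> float:
    """Average data rate (Mbit/s) that fills `buffer_gb` in `days` days."""
    return buffer_gb * GB * 8 / (days * SECONDS_PER_DAY) / 1e6

def required_buffer_gb(rate_mbit: float, days: float) -> float:
    """Buffer (GB) needed to hold `days` days of data at `rate_mbit` Mbit/s."""
    return rate_mbit * 1e6 / 8 * days * SECONDS_PER_DAY / GB

print(f"600 GB over 14 days ~ {implied_rate_mbit(600, 14):.1f} Mbit/s average")
print(f"2 weeks at 4 Mbit/s ~ {required_buffer_gb(4, 14):.0f} GB")
```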
AMS Ground Centers (Payload Operation and Control Center)
• AMS02 "Counting Room"
• Usual source of AMS commands
• Receives H&S, monitoring, science and NASA data in real-time mode
• Monitors the detector state and performance
• Processes about 10% of the data in near-real-time mode to provide fast feedback to the shift taker (see the sketch below)
• Video distribution "box"
• Voice loops with NASA
• Computing facilities:
 – Primary and backup commanding stations
 – Detector and subdetector monitoring stations
 – Stations for event display and subdetector status displays
 – Linux servers for online data processing and validation
Alexei Klimentov. AMS TIM. July 2003.
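The slide does not say how the ~10% near-real-time sample is selected; a minimal sketch assuming a simple 1-in-10 prescale (the selection scheme and all names are hypothetical):

```python
# Minimal sketch of a 10% near-real-time prescale for shift-taker monitoring.
# The 1-in-10 prescale is an assumption; the slide only states that about
# 10% of the data is processed in near-real-time mode.

def prescaled(events, prescale: int = 10):
    """Yield every `prescale`-th event from an incoming stream."""
    for i, event in enumerate(events):
        if i % prescale == 0:
            yield event

# Usage: feed the fast monitoring pipeline from the real-time stream.
fast_sample = prescaled(range(100))    # stand-in for the event stream
print(sum(1 for _ in fast_sample))     # -> 10 events out of 100
```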
AMS Ground Centers (Science Operation Center)
• Receives a complete copy of ALL data
• Data reconstruction, calibration, alignment and processing; generates event summary data and performs event classification
• Science analysis
• Archives and records ALL raw, reconstructed and H&S data
• Data distribution to AMS Universities and Laboratories
Alexei Klimentov. AMS TIM. July 2003.
AMS Ground Centers (Regional Centers)
• Analysis facility to support physicists from geographically close AMS Universities and Laboratories;
• Monte-Carlo production;
• Provide access to SOC data storage (event visualisation, detector and data production status, samples of data, video distribution);
• Mirroring of AMS DST/ESD.
Alexei Klimentov. AMS TIM. July 2003.
AMS Data Volume (TBytes)
[Table comparing data volumes for the STS-91 and ISS missions]
Alexei Klimentov. AMS TIM. July 2003.
Symmetric MultiProcessor Model
[Diagram: experiment data feeding tape storage and terabytes of disks]
Alexei Klimentov. AMS TIM. July 2003.
Scalable Model
[Diagram: disk & tape storage, terabytes of disks]
Alexei Klimentov. AMS TIM. July 2003.
AMS02 Benchmarks
Execution time of the AMS "standard" job compared to CPU clock 1)
1) V. Choutko, A. Klimentov, AMS note 2001-11-01
Alexei Klimentov. AMS TIM. July 2003.
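The benchmark idea — time a fixed "standard" job and relate the result to the CPU clock — can be illustrated with a toy workload. Everything below is a stand-in; the actual benchmark and results are in AMS note 2001-11-01:

```python
# Hypothetical illustration of the benchmark: time a fixed CPU-bound job
# and normalize by CPU clock so hosts can be compared. At fixed work,
# elapsed time scales roughly as 1/clock, so elapsed * clock is ~constant
# across hosts of the same architecture.

import time

def standard_job(n: int = 2_000_000) -> float:
    """Stand-in CPU-bound workload playing the role of the AMS standard job."""
    total = 0.0
    for i in range(1, n):
        total += 1.0 / (i * i)
    return total

start = time.perf_counter()
standard_job()
elapsed = time.perf_counter() - start

cpu_clock_mhz = 1000   # assumption: in practice read from /proc/cpuinfo
print(f"elapsed {elapsed:.2f} s on a {cpu_clock_mhz} MHz CPU "
      f"-> {elapsed * cpu_clock_mhz:.0f} s*MHz (lower is better)")
```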
AMS SOC (Data Production Requirements)
A complex system consisting of computing components including I/O nodes, worker nodes, data storage and networking switches; it should perform as a single system.
Requirements:
• Reliability – high (24h/day, 7 days/week)
• Performance goal – process data "quasi-online" (with a typical delay < 1 day)
• Disk space – 12 months of data "online"
• Minimal human intervention (automatic data handling, job control and bookkeeping; see the sketch below)
• System stability – months
• Scalability
• Price/performance
Alexei Klimentov. AMS TIM. July 2003.
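The "minimal human intervention" requirement amounts to an automatic production loop with retry and bookkeeping. A minimal sketch, assuming a flat-file bookkeeping store and a placeholder job submission command; all names here are hypothetical, not the actual AMS production system:

```python
# Sketch of an unattended production loop: submit one job per run, retry
# on failure, and persist the outcome so the loop can resume after a
# restart. The bookkeeping layout and submit command are placeholders.

import json, subprocess, time
from pathlib import Path

BOOKKEEPING = Path("production_status.json")   # hypothetical bookkeeping file

def load_status() -> dict:
    return json.loads(BOOKKEEPING.read_text()) if BOOKKEEPING.exists() else {}

def save_status(status: dict) -> None:
    BOOKKEEPING.write_text(json.dumps(status, indent=2))

def submit_job(run_id: str) -> int:
    """Submit one reconstruction job; the command line is a placeholder."""
    return subprocess.run(["echo", "reconstruct", run_id]).returncode

def production_loop(runs, max_retries: int = 3) -> None:
    status = load_status()
    for run in runs:
        if status.get(run) == "done":
            continue                      # already processed: quasi-online catch-up
        for attempt in range(max_retries):
            if submit_job(run) == 0:
                status[run] = "done"
                break
            time.sleep(60)                # automatic retry, no operator needed
        else:
            status[run] = "failed"        # flagged for (rare) human intervention
        save_status(status)

production_loop(["run_0001", "run_0002"])
```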
AMS Science Center Computing Facilities
[Diagram]
• Analysis facilities (Linux cluster): interactive and batch physics analysis; 10-20 dual-processor PCs
• Central data services: shared tape servers (tape robots; LTO and DLT tape drives; 5 PC servers) and shared disk servers (25 TB of disk; 6 PC-based servers)
• AMS physics services: N data servers and production facilities for batch data processing; 40-50 dual-CPU Linux computers
• Home directories & registry; engineering cluster (Linux, Intel and AMD; 5 dual-processor PCs); consoles & monitors
• Linked to the AMS Regional Centers over the CERN/AMS network
Alexei Klimentov. AMS TIM. July 2003.
AMS Computing Facilities (projected disk and CPU characteristics)
[Table of projected disk and CPU characteristics]
Alexei Klimentov. AMS TIM. July 2003.
AMS02 Computing Facilities Y2000-2005 (cost estimate)
[Table of cost estimates]
Alexei Klimentov. AMS TIM. July 2003.
AMS Computing Facilities (implementation plan)
[Table: implementation plan]
Alexei Klimentov. AMS TIM. July 2003.
CERN's Network Connections
[Diagram of CERN's external network links] National research networks via SWITCH and RENATER (1 Gb/s); IN2P3 (155 Mb/s, plus a 2 Mb/s mission-oriented link); TEN-155, the Trans-European Network at 155 Mb/s (CERN public connection 2x255 Mb/s); WHO (39/155 Mb/s); commercial connectivity via KPNQwest (US, 1 Gb/s) and C-IXP; a further 45 Mb/s link.
Alexei Klimentov. AMS TIM. July 2003.
CERN's Network Traffic
[Diagram of traffic on CERN's external links: TEN-155 carries about 40 Mb/s out and 38 Mb/s in; the other links (RENATER, SWITCH, IN2P3, KPNQwest (US)) show incoming rates between 0.1 and 25 Mb/s against their quoted bandwidths]
CERN total: ~36 TB/month in and out.
AMS raw data: 0.66 TB/month = 2 Mb/s (1 Mb/s = 11 GB/day).
Alexei Klimentov. AMS TIM. July 2003.
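The two rate conversions quoted above can be cross-checked in a few lines:

```python
# Cross-check of the rate conversions quoted on this slide.

MBIT = 1e6  # bits

# 1 Mbit/s sustained for one day:
gb_per_day = 1 * MBIT / 8 * 86_400 / 1e9
print(f"1 Mbit/s = {gb_per_day:.1f} GB/day")        # ~10.8, i.e. ~11 GB/day

# 0.66 TB/month of AMS raw data expressed as an average rate:
rate_mbit = 0.66e12 * 8 / (30 * 86_400) / MBIT
print(f"0.66 TB/month = {rate_mbit:.1f} Mbit/s")    # ~2.0 Mbit/s
```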
Data Transmission
• Will AMS need a dedicated line to send data from MSFC to the ground centers, or can the public Internet be used?
• What software (SW) must be used for bulk data transfer, and how reliable is it?
• What data transfer performance can be achieved?
High-rate data transfer between MSFC, AL and the POCC/SOC, between the POCC and the SOC, and between the SOC and the Regional Centers will be of paramount importance.
Alexei Klimentov. AMS TIM. July 2003.
Data Transmission SW 1)
Why not the File Transfer Protocol (ftp), ncftp, etc.? We need:
• to speed up data transfer
• to encrypt sensitive data while leaving bulk data unencrypted
• to run in batch mode with automatic retry in case of failure
We started looking around and settled on bbftp in September 2001 (we are still looking for good network monitoring tools). bbftp was developed for BaBar and is used to transmit data from SLAC to IN2P3 in Lyon; we adapted it for AMS and wrote service and control programs (sketched below).
1) A. Elin, A. Klimentov, AMS note 2001-11-02; P. Fisher, A. Klimentov, AMS note 2001-05-02
Alexei Klimentov. AMS TIM. July 2003.
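The AMS service and control programs themselves are not shown in the talk. Below is a schematic sketch of a batch-mode wrapper with automatic retry around the bbftp client; "-u" (user name) and "-e" (command string) are standard bbftp client options, but the host names, paths and retry policy are placeholders, not the actual AMS setup:

```python
# Schematic batch-mode bbftp wrapper with automatic retry, in the spirit
# of the AMS service/control programs (AMS note 2001-11-02). Host names
# and paths are placeholders; parallel-stream and logging options are
# omitted to stay version-neutral.

import subprocess, time

def transfer(local: str, remote: str, host: str,
             retries: int = 5, backoff_s: int = 120) -> bool:
    cmd = ["bbftp", "-u", "amsuser", "-e", f"put {local} {remote}", host]
    for attempt in range(1, retries + 1):
        if subprocess.run(cmd).returncode == 0:
            return True                    # transfer OK, move to next file
        time.sleep(backoff_s * attempt)    # back off before the next retry
    return False                           # give up; flag for the operator

if not transfer("/data/ams/run_0001.dat", "/buffer/run_0001.dat",
                "gsc.msfc.example"):
    print("transfer failed after retries: run_0001.dat")
```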
Data Transmission SW (tests)
[Table of bbftp data transmission test results]
Alexei Klimentov. AMS TIM. July 2003.
Data Transmission Tests (conclusions)
• In its current configuration the Internet provides sufficient bandwidth to transmit AMS data from MSFC, AL to the AMS ground centers at rates approaching 9.5 Mbit/s
• bbftp is able to transfer and store data on a high-end PC reliably, with no data loss
• bbftp performance is comparable to that measured with network monitoring tools
• bbftp can be used to transmit data simultaneously to multiple sites
Alexei Klimentov. AMS TIM. July 2003.