
VHE γ-ray Astronomy and Data Handling at MAGIC

Explore the world of very high energy gamma-ray astronomy with MAGIC, currently the largest-dish Cherenkov telescope in operation. Learn about its data handling techniques and the use of GRID technology in processing and analyzing the vast amount of data collected, and discover the fascinating astrophysics and fundamental physics aspects of VHE gamma-ray astronomy.


Presentation Transcript


  1. Summary
  • Introduction: VHE γ-ray astronomy and MAGIC
  • Data handling at MAGIC
  • GRID at MAGIC
  • Virtual observatory
  • Conclusions

  2. VHE astronomy
  • MAGIC is a Cherenkov telescope (system) devoted to the study of the most energetic electromagnetic radiation, i.e. very high energy (VHE, E > 100 GeV) γ-rays
  • VHE γ-rays are produced in violent non-thermal processes in the most extreme environments in our Universe
  • Astrophysics of the latest stellar stages, AGNs, GRBs
  • Fundamental physics
  • Topics range over SNRs, µQSRs, AGNs, GRBs, pulsars, cosmology, quantum gravity, the origin of CRs and dark matter

  3. MAGIC
  • MAGIC is currently the largest-dish Cherenkov telescope in operation (17 m diameter)
  • Located at the Observatorio del Roque de los Muchachos on the Canary island of La Palma (Spain)
  • Run by an international collaboration of ~150 physicists from Germany, Spain, Italy, Switzerland, Poland, Armenia, Finland and Bulgaria
  • In operation since fall 2004 (about to finish its 3rd observation cycle)
  • A 2nd telescope (MAGIC-II) is to be inaugurated on September 21st, 2008

  4. IMAGING
  • A segmented PMT camera (577/1039 channels for the first/second telescope) makes it possible to image Cherenkov showers

  5. Raw data volume
  • Number of camera pixels: n
  • Digitization samples: s
  • Precision: p
  • Event rate: R
  • Data volume rate = R × n × s × p
  • Configuration 1: one telescope, 300 MHz digitization system (Oct 2004 – Dec 2006)
  • Configuration 2: one telescope, 2 GHz digitization system (Jan 2007 – Sep 2008)
  • Configuration 3: two telescopes, 2 GHz digitization system (from Sep 2008)
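
The formula is easy to put into numbers. Below is a minimal sketch in Python; the pixel count (577) and digitizer speed come from the slides, while the event rate, samples per pixel and bytes per sample are illustrative assumptions rather than official MAGIC DAQ parameters.

```python
# Sketch: data volume rate = R * n * s * p, per the formula on the slide.
# Pixel count (577) and digitizer speed are from the slides; the event
# rate, samples per pixel and bytes per sample are assumed values.

def data_rate_bytes_per_s(R_hz: float, n_pixels: int, s_samples: int, p_bytes: float) -> float:
    """Data volume rate in bytes/s for event rate R, n pixels,
    s digitization samples per pixel and p bytes of precision per sample."""
    return R_hz * n_pixels * s_samples * p_bytes

# Assumed example: one telescope, 577 pixels, a 2 GHz FADC reading out a
# ~25 ns window (~50 samples), 2 bytes/sample, ~300 Hz trigger rate.
rate = data_rate_bytes_per_s(R_hz=300, n_pixels=577, s_samples=50, p_bytes=2)
print(f"{rate / 1e6:.1f} MB/s -> {rate * 3.15e7 / 1e12:.0f} TB per year of continuous running")
# MAGIC only observes a fraction of the year, which is why the raw volume
# on the next slide (~200 TB/yr) is well below the continuous-running figure.
```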

  6. Data flow
  • La Palma site: DAQ plus a fast on-site analysis; raw data (200 TB/yr) are transferred by FTP to PIC, Barcelona
  • At PIC the data are calibrated (20 TB/yr) and reduced (2 TB/yr); reduced data reach the users via FTP/mail
  • Starting in September 2008, the MAGIC data center is hosted at PIC, Barcelona (a Tier-1 center); it has already been in a test phase for a year
  • It provides: automatic data transfer from La Palma, tape storage of the raw data, automatic data analysis, access to the latest year of calibrated and reduced data, CPU and a disk buffer for data analysis, and a database
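
As a rough illustration, the sketch below encodes the three data tiers quoted on the slide and prints the implied reduction factor at each step; the tier names and volumes come from the slide, the helper code itself is just a toy.

```python
# The three MAGIC data tiers from the slide and the implied reduction
# factor at each processing step (raw -> calibrated -> reduced).
stages = [
    ("raw",        200.0),  # TB/yr, archived on tape at PIC
    ("calibrated",  20.0),  # TB/yr
    ("reduced",      2.0),  # TB/yr, distributed to users via FTP/mail
]

for (name, vol), (next_name, next_vol) in zip(stages, stages[1:]):
    print(f"{name:>10} -> {next_name}: {vol:6.1f} TB/yr -> {next_vol:5.1f} TB/yr "
          f"(x{vol / next_vol:.0f} smaller)")
```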

  7. Data center
  • Disk needs: MAGIC/PIC data center + unlimited tape storage capacity
  • The already-running system consists of:
    • 25 TB of disk space (ramp-up to the final 72 TB foreseen for next September, within schedule)
    • LTO3/LTO4 tape storage and I/O with robots
    • ~20 CPUs (2 GHz) for data processing and analysis
    • automation of data transfer/processing/analysis
    • a database
    • web access

  8. Trends foreseen for 2008
  • Philosophy: adopt the Grid to allow MAGIC users to do better science ("If it ain't broken, don't fix it")
  • Leverage the worldwide mutual trust agreement for Grid certificates to simplify user ID management for:
    • interactive login (ssh → gsi-ssh or equivalent)
    • casual file transfer (https via Apache + mod_gridsite, or gridftp)
  • Move to batch submission via Grid tools in order to unify CPU accounting with the LHC
  • Set up the Grid utility "reliable File Transfer Service" to automate file distribution between the MAGIC data center at PIC and sites which regularly subscribe to many datasets
  • PIC/IFAE will have specific resources to help with this, partially thanks to funding from the EGEE-III project
  • Integrate into the procedure for opening a data center account the additional steps for a user to obtain a Grid certificate and be included as a member of the MAGIC VO

  9. Monte Carlo simulation
  • The recorded data are mainly background events due to charged cosmic rays (CRs)
  • Background rejection needs large samples of Monte Carlo simulated γ-ray and CR showers
  • This is very CPU-consuming (simulating 1 night of background takes > 10⁶ computer days)
  • Access to the simulated samples, MC production coordination and scalability (MAGIC-II, CTA, ...) are further issues: GRID can help with all of these
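
The 10⁶ figure follows from simple order-of-magnitude arithmetic, sketched below. All of the input numbers are illustrative assumptions (they do not appear in the talk): a few hundred Hz of triggered cosmic-ray background, roughly a thousand thrown showers per triggered one, and ~10 s of CPU per simulated shower.

```python
# Back-of-the-envelope check of the "> 10^6 computer days" claim.
# All inputs are assumptions for illustration, not numbers from the talk.
trigger_rate_hz = 300       # triggered CR background rate
hours_per_night = 8         # dark time per observing night
thrown_per_trigger = 1_000  # showers thrown per triggered one (large area/solid angle)
cpu_s_per_shower = 10       # CPU time to simulate one shower

triggered = trigger_rate_hz * hours_per_night * 3600
cpu_days = triggered * thrown_per_trigger * cpu_s_per_shower / 86_400
print(f"{triggered:.1e} triggered CR events/night -> ~{cpu_days:.1e} CPU days to simulate")
```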

  10. The idea
  • H. Kornmayer (Karlsruhe) proposed the following scheme
  • A MAGIC Virtual Organization was created within EGEE-II
  • It involves three national Grid centers:
    • CNAF (Bologna)
    • PIC (Barcelona)
    • GridKA (Karlsruhe)
  • Connect MAGIC resources to enable collaboration
  • 2 subsystems: MC (Monte Carlo) and Analysis
  • Start with MC first

  11. MC Workflow
  • Typical request: "I need 1.5 million hadronic showers with energy E, direction (theta, phi), ... as a background sample for the observation of the Crab Nebula"
  • Run the MAGIC Monte Carlo Simulation (MMCS) and register the output data
  • Simulate the telescope geometry with the reflector program for all interesting MMCS files and register the output data
  • Simulate the starlight background for a given position in the sky and register the output data
  • Simulate the response of the MAGIC camera for all interesting reflector files and register the output data
  • Merge the shower simulation and the starlight simulation and produce a Monte Carlo data sample
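
A minimal sketch of how this chain could be orchestrated is shown below. The stage names (MMCS, reflector, camera, starlight) come from the slide; the paths, function signature and run loop are hypothetical placeholders, not the actual production scripts.

```python
# Hypothetical sketch of the MC workflow: MMCS (air showers) -> reflector
# (telescope optics) -> camera (PMT response), merged with a separately
# simulated starlight background. Stage names are from the slide; paths
# and signatures are placeholders.
from pathlib import Path

def run_stage(program: str, outdir: Path, *infiles: Path) -> Path:
    """Run one simulation stage and 'register' (here: just name) its output.
    In the real system this would submit a Grid job and record the output
    file in the metadata database."""
    outfile = outdir / f"{program}_{infiles[0].stem}.out"
    print(f"[{program:9}] {', '.join(f.name for f in infiles)} -> {outfile.name}")
    return outfile

outdir = Path("mc_output")
# Starlight background for the requested sky position (e.g. the Crab Nebula)
starlight = run_stage("starlight", outdir, Path("crab_fov.pointing"))
for run in range(3):  # a real request would cover ~1.5 million showers
    showers = run_stage("mmcs", outdir, Path(f"run{run:05d}.card"))
    photons = run_stage("reflector", outdir, showers)
    events = run_stage("camera", outdir, photons)
    run_stage("merge", outdir, events, starlight)
```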

  12. Implementation
  • 3 main components:
    • Metadata base: bookkeeping of the requests, their jobs and the data
    • Requestor: users define the parameters by inserting a request into the metadata base
    • Executor: creates Grid jobs by checking the metadata base frequently (via cron) and generating the input files (see the sketch below)
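
A minimal sketch of the Executor's cron-driven pass, assuming a simple SQLite metadata base; the table and column names (requests, id, params, status) are invented for illustration and are not the project's actual schema.

```python
# Minimal Executor sketch: poll the metadata base for pending requests
# and turn each one into a Grid job input file. The SQLite schema is
# invented for illustration.
import sqlite3
from pathlib import Path

def executor_pass(db_path: str, jobdir: Path) -> None:
    """One cron-driven pass: fetch pending requests, emit job input files
    and mark the requests as submitted."""
    jobdir.mkdir(exist_ok=True)
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT id, params FROM requests WHERE status = 'pending'"
        ).fetchall()
        for req_id, params in rows:
            # Generate the input file that the Grid job will consume.
            (jobdir / f"job_{req_id}.input").write_text(params)
            con.execute(
                "UPDATE requests SET status = 'submitted' WHERE id = ?",
                (req_id,),
            )
        con.commit()
    finally:
        con.close()

# A cron entry would invoke this every few minutes, e.g.:
# executor_pass("magic_mc.db", Path("grid_jobs"))
```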

  13. Status of MC production
  • The last data challenge (from September 2005) produced ~15,000 simulated γ-ray showers, with ~4% failures
  • After that H. Kornmayer left, and the project has been stalled since
  • A new crew is taking over the VO (UCM Madrid + INSA + Dortmund)
  • The plan is to start producing MC for MAGIC-II soon

  14. Virtual observatory
  [Figure: MAGIC skymap of the Crab Nebula, September 2006]
  • MAGIC will share data with other experiments (GLAST, VERITAS, HESS... more?)
  • There might be some time reserved for external observers (moving from experiment to observatory)
  • In general, MAGIC results should be more accessible to the astrophysics community
  • MAGIC will release data at the PIC data center using GRID technology, in FITS format
  • Step-by-step approach:
    • published data (skymaps, light curves, spectra, ...) → imminent
    • data shared with other experiments (GLAST) → soon
    • data for external observers → mid-term
  • A standard format has to be defined (with other experiments and the future CTA in mind)
  • Eventually integrated within a Virtual Observatory (under investigation)
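
As an illustration of what a FITS release could look like, the sketch below writes a toy light curve with astropy; the column names, header keywords and values are made up for the example and are not the actual MAGIC release schema.

```python
# Toy example: write a light curve as a FITS binary table with astropy.
# Column names, header keywords and values are illustrative only.
import numpy as np
from astropy.io import fits

mjd = np.array([54000.0, 54001.0, 54002.0])   # observation times (MJD)
flux = np.array([3.1e-11, 2.9e-11, 3.4e-11])  # integral flux, cm^-2 s^-1
flux_err = 0.1 * flux

cols = fits.ColDefs([
    fits.Column(name="MJD", format="D", array=mjd),
    fits.Column(name="FLUX", format="D", unit="cm-2 s-1", array=flux),
    fits.Column(name="FLUX_ERR", format="D", unit="cm-2 s-1", array=flux_err),
])
hdu = fits.BinTableHDU.from_columns(cols, name="LIGHTCURVE")
hdu.header["TELESCOP"] = "MAGIC"
fits.HDUList([fits.PrimaryHDU(), hdu]).writeto("lightcurve.fits", overwrite=True)
```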

  15. MAGIC-GRID architectural design proposal
  • Metadata database: bookkeeping of the requests, their jobs and the data
  • MAGIC Requestor (client): the user specifies the parameters of a particular job request through an interface; the request is shipped as a VOTable inside a SOAP message (*)
  • MAGIC Executor (service): the server application creates Grid job template files, using the middleware (gLite, LCG), and submits the jobs to the GRID for execution on each of the available Grid resources
  • VO tools: the MAGIC Requestor should allow interaction with VO applications, staying as close as possible to the new emerging astronomical applications
  • Software: MMCS, Reflector, Camera
  • GRID: the workflow is executed on the available Grid nodes within the MAGIC Virtual Organization; the products are stored in a Data Product Storage unit
  • Virtual Observatory accessibility: the MAGIC Executor is notified when the jobs have finished; submitted-job status information and results flow back to the user, and the application will be designed to send the output data to a persistent layer compliant with the emerging VOSpace protocol (to be implemented)
  (*) SOAP: Simple Object Access Protocol. VOTable: XML standard for the interchange of data represented as a set of tables (http://www.ivoa.net/Documents/REC/VOTable/VOTable-20040811.html)
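
To make the Requestor side concrete, the sketch below encodes an MC job request as a VOTable with astropy, i.e. the XML payload that would travel inside the SOAP message; the parameter names and values are invented for illustration.

```python
# Sketch: encode an MC job request as a VOTable, the XML payload the
# Requestor would ship inside a SOAP message. Parameter names and
# values are invented for illustration.
from astropy.table import Table
from astropy.io.votable import from_table

request = Table(
    rows=[("hadronic", 1_500_000, 30.0, 0.0, "Crab")],
    names=("primary", "n_showers", "theta_deg", "phi_deg", "target"),
)
votable = from_table(request)
votable.to_xml("mc_request.xml")  # XML file to embed in the SOAP request
print(open("mc_request.xml").read()[:300], "...")
```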

  16. Summary
  • The MAGIC scientific program requires large computing power and storage capacity
  • The data center at PIC/IFAE (Barcelona) is up and starts official operation in September 2008, together with MAGIC-II
  • Massive MC production for MAGIC-II will involve the GRID
  • (Some) data will be released through a virtual observatory
  • A good benchmark for other present and future astro-particle projects
