GREAT Workshop on Astrostatistics and Data Mining in Astronomical Databases, La Palma, Spain, May 30 - June 3, 2011. Data Management at Gaia Data Processing Centers. Pilar de Teodoro Idiago, Gaia Database Administrator, European Space Astronomy Centre (ESAC), Madrid, Spain. http://www.rssd.esa.int/Gaia
Data Processing Centres
• DPCE (ESAC)
• DPCB (Barcelona)
• DPCC (CNES)
• DPCG (Obs. Geneva / ISDC)
• DPCI (IoA, Cambridge)
• DPCT (Torino)
All contributed to this talk.
Processing Overview (simplified): many iterations between the treatments and the Catalogue
• Initial Data Treatment (CU3): turn CCD transits into source observations on the sky; should be a linear transform
• Astrometric Treatment (CU3/SOC): fix the geometrical calibration, adjust the attitude, fix the source positions
• Photometry Treatment (CU5): calibrate the flux scale, give magnitudes
• Spectral Treatment (CU6): calibrate and disentangle, provide spectra
• Further treatments: Solar System (CU4), Non-Single Systems (CU4), Variability (CU7), Astrophysical Parameters (CU8)
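The treatments above are not run once: they repeat in DPAC-wide iterations, each cycle consuming the previous Main Database (MDB) version and producing the next. The sketch below only illustrates that loop structure; all class and interface names are hypothetical, not DPAC APIs.

```java
import java.util.List;

// Illustrative sketch of the iterative processing cycle described above.
// All names here are hypothetical; the real DPAC pipeline is far richer.
public final class ProcessingCycleSketch {

    /** Hypothetical placeholder for one version of the Main Database (MDB). */
    static final class MdbSnapshot {
        final int version;
        MdbSnapshot(int version) { this.version = version; }
    }

    /** Hypothetical interface: one treatment applied to an MDB snapshot. */
    interface Treatment {
        MdbSnapshot apply(MdbSnapshot input);
    }

    public static void main(String[] args) {
        // Order follows the slide: astrometric, photometric and spectroscopic
        // treatments run against the current MDB; their outputs feed the next one.
        List<Treatment> cycle = List.of(
                in -> in,   // astrometric treatment (CU3/SOC)
                in -> in,   // photometric treatment (CU5)
                in -> in    // spectroscopic treatment (CU6)
        );

        MdbSnapshot mdb = new MdbSnapshot(0);  // output of Initial Data Treatment
        int iterations = 3;                    // "many iterations" over the mission
        for (int i = 1; i <= iterations; i++) {
            for (Treatment t : cycle) {
                mdb = t.apply(mdb);            // each treatment refines the data
            }
            mdb = new MdbSnapshot(i);          // publish the new MDB version
            System.out.println("Completed DPAC-wide iteration " + i);
        }
    }
}
```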
DPCC (CNES)
• CU4 (Object Processing)
• CU6 (Spectroscopic Processing)
• CU8 (Astrophysical Parameters)
• Solutions are evaluated against:
  • performance
  • scalability of the solution
  • data safety
  • impact on the existing software
  • impact on the hardware architecture
  • cost of the solution over the whole mission
  • durability of the solution
  • administration and monitoring tools
DPCG (Obs. Geneva / ISDC)
• Detection and characterization of variable sources observed by Gaia (CU7)
• Analytical queries must be run over sources or processing results (attributes) to support research requirements that are not known in advance
• Time-series reconstruction while importing MDB data
• Parameter analysis for simulations and configuration changes on a historical database
• ETL-like support is needed for external data
• At present Apache OpenJPA is used, with PostgreSQL as the database (see the sketch after this list)
• Other alternatives: Hadoop, SciDB, VoltDB and extensions to PostgreSQL
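As a rough illustration of the OpenJPA-over-PostgreSQL setup mentioned above, the sketch below shows a hypothetical JPA entity for one photometric measurement and a JPQL query that reconstructs the time series of a source. The class, field and table layout are assumptions made for illustration, not the actual DPCG data model.

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.TypedQuery;

// Hypothetical entity: one photometric measurement of a Gaia source.
// Shown only to illustrate the OpenJPA/PostgreSQL pattern; the real DPCG
// schema is different.
@Entity
public class PhotometricPoint {

    @Id
    private long id;
    private long sourceId;      // Gaia source identifier
    private double obsTime;     // observation time (e.g. in TCB days)
    private double gMag;        // calibrated G-band magnitude

    protected PhotometricPoint() { }  // no-arg constructor required by JPA

    public long getSourceId() { return sourceId; }
    public double getObsTime() { return obsTime; }
    public double getGMag()    { return gMag; }

    /** Reconstruct the time series of one source, ordered by observation time. */
    public static List<PhotometricPoint> timeSeries(EntityManager em, long sourceId) {
        TypedQuery<PhotometricPoint> q = em.createQuery(
                "SELECT p FROM PhotometricPoint p "
              + "WHERE p.sourceId = :src ORDER BY p.obsTime", PhotometricPoint.class);
        q.setParameter("src", sourceId);
        return q.getResultList();
    }
}
```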
DPCI (IoA, Cambridge)
• Given the use case:
  • bulk processing of a large data set
  • data volume increases with time (DPAC-wide iterations)
• We can state that:
  • random data access is expensive and less efficient than sequential access
  • a hub-and-spoke architecture is prone to bottlenecks and therefore does not scale well with the number of clients
• Hadoop adopted in 2009:
  • HDFS: distributed filesystem
  • Map/Reduce jobs to minimize synchronization (see the sketch after this list)
  • the data access layer (DAL) becomes much simpler
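A minimal sketch of the Map/Reduce pattern described above, assuming observation records arrive as simple text lines keyed by a source identifier: the mapper emits each record under its source id so the shuffle groups all transits of a source, and the reducer then processes one source at a time with no explicit synchronization between tasks. This is an illustration only, not DPCI's actual code, and the record layout is an assumption.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch only: group observation records by a (hypothetical) source id so that
// each reduce call sees every transit of one source.
public class GroupBySourceSketch {

    /** Emits each input line keyed by its leading sourceId field. */
    public static class ObservationMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            // Assumed record layout: "sourceId,field1,field2,..."
            String[] parts = line.toString().split(",", 2);
            ctx.write(new LongWritable(Long.parseLong(parts[0])), line);
        }
    }

    /** Receives all transits of one source; real code would process them here. */
    public static class SourceReducer
            extends Reducer<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void reduce(LongWritable sourceId, Iterable<Text> transits, Context ctx)
                throws IOException, InterruptedException {
            int count = 0;
            for (Text transit : transits) {
                count++;  // placeholder for the per-source processing
            }
            ctx.write(sourceId, new Text("transits=" + count));
        }
    }
}
```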
DPCT (Torino)
• CU3 AVU (Astrometric Verification Unit) processing
• IGSL (Initial Gaia Source List) support
• Persistent data management