Preliminary results for the Gaia-ESO UVES tests

M. Steffen
Team members (expected FTE, UVES)
• C. Allende Prieto (IAC) 0.1
• Matthias Steffen (AIP) 0.3
• Swetlana Hubrig (AIP) 0.1
• Eric Depagne (AIP) 0.2
• Michael Weber (AIP) 0.1
• Total 0.8
Methodology
• F90 code FERRE
• Direct χ² minimization of the difference between model fluxes and observations (sketched below)
• Gaussian PSF; cubic Bézier model-flux interpolation from a multi-dimensional database (Teff, log g, [Fe/H], [α/Fe], microturbulence ("micro"), …)
• Kurucz and MARCS databases in hand
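FERRE itself is Fortran; the following is only a minimal Python sketch of the direct χ² approach, assuming a toy flux grid (the arrays, sizes, and values are invented for illustration, and plain linear interpolation stands in for FERRE's cubic Bézier scheme):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import minimize

# Toy 3D grid of model fluxes: axes (Teff, logg, [Fe/H]); last axis = wavelength pixel.
teff = np.linspace(4000.0, 6500.0, 6)
logg = np.linspace(1.0, 5.0, 5)
feh = np.linspace(-2.0, 0.5, 6)
rng = np.random.default_rng(0)
grid = rng.uniform(0.5, 1.0, size=(teff.size, logg.size, feh.size, 200))

# Linear interpolation in the 3 parameters; allow gentle extrapolation at the edges.
interp = RegularGridInterpolator((teff, logg, feh), grid,
                                 bounds_error=False, fill_value=None)

def chi2(params, obs_flux, obs_err):
    """Direct chi^2 between the interpolated model flux and the observation."""
    model = interp(params)[0]
    return np.sum(((obs_flux - model) / obs_err) ** 2)

# Fake "observation": a grid point plus Gaussian noise.
obs = grid[3, 2, 4] + rng.normal(0.0, 0.01, 200)
err = np.full(200, 0.01)

res = minimize(chi2, x0=[5200.0, 3.5, -0.8], args=(obs, err), method="Nelder-Mead")
print(res.x)  # best-fit (Teff, logg, [Fe/H]); true values here: 5500, 3.0, 0.0
```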
Methodology
• 3D analysis (Teff, log g, [Fe/H]): 2 min per star (20 min for 10 runs on a single core)
• 4D analysis (Teff, log g, [Fe/H], [α/Fe]): 12 min per star (2 hours for 10 runs on a single core)
• 5D analysis (Teff, log g, [Fe/H], [α/Fe], micro): 45 min per star (7 hours for 10 runs on 1 core)
• BUT the problem is embarrassingly parallel (the code is already parallel with OpenMP), so the computing time is the same for 1 star or 1000 stars on a 1000-core cluster (see the sketch after this list)
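The actual parallelization is OpenMP inside the Fortran code; as a language-neutral illustration of why the problem is embarrassingly parallel, here is a Python sketch (fit_star is a placeholder and its returned numbers are dummies):

```python
from multiprocessing import Pool

def fit_star(star_id):
    # Stand-in for one full chi^2 analysis: each star's fit needs no data
    # from any other star, so the work distributes perfectly across cores.
    return star_id, {"Teff": 5000.0, "logg": 4.0, "[Fe/H]": 0.0}

if __name__ == "__main__":
    with Pool() as pool:                      # one worker per available core
        results = pool.map(fit_star, range(1000))
    print(len(results), "stars fitted")       # wall time ~ one star per core
```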
Methodology
• Continuum (re-)normalization is done by breaking the spectrum into pieces (1000) and dividing each piece by its mean value (sketched below). Linear, hence robust to noise, though noise is not a worry in this particular test.
• The same treatment is applied to models and observations.
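A minimal sketch of this normalization, assuming only that the "pieces" are equal-length chunks in pixel space (the chunk boundaries and array sizes here are illustrative):

```python
import numpy as np

def normalize(flux, npieces=1000):
    """Break the spectrum into npieces chunks and divide each chunk by
    its mean flux; being linear in the data, this is robust to noise."""
    out = np.empty(flux.size)
    for chunk in np.array_split(np.arange(flux.size), npieces):
        out[chunk] = flux[chunk] / flux[chunk].mean()
    return out

# The identical treatment goes to both the observed and the model spectra,
# so the chi^2 fit compares shapes rather than absolute continuum levels.
spec = 1.0 + 0.1 * np.random.default_rng(1).standard_normal(100_000)
norm_spec = normalize(spec)
```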
Test on 4 stars
We have performed different analyses:
• Kurucz models/line lists (3D and 4D)
• Kurucz fluxes, but forcing a perfect match to the solar spectrum
• MARCS models/VALD line lists (4D and 5D)
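One way such a solar-match constraint could be implemented (an assumption for illustration, not a description of the actual procedure) is to rescale every model flux by the per-pixel ratio of the observed to the synthetic solar spectrum, so that line-list errors cancel to first order:

```python
import numpy as np

def solar_correct(grid_flux, solar_model, solar_obs):
    """Hypothetical per-pixel correction: after this, the solar model
    reproduces the observed solar spectrum exactly, and the same
    correction factors are applied to every other model in the grid."""
    ratio = np.asarray(solar_obs) / np.asarray(solar_model)
    return grid_flux * ratio   # broadcasts over the leading grid axes
```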
Reference stars

Fitted parameters for the four test stars (columns MRD, MPD, MRG, MPG), grouped by parameter:

run         MRD   MPD   MRG   MPG  |  MRD   MPD   MRG   MPG  |   MRD    MPD    MRG    MPG
                  Teff (K)         |         log g           |          [Fe/H]
Reference  5779  5308  4433  4247  | 4.44  4.41  2.50  1.59  |  0.00  -0.89  +0.29  -0.54
K3dfix     5820  5555  4633  4444  | 4.55  4.45  3.27  2.36  |  0.01  -0.48  -0.01  -0.59
K4d        5840  5365  4459  4385  | 4.86  4.77  3.20  2.36  | -0.31  -1.13  -0.27  -0.79
K4dfix     5818  5595  4567  4495  | 4.54  4.48  3.23  2.42  |  0.01  -0.46  -0.06  -0.52
M4d        5864  5490  4713  4378  | 4.49  4.54  3.43  2.00  | -0.27  -0.98  -0.00  -0.80
M4dnomac   5972  5658  4578  4406  | 4.83  4.93  3.36  2.48  | -0.26  -0.86  -0.27  -0.86
M5d        5594  5249  4417  4366  | 4.20  4.21  2.48  2.00  | -0.31  -1.02  -0.09  -0.79
Reference stars

Mean difference (<diff>) and scatter (stdev) of the fitted parameters with respect to the reference values:

                                          Teff (K)   log g   [Fe/H]
Kurucz 3D (solar match forced)   <diff>      171      0.42     0.02
                                 stdev        90      0.40     0.29
Kurucz 4D                        <diff>       71      0.56    -0.34
                                 stdev        48      0.20     0.15
Kurucz 4D (solar match forced)   <diff>      177      0.43     0.03
                                 stdev       113      0.40     0.32
MARCS 4D (macro included)        <diff>      170      0.38    -0.23
                                 stdev        84      0.40     0.09
MARCS 4D                         <diff>      212      0.67    -0.28
                                 stdev        94      0.25     0.24
MARCS 5D (macro included)        <diff>      -35     -0.01    -0.27
                                 stdev       125      0.30     0.11
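These summary statistics appear to be the mean and sample standard deviation of the fitted-minus-reference differences over the four stars; for instance, the Kurucz 4D Teff row is reproduced by:

```python
import numpy as np

# Teff values from the table above: K4d fits vs. the Reference row.
fitted = np.array([5840.0, 5365.0, 4459.0, 4385.0])       # MRD, MPD, MRG, MPG
reference = np.array([5779.0, 5308.0, 4433.0, 4247.0])

diff = fitted - reference
print(f"<diff> = {diff.mean():.1f} K, stdev = {diff.std(ddof=1):.1f} K")
# -> <diff> = 70.5 K, stdev = 47.6 K, consistent with the tabulated 71 and 48
```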
Other tests
• Used the simulations from the Nice group (50 stars with random Gaussian noise)
• Last year we analyzed them using Kurucz models and found results independent of S/N over the range 40 < S/N < 100: σ([Fe/H]) = 0.11 dex, σ([α/Fe]) = 0.08 dex, σ(Teff) = 130 K, σ(log g) = 0.24 dex
• We have now repeated the experiment with the MARCS (4D) grid, finding σ([Fe/H]) = 0.22 dex, σ([α/Fe]) = 0.02 dex, σ(Teff) = 162 K, σ(log g) = 0.33 dex
Differences
• Different model atmospheres and line lists (the MARCS setup presumably consistent with the simulations). Both use micro = 2.0 km/s (though there is also a MARCS grid with variable micro)
• MARCS has finer steps in [α/Fe] (0.1 vs. 0.5 dex), which explains the smaller scatter in this quantity for MARCS
• Kurucz covers a wider range in Teff (3500-6500 K), [α/Fe] (-1 to +1), and log g (0.5-5.0), but both grids cover the range of the simulations
• Unclear why we fit the simulations made with MARCS better with Kurucz: solar reference abundances? An error somewhere?
Conclusions
• We are getting decent results, but have not yet reached the desirable performance level.
• The code is fast enough for the data flow of the survey.
• We plan to explore different continuum-normalization procedures and compare their performance.
• MARCS and Kurucz models perform similarly on these test stars.
• We match the parameters of the MARCS-based simulations better with Kurucz models than with MARCS, i.e. there must be an inconsistency yet to be identified.