
NIKHEF point source search: final update


Presentation Transcript


  1. NIKHEF point source search: final update. Claudio Bogazzi, AWG Videoconference, 28/07/2011

  2. Outline • Run selection and final sample • Data / MC comparison • The algorithm for the search • Ingredients • Q distributions: fixed search, full-sky search, candidate search • Sensitivity and discovery potentials: fixed search, full-sky search, candidate search

  3. Documentation • July 13th, first version of the internal note submitted. • July 21st, second version of the internal note submitted. • July 28th, third version! • What changed from the 2nd version to the last one? 1) All plots made with the SeaTray and run-by-run production. 2) Candidate search results included (see later). • What else? All the old talks on this analysis are at http://www.nikhef.nl/~claudiob/pnt2/index.shtml; the Q & A answers are continuously updated.

  4. Final run selection • Adopted Veronique's criteria for the basic classification of runs, with one exception: runs with livetime < 1000 s are also included. • The DataQuality database table is not up to date for Nov and Dec 2010, so we ran our own script to select basic runs for this period (same criteria). • Not all basic runs are reconstructed within SeaTray (21 days); Thomas has recovered 124 runs so far, not yet available in the DST format (both analyses use it). • Not all basic runs are available for the run-by-run MC (see http://antares.in2p3.fr/internal/dokuwiki/doku.php?id=list_of_problematic_runs ). Total basic runs = 7735; DST basic runs = 7322; DST & RBR basic runs = 7162. 25 sparking runs: 36600, 33610, 38352, 41668, 38347, 34663, 38482, 42511, 33608, 38355, 38351, 36689, 44035, 42507, 38357, 38349, 38348, 44030, 43215, 36670, 43196, 38353, 36666, 43684, 44070.

  5. Data / MC comparison, Λ > -5.2 & β < 1°. Left: cos(zenith) distribution (black = data, red = up/down nu/anu, purple = MUPAGE). Right: cumulative lambda distribution. More plots here: http://antares.in2p3.fr/users/heijboer/PNT/plots_v2/ALL.html

  6. Data / MC comparison: lambda. A weird peak at Λ = -7 appears in the data; it is not from L5 data and occurs only for a limited period of runs.

  7. High baseline runs • High baseline => many background events that are not modeled in the Monte Carlo. • We kept these runs since the discrepancy lies only in the lambda region that is not used in the final sample.

  8. Optimization of the cut • Previous analysis (2007 – 2008 data): lambda chosen in order to optimize the sensitivity. • Final choice was Λ > -5.4 => high muon contribution (40%). • For the current analysis we chose the lambda value by optimizing the discovery potential instead. • Final choice: Λ > -5.2 => 14% muon contribution.

  9. Final sample • Cuts: β < 1°, Λ > -5.2 and θ < 90°. • The final sample consists of 3007 events. (Plot: distribution of the error estimate β.) 2007 – 2008 analysis: 2190 events with Λ > -5.4 (1314 neutrinos). 2007 – 2010 analysis: 3007 events with Λ > -5.2 (2586 neutrinos). A factor 2 more neutrinos. A minimal sketch of the selection is shown below.
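As a minimal illustration, the cuts above could be applied like this, assuming the events sit in a NumPy structured array; the column names ('beta', 'lambda', 'theta') are placeholders, not the names used in the analysis code:

    # Minimal sketch of the final event selection using the cut values
    # from this slide; the column names are illustrative assumptions.
    import numpy as np

    def select_final_sample(events):
        mask = ((events['beta'] < 1.0) &      # angular error estimate < 1 deg
                (events['lambda'] > -5.2) &   # track-fit quality cut
                (events['theta'] < 90.0))     # zenith angle cut
        return events[mask]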

  10. Search method • Unbinned likelihood method. • Information on the number of hits is included. Ingredients: point spread function, background rate, number of hits (see the sketch below).
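For concreteness, here is a hedged sketch of an unbinned likelihood of this kind; the PDF names (psf, bkg_dec_pdf, nhits_sig_pdf, nhits_bkg_pdf) and the bounded maximization are assumptions for the example, not the actual SeaTray implementation:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def log_likelihood(n_sig, events, n_tot, psf, bkg_dec_pdf,
                       nhits_sig_pdf, nhits_bkg_pdf):
        """log L(n_sig) for one source position; 'events' is a structured
        array with the angular distance to the source ('psi'), the
        declination ('dec') and the number of hits ('nhits')."""
        f_sig = n_sig / n_tot
        sig = psf(events['psi']) * nhits_sig_pdf(events['nhits'])
        bkg = bkg_dec_pdf(events['dec']) * nhits_bkg_pdf(events['nhits'])
        return np.sum(np.log(f_sig * sig + (1.0 - f_sig) * bkg))

    def test_statistic(events, n_tot, **pdfs):
        """Q = max_n log L(n) - log L(0), maximized over the number of
        signal events n."""
        res = minimize_scalar(
            lambda n: -log_likelihood(n, events, n_tot, **pdfs),
            bounds=(0.0, 100.0), method='bounded')
        return -res.fun - log_likelihood(0.0, events, n_tot, **pdfs)

The test statistic Q is then the log-likelihood ratio between the best-fit number of signal events and the background-only hypothesis, which is how the Q distributions on the later slides are built.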

  11. Ingredients • Let's make a distinction between the PE generation level and the likelihood computation. 1) PE generation: - background: sin(decl) distribution & Nhits distribution (both from data) - signal: PSF & Nhits extracted from the 3D distribution sin(decl) vs log(β) vs Nhits. 2) Likelihood computation: - background: sin(decl) parameterization from data and Nhits from MC (mu + nu) - signal: spline parameterization for the PSF and E-2 nu spectrum for Nhits. A sketch of the background PE generation follows below.
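For concreteness, a hedged sketch of the background part of the PE generation, assuming the sin(decl) and Nhits distributions are available as 1D histograms from data; the helper names are hypothetical:

    import numpy as np

    def sample_from_hist(counts, edges, n, rng):
        """Draw n samples from a 1D histogram (bin counts and edges)."""
        p = counts / counts.sum()
        idx = rng.choice(len(counts), size=n, p=p)
        return rng.uniform(edges[idx], edges[idx + 1])  # uniform inside the bin

    def generate_background_pe(n_events, dec_counts, dec_edges,
                               nhits_counts, nhits_edges, rng):
        # sin(decl) and Nhits are both drawn from data histograms, as on
        # the slide; right ascension is uniform for atmospheric background
        sin_dec = sample_from_hist(dec_counts, dec_edges, n_events, rng)
        nhits = sample_from_hist(nhits_counts, nhits_edges, n_events, rng)
        ra = rng.uniform(0.0, 360.0, n_events)
        return np.degrees(np.arcsin(sin_dec)), ra, nhits

Signal events would in addition be drawn from the 3D sin(decl) vs log(β) vs Nhits distribution mentioned above.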

  12. Ingredients in plots (Plots: the PE-generation and likelihood parameterizations for signal and background.)

  13. Gain with Nhits. Fixed search: 3σ -> 34% better, 5σ -> 25% better. Full sky search: 3σ -> 29% better, 5σ -> 22% better. (Plot: full sky search, δ = -70°, Λ > -5.2.)

  14. Systematic on Nhits • It will be treated like the angular resolution systematic (a random number from a Gaussian with width equal to our uncertainty). • The value is not yet decided. • Tested the algorithm with neutrino files with reduced OM acceptance (85%), only for the aafit production: • less than 10% effect on the sensitivity, • 5% for the 3σ discovery potential, • negligible for the 5σ discovery potential.

  15. Systematic on Nhits • Computed the sensitivity with a 10% systematic on Nhits (even though we think this is a conservative value). • A challenge for everybody: find the difference! • The 10% systematic has a negligible effect on the sensitivity! (Plots: sensitivity with and without the systematic.)

  16. Systematics (same as before) • Absolute pointing: 0.13° for φ, 0.06° for θ. To take these values into account, we smear the θ and φ angles with two random variables drawn from Gaussian distributions with the above sigmas as widths. • Angular resolution: 15%. We take this number into account by generating a resolution-scale factor drawn from a Gaussian distribution with mean 1 and width equal to the uncertainty. • Background model: vary the background rate used for PE generation according to 2 different spline parameterizations of the data. • Acceptance: 15%. Not applied in the PE but in the limit-setting code. A sketch of the smearing is shown below.
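Purely as an illustration, a minimal sketch of how the pointing and angular-resolution smearings could be applied per pseudo-experiment; the function and argument names are assumptions, not the analysis code:

    import numpy as np

    def apply_systematics(theta_deg, phi_deg, psi_deg, rng,
                          sigma_theta=0.06, sigma_phi=0.13,
                          resolution_unc=0.15):
        """Smear per-event angles (NumPy arrays, degrees) for one PE."""
        # absolute pointing: Gaussian offsets with the slide's sigmas
        theta = theta_deg + rng.normal(0.0, sigma_theta, size=theta_deg.shape)
        phi = phi_deg + rng.normal(0.0, sigma_phi, size=phi_deg.shape)
        # angular resolution: one scale factor per PE, Gaussian(1, 0.15),
        # applied to the signal-source angular distances
        psi = psi_deg * rng.normal(1.0, resolution_unc)
        return theta, phi, psi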

  17. Q-distributions • Full sky search, δ = -70°: Q3σ = 15.46, Q5σ = 22.02. • Candidate search (HESS-J1837-067 = [-6.95, 279.41]): Q3σ = 7.55, Q5σ = 15.67. • Fixed search, δ = 0°: Q3σ = 3.77, Q5σ = 12.00.
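These thresholds come from the tails of the background-only Q distribution; as a hedged sketch, they can be read off as quantiles of a large set of background-only pseudo-experiments (q_bkg below is an assumed array of such Q values):

    import numpy as np
    from scipy.stats import norm

    def q_thresholds(q_bkg):
        """3-sigma and 5-sigma thresholds as quantiles of the
        background-only Q distribution (q_bkg: array of Q values)."""
        p3 = norm.sf(3.0)  # one-sided tail probability, ~1.35e-3
        p5 = norm.sf(5.0)  # ~2.87e-7: needs >~1e7 PEs or an extrapolation
        return np.quantile(q_bkg, 1.0 - p3), np.quantile(q_bkg, 1.0 - p5)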

  18. Discovery potentials – full sky. (Plots: solid = 5σ, dashed = 3σ; δ = -70°.)

  19. Sensitivity - fixed search, Λ > -5.2. (Plot: a factor 2.75 gain.)

  20. Candidate list • 50 sources: 24 "old" + 26 "new" (old = in the previous analysis). • The 26 "new" sources: 18 TeV GALACTIC sources + 8 GeV EXTRA-GALACTIC ones. • Selection done considering ONLY the convolution between the visibility and the gamma-ray flux of the source.

  21. Candidate search • Algorithm tested for one source of the list: HESS J1837-067 (the same one as in the previous analysis), δ = -6.95, α = 279.41. • For each PE we define 50 clusters (one for every source). • Signal events (up to 20) are added only for HESS J1837-067. • Results from the fit: δfit = -6.96, αfit = 279.38; 10.64 fitted signal events with 11 injected. • 3σ = 3.29, 5σ = 5.44. A source with a signal that is 3σ-significant in the full sky search will be almost 5σ in the candidate search. A sketch of the per-source loop follows below.
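Schematically, the candidate search evaluates the test statistic at the 50 fixed source positions. This sketch reuses test_statistic() from the likelihood sketch on slide 10; select_cluster() is a hypothetical helper that selects the events near a source and fills their angular distance 'psi' to it:

    def candidate_search(events, n_tot, positions, pdfs, select_cluster):
        """positions: {name: (dec_deg, ra_deg)} for the 50 candidates;
        returns the test statistic Q per source."""
        results = {}
        for name, (dec, ra) in positions.items():
            cluster = select_cluster(events, dec, ra)  # events near this source
            results[name] = test_statistic(cluster, n_tot, **pdfs)
        return results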

  22. Crosschecks • Spent last week crosschecking every step of the analysis with Valencia. • Acceptance: very small difference, probably due to different binnings ✓ • Selected events: the SAME 3007 events for both analyses ✓ • The last crosscheck is currently ongoing: Aart generated some PEs for the full sky search; the goal is to see the s/b separation by the two methods. • Results expected before the end of the week.

  23. Conclusion • The analysis has been finalized. • Finally adopted the official SeaTray production and run-by-run MC. • Factor 2.5 gain in the sensitivity compared with the previous analysis. • Big improvement in the discovery potential using the Nhits information: for the full sky search, 7.68 -> 5.94 for 3σ (29% better) and 9.64 -> 7.9 for 5σ (22% better), at declination -70°. • We officially ask to unblind the data. • If that is OK, the actual unblinding will be done AFTER the last crosscheck with Valencia (see next slide for the proposal).

  24. We now formally request to unblind the 2007-2010 data • So let us know now if you agree/object. • However, we also propose the following: • Even if unblinding is approved now, we will not unblind until next week. • We will have another public meeting before we unblind, with 2 purposes: • another opportunity for people to raise questions / concerns (if they expose problems, we will halt the unblinding); • discuss some technical issues (who runs which jobs, etc.). • If cross-checks*, further internal checks, or any questions we receive by email or in a meeting yield serious problems/concerns, then we will have at least another meeting to discuss the issue and re-request unblinding. • So we are in fact asking for "provisional permission to unblind, provided no new issues arise in the next week". And there will be more opportunities to raise questions in the mean time. * Many cross-checks are already done; only the detailed 'data challenge' remains.
