FD event selection and data/MC comparisons
D. A. Petyt 13/10/05
• Motivation of this study: look at FD events (with the blinding scheme imposed) to determine
  • whether we observe neutrino interactions at roughly the expected rate (i.e. a constant number of neutrinos per p.o.t.)
  • whether the events look “OK” – both in terms of general appearance and reconstructed quantities
  • what backgrounds exist and how to remove them
• Two approaches:
  • Visual scan of “spill” events – to get a feel for what they look like and how easy it is to distinguish signal from background
  • Simple cut-based analysis – a few simple timing and topological cuts to separate signal from background
Results of the scan
• Scanned reconstructed FD “spill” files (with blinding applied) from Jun 1 – 19 Sept (LE-10 running, 5.1e19 p.o.t.)
• 1348 events were scanned and classified into categories
• Comments on the scan:
  • Beam neutrino events are in general very distinctive and it is very easy to distinguish them from background
  • The large number of “junk” events is dominated by radioactive noise in the spill trigger window. These are also pretty distinctive, although it may be difficult to separate them from the very lowest energy NC events
  • There are a few LI events in the “spill” files; most (if not all) are accompanied by trigger PMT hits (a cut on this removes 15 of the 17 LI events). In any case, they are rather distinctive and do not look at all like neutrino interactions
  • The “problem” events are typically small events that occur close to the edge of the detector:
    • small showers that could be NC events or incoming junk
    • short tracks where the directionality is not clear
Cut #2: timing w.r.t. FD spill prediction: –20 µs < t < 30 µs
(slide shows an example “junk” event)
Cut #3: event must contain a reconstructed track Vast majority of “junk” is 0 track + 1 shower…
Event vertex vs track vertex
• Andy Culling has observed that event vertices for cosmic muons in R1.18 can be displaced by the presence of brems on the track
• The plot at right shows the displacement in the x-y plane for 5 cosmic muons that passed event containment cuts
• Should therefore use track vertex coordinates for containment cuts whenever possible
Cut #5: track vertex r² < 14 m²
Cosmics pile up around detector edge – expect uniform distribution for neutrinos
Result of cut sequence • On the assumption that all 139 candidates are real neutrinos, the efficiency of the selection cuts is 88/139=63.3%.
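For illustration, a minimal sketch of how this cut sequence could be applied in analysis code, covering only the cuts reproduced in these slides (timing window, at least one reconstructed track, track vertex r² < 14 m²); the event schema and variable names are placeholders, and other cuts in the sequence (e.g. the z cut discussed below) are omitted:

```python
import numpy as np

def apply_selection(events):
    """Apply the simple timing/topology cuts in sequence and return a
    boolean mask of selected events plus the overall efficiency.
    `events` is a dict of per-event numpy arrays with illustrative names."""
    n_total = len(events["t_spill_us"])
    mask = np.ones(n_total, dtype=bool)

    # Cut #2: event time relative to the FD spill prediction, -20 us < t < 30 us
    mask &= (events["t_spill_us"] > -20.0) & (events["t_spill_us"] < 30.0)

    # Cut #3: event must contain at least one reconstructed track
    mask &= events["n_tracks"] >= 1

    # Cut #5: track vertex within r^2 < 14 m^2 of the detector axis
    r2 = events["trk_vtx_x"] ** 2 + events["trk_vtx_y"] ** 2
    mask &= r2 < 14.0

    efficiency = mask.sum() / n_total if n_total else 0.0
    return mask, efficiency

# The slide quotes 88 selected out of 139 scanned candidates:
print(f"selection efficiency = {88 / 139:.1%}")   # ~63.3%
```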
Effect of cuts on MC events • Efficiency of cuts on MC: 81797/128631=63.6%, which agrees well with data (assuming scan efficiency is high of course…)
Effect of cuts on cosmic muons
• Applied selection cuts to 1 month of cosmic events from the “all” stream
• This is not a completely fair comparison since different reconstruction algorithms are used for “all” and “spill”
• Numbers:
  • 1355680 events, 1141211 with 1 or more tracks
  • 85473 with reco_dircosneu>0.6
  • 1438 inside fid volume (mostly multiple muons, events with reco problems)
• Assuming 1 spill every 2 seconds and a 50 µs timing cut, expect 0.03 events/month to pass cuts. Scan of spill events shows no obvious cosmic background in selected sample.
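The 0.03 events/month figure follows from scaling the 1438 surviving cosmics by the fraction of wall-clock time covered by the spill timing window; a back-of-the-envelope check using only the numbers quoted above (the 2 s spill period and 50 µs window are the stated assumptions):

```python
# Back-of-the-envelope estimate of the cosmic-ray background in the spill window.
n_cosmics_per_month = 1438      # cosmics/month surviving the topology + fiducial cuts
spill_period_s = 2.0            # assumed: one spill every 2 seconds
timing_window_s = 50e-6         # -20 us to +30 us timing cut = 50 us

# Fraction of wall-clock time in which a cosmic can land inside the spill window
duty_factor = timing_window_s / spill_period_s      # 2.5e-5

expected_background = n_cosmics_per_month * duty_factor
print(f"expected cosmic background: {expected_background:.3f} events/month")
# prints ~0.036, consistent with the ~0.03 events/month quoted on the slide
```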
Why use a z<28m cut?
• The z < 28 m cut is motivated by physics rather than background considerations
• CC selection efficiency falls off rapidly above 28 m – do not observe a clear muon in these events…
Why use a z<28m cut?
• Trade-off between the energy resolution of selected events and statistics
• The optimum is a 28 m cut.
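One way to frame such an optimisation is a figure of merit that rewards statistics and penalises poor energy resolution; the sketch below is only an illustration under that assumption (the FoM definition and variable names are placeholders, not the procedure actually used in the talk):

```python
import numpy as np

def figure_of_merit(z_cut, z_vtx, e_true, e_reco):
    """Toy figure of merit for choosing the downstream z cut:
    sqrt(selected statistics) divided by the fractional energy resolution
    of the selected events.  Inputs are per-event arrays from the CC MC;
    the FoM definition itself is illustrative only."""
    sel = z_vtx < z_cut
    if np.count_nonzero(sel) < 2:
        return 0.0
    resolution = np.std((e_reco[sel] - e_true[sel]) / e_true[sel])
    return np.sqrt(np.count_nonzero(sel)) / resolution

# Scan candidate cut positions and take the maximum, e.g.
#   foms = {zc: figure_of_merit(zc, z_vtx, e_true, e_reco) for zc in range(20, 31)}
#   best_cut = max(foms, key=foms.get)   # the talk quotes 28 m as the optimum
```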
Events with trk.fit.pass=0
• What should we do about the ~5% of events that fail the trk.fit.pass criterion?
  • These events do not yield a q/p measurement, only a range measurement
  • They could also be “junk” tracks, but this variable isn’t always a good indicator of that
  • There is some flavour information in the events – perhaps simply counting their number would yield additional sensitivity?
  • I am investigating this…
(plots show vertex positions of failing NC and CC events – more failures around the coil hole)
Data/MC match-up
• The next several slides show comparisons between basic distributions for selected data and MC events (with the same cuts applied)
• Because of blinding/oscillations, we don’t expect all distributions to agree
  • Physics distributions will be distorted, but we should at least be able to check if there are any glaring pathologies (i.e. unphysical “spikes”)
  • Lower-level quantities (such as pulse height/plane) should be unaffected
• I have indicated the distributions that will be significantly affected by blinding/oscillations with the label “B”
• All plots are normalised to the same area.
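For reference, area normalisation here simply means scaling each MC histogram by the ratio of the data and MC integrals before overlaying; a minimal sketch of such a comparison plot (function and array names are placeholders, and matplotlib is assumed – the talk does not specify the plotting tools):

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_data_mc(data_vals, mc_vals, bins=40, xlabel="track pulse height / plane"):
    """Overlay a data and an MC distribution normalised to the same area,
    as is done for all of the comparison plots that follow."""
    counts_data, edges = np.histogram(data_vals, bins=bins)
    counts_mc, _ = np.histogram(mc_vals, bins=edges)
    # Scale the MC histogram so its integral matches the data
    scale = counts_data.sum() / counts_mc.sum() if counts_mc.sum() else 1.0
    centres = 0.5 * (edges[:-1] + edges[1:])
    plt.step(centres, counts_mc * scale, where="mid", label="MC (area normalised)")
    plt.errorbar(centres, counts_data, yerr=np.sqrt(counts_data), fmt="ko", label="data")
    plt.xlabel(xlabel)
    plt.ylabel("events")
    plt.legend()
    plt.show()
```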
Track vertex x-y projection • LHS plot should be uniform in x-y • RHS plot should be focussed towards origin for numu events and defocussed for numu_bar events
Track Vertices #2
Background could appear in these areas
Track endpoints
• No glaring discrepancies to report with these statistics…
Track fit parameters
• Track fitting performance seems very similar between data and MC
• Similar rate of failures, <5% (slightly lower in data than MC)
• Track q/p distribution distorted by blinding/oscillations… (B)
Track variables (B)
• Track length is (should be!) distorted
• Track ph/plane variables should be largely invariant
  • Slight excess of high ph/plane events in data?
• Track digits/plane slightly lower in data
  • Could be caused by lower per-plane tracking efficiency in the data
Shower reconstruction (B)
• Data showers tend to reconstruct more clusters with higher average energy
  • Could be an artifact of blinding
• However, shower pulse height per digit is also somewhat higher – this should be largely invariant to blinding/osc.
“Physics” quantities (B)
• Most of these will be distorted by blinding and oscillations
• However, the general shape of the distributions seems reasonable – there are no glaring “pathologies”…
Per-strip quantities • Track and shower ph/strip seem slightly (~5%) lower in data than MC.
Likelihood-based PDF (B)
• Optimal cut to separate CC and NC (for R1.18) is PID > –0.2
• Higher fraction of NC-like events in the data, which is largely due to the shorter event length distribution. This could be due to oscillations and/or blinding…
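For illustration, a likelihood-based PID of this general kind can be built as a sum of log-likelihood ratios of CC and NC PDFs for a few discriminating variables; the sketch below is an assumed construction (variable choice, binning, and names are hypothetical), with only the PID > –0.2 cut value taken from the slide:

```python
import numpy as np

def build_pdf(values, edges):
    """Normalised (density) histogram of one discriminating variable,
    built separately from CC and NC Monte Carlo samples."""
    counts, _ = np.histogram(values, bins=edges, density=True)
    return np.clip(counts, 1e-6, None)      # floor empty bins to avoid log(0)

def pid(event_vars, pdfs_cc, pdfs_nc, edges):
    """Sum of log-likelihood ratios over the input variables.
    Positive values are CC-like, negative values NC-like."""
    total = 0.0
    for name, x in event_vars.items():
        i = np.clip(np.digitize(x, edges[name]) - 1, 0, len(pdfs_cc[name]) - 1)
        total += np.log(pdfs_cc[name][i] / pdfs_nc[name][i])
    return total

# Selection as on the slide: keep events with pid(...) > -0.2 as CC candidates.
```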
Scanned events as a function of time
• The number of scanned neutrino events as a function of time follows the number of p.o.t. recorded
• The ratio of the two is constant
• Some corrections to p.o.t. (or # predicted events) due to:
  • p.o.t. recalibration (–4.2%, –4.8%)
  • LE to LE-10 renormalisation (–2.1%)
Selected events as a function of time Number of neutrino events selected by cuts also follows p.o.t. distribution fairly closely.
How many anti-neutrinos?
• A quick look with the following cuts:
  • Event passes selection cuts
  • trk.fit.pass == 1
  • Track length > 30 planes
  • σ(q/p)/(q/p) < 0.3
• See 41 nu and 3 nu-bar
• Expected ratio in absence of oscillations and blinding (which should not have too large an effect) is ~10:1
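A minimal sketch of how that counting might look in analysis code, assuming the sign of the fitted q/p gives the muon charge (the branch names used here are illustrative, not the actual ntuple variables):

```python
import numpy as np

def count_nu_nubar(events, passes_selection):
    """Count neutrino (mu-, q/p < 0) and anti-neutrino (mu+, q/p > 0)
    candidates among well-measured tracks, using the cuts listed above.
    `events` is a dict of per-event numpy arrays with illustrative names."""
    qp = events["trk_fit_qp"]
    sigma_qp = events["trk_fit_sigma_qp"]
    good = (passes_selection
            & (events["trk_fit_pass"] == 1)            # successful track fit
            & (events["trk_nplanes"] > 30)             # track length > 30 planes
            & (np.abs(sigma_qp / qp) < 0.3))           # well-measured charge sign
    n_nu = np.count_nonzero(good & (qp < 0))
    n_nubar = np.count_nonzero(good & (qp > 0))
    return n_nu, n_nubar   # the slide reports 41 nu and 3 nu-bar candidates
```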
Summary and conclusions
• Beam neutrino events are very distinctive and it is quite easy to isolate them with a set of simple timing/topology cuts
• The backgrounds (at least in the track sample) appear to be small and can probably be reduced further. This may allow us to use a larger fiducial region for the analysis
• So far, the match-up of data and MC shows no large discrepancies:
  • At least as far as we can tell with these statistics and with blinding/oscillations
  • There are some small differences in “invariant” quantities, which should be investigated further, but no pathological effects have been observed