CC energy analysis working group report
D.A. Petyt, J. Nelson, J. Thomas
March ’03

CC energy analysis:
• Our main tool to validate the neutrino oscillation hypothesis and measure the mixing parameters.
• Ultimate sensitivity depends on controlling beam systematics and the near–far energy calibration at the few-% level.

Near-term issues
• Clear definition of physics goals
• Formulation of a 5-year plan
• Physics capabilities versus p.o.t.
• Recalculation of sensitivities with more realistic reconstruction and an improved estimate of beam (and other) systematics
Structure of this talk
• Physics goals for the CC energy analysis
• Current status of sensitivity calculations – assumptions and limitations
• Thoughts on a 5-year plan
• Beam systematics – current status and impact of MIPP
• Two short talks on current analysis work:
  • Chris Smith on QEL events (15 min)
  • D.P. on NC/CC separation (15 min)
• Near-term goals and list of tasks
• Longer-term goals
Physics goals for the CC energy analysis
The major goals of the CC energy analysis are:
• To observe a statistically significant (>5σ) distortion in the CC energy spectrum.
• To verify that the νμ survival probability follows the predicted functional form, i.e. sin²(1.27Δm²L/E):
  • See a ‘dip and rise’ in the energy spectrum ratio
  • Exclude non-standard oscillation hypotheses (ν decay, extra dimensions) that provide a plausible fit to S-K data at >99% C.L.
• To make a precision (<10%) measurement of the oscillation parameters Δm²₂₃ and sin²2θ₂₃. Is θ₂₃ maximal?
The importance of making a precision measurement of the oscillation parameters:
• Future running scenarios – choice of beam energy
• Applying background corrections to other analyses (e.g. CC background in the NC sterile-ν search)
• Future experiments – optimum L/E for an off-axis detector, optimum baseline for a maximal CP-violating effect.
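The predicted functional form above can be sketched numerically. A minimal two-flavour survival probability, assuming the 735 km Fermilab-to-Soudan baseline and illustrative parameter values:

```python
import math

def survival_prob(e_nu, dm2=2.5e-3, sin2_2theta=1.0, baseline_km=735.0):
    """Two-flavour nu_mu survival probability:
    P = 1 - sin^2(2 theta) * sin^2(1.27 * dm2 * L / E),
    with E in GeV, dm2 in eV^2 and L in km (735 km is the
    approximate MINOS baseline)."""
    arg = 1.27 * dm2 * baseline_km / e_nu
    return 1.0 - sin2_2theta * math.sin(arg) ** 2

# The 'dip' of the spectrum ratio sits at the first oscillation maximum,
# where 1.27 * dm2 * L / E = pi/2 -- around 1.5 GeV for these parameters.
dip_energy = 1.27 * 2.5e-3 * 735.0 / (math.pi / 2.0)
```

The ‘rise’ below the dip is what distinguishes oscillations from the monotonic damping of the exotic hypotheses discussed later.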
PAC questions
December 11, 2002: “[…] we and the PAC would like to see an update of the goals of the experiment in the light of recent developments and of the need to express goals in a quantifiable manner. We have in mind, for example, a plot with Δm²atmos versus number of protons on target. The several contours might show:
• 95% exclusion of the neutrino decay hypothesis;
• 3σ preference of the oscillation hypothesis versus the decay hypothesis;
• 5σ dip in the oscillation curve…”
These questions need to be addressed by this analysis group.
Current sensitivities
• Perform a χ² fit between the oscillated far-detector spectrum, with parameters (Δm², sin²2θ), and the unoscillated far spectrum.
• True energies are smeared by parameterised resolution functions
• Event selection is based on simple cuts on event length, Hough transform and ph/plane variables
• Assume the following systematic errors:
  • 2% overall flux uncertainty
  • 2% bin-to-bin flux uncertainty (1 GeV bins)
  • 2% overall CC efficiency uncertainty + (2−Eν)×1.5% below 2 GeV
• The 90% C.L. allowed region is defined by χ²min + 4.61
Issues for ‘fast MC’ analyses
Two reasons why energy smearing may be underestimated in previous studies:
• Smearing true shower energies neglects the rest mass of final-state particles
• Nuclear effects (pion re-absorption) are not accounted for
These could be taken into account by using the reconstructed energies of GMINOS events, although uncertainties in nuclear effects will be a source of systematic error.
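A toy illustration of the smearing step in question, assuming a generic calorimetric resolution (the 55%/√E width is a placeholder, not the MINOS parameterisation):

```python
import random

def smear_energy(e_true, rng, stochastic=0.55):
    """Smear a true (shower) energy with a Gaussian of width
    sigma = stochastic * sqrt(E), truncated at zero.  This is exactly
    the step that misses final-state rest masses and nuclear effects
    such as pion re-absorption, which shift the mean of the response,
    not just its width."""
    sigma = stochastic * e_true ** 0.5
    return max(0.0, rng.gauss(e_true, sigma))
```

Because a pure width has no bias, any mean shift from nuclear effects has to come from elsewhere, e.g. the GMINOS reconstructed energies mentioned above.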
5-year planning
Future planning questions:
• What is the optimum strategy to achieve our physics goals in terms of detector exposure and beam energy?
  • Clearly a function of Δm². For instance, what do we do if Δm² is low? Run longer? Try to increase the flux of low-energy neutrinos in the beam?
• What beam energy should we start with? LE is optimal for measuring Δm², but is it worth running at higher energy for a short period to check systematics?
• Low-resolution Δm² measurement (to indicate which beam is optimal). How many p.o.t. do we need to indicate whether Δm² is ‘high’ (>4×10⁻³ eV²), ‘nominal’ (~2.5×10⁻³) or ‘low’ (<1.5×10⁻³)? Simplified analyses (no calibration)? Could a rate test provide enough discrimination?
• What about other beam options that may be beneficial to other MINOS analyses?
  • Beam plug – a help or a hindrance?
  • Anti-neutrino running – a search for CPT violation?
Parameter errors as a function of Δm²
• Measure the extent of the 90% C.L. contours assuming maximal mixing and an exposure equivalent to 2 years at the baseline proton intensity
• Measurement errors at the Super-K best-fit point: ~12%
• Cross-over point between Ph2le and Ph2me sensitivity: Δm² = 0.005 eV²
Parameter errors as a function of p.o.t.
[Plot: parameter errors versus protons on target, scaling roughly as 1/√N; exposures of 1 year and 5 years at baseline intensity are marked.]
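Statistics-dominated parameter errors shrink roughly as 1/√N with accumulated protons on target, so a measured error at one exposure can be extrapolated to another. A minimal sketch (the numbers are purely illustrative):

```python
import math

def scaled_error(error_ref, pot_ref, pot):
    """Extrapolate a statistics-dominated parameter error from a
    reference exposure (pot_ref protons on target) to another exposure,
    using the 1/sqrt(N) scaling.  Ignores the systematic floor, which
    does not shrink with statistics."""
    return error_ref * math.sqrt(pot_ref / pot)

# e.g. a ~12% error after a 2-year baseline exposure would shrink to
# roughly 12% * sqrt(2/5) if 5 years were purely statistics-limited.
```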
Optimal beam energy – estimating Δm²
• The Ph2le spectrum provides clear discrimination between ‘low’ and ‘high’ Δm². The χ² maximum occurs at 0.005 eV².
Optimal beam energy – estimating Δm²
• Discrimination between low and high Δm² can be made with a relatively short exposure. A 2-month run should be sufficient to indicate whether LE or ME is the optimum beam energy
• Statistical errors dominate, so the requirements on beam systematics and calibration can be less stringent
Beam systematics
• Predict the far spectrum by applying a transfer function to a measurement of the spectrum in the near detector.
• Variations in this function depend primarily on pion-production uncertainties.
• At low energy (<6 GeV) in Ph2le, the F/N ratio closely follows the pion lifetime and model predictions agree to 2%
• Large deviations are observed above 6–7 GeV
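The transfer step itself is simple once the transfer function is in hand; a sketch, assuming it is expressed as a per-energy-bin far/near ratio (names and values hypothetical):

```python
def predict_far_spectrum(near_spectrum, far_over_near):
    """Predict the far-detector energy spectrum by multiplying the
    measured near-detector spectrum, bin by bin, by the F/N transfer
    ratio.  The pion-production uncertainty lives in far_over_near:
    switching production models changes this ratio, mostly above
    ~6 GeV for Ph2le."""
    return [n * r for n, r in zip(near_spectrum, far_over_near)]
```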
Ph2me uncertainties
• Ph2me spectrum predictions are good up to ~10 GeV.
• Ph2le provides better sensitivity to oscillations in the Super-K range, but is it worth running for a short period at higher energy to measure the ‘unoscillated’ portion of the energy spectrum?
• Is there a clear physics benefit to doing this, or is it just a sanity check? Do we lose anything by restricting the Ph2le fit to low energies, where the systematics are small?
Ph2le goodness-of-fit test
[Plots: one experiment generated with Malensek and fitted with GFLUKA; χ² distributions for 1000 experiments.]
• Poor agreement above 5 GeV
• Can we claim to have seen oscillations if we don’t understand the high-energy portion of our spectrum?
Parameter sensitivity with/without systematics
• Take the differences between 3 model predictions of the far spectrum as an estimate of the systematic error (neglecting Malensek)
• This is probably an underestimate of the true error.
• Assuming the ‘hose’ curve represents the low-statistical-error limit, the main effects of systematics appear at low Δm².
Exotics 1: Neutrino decay
• S-K data fit well to the νμ→ντ oscillation hypothesis, but other models, which have a different L/E dependence, can also fit the data.
• Can MINOS rule out these other models using the CC energy test?
[Plots: ν decay survival probability compared with the no-oscillation (high Δm²) and oscillation (low Δm², Δm² = 0.003) curves; χ² difference between the best fit to the standard oscillation hypothesis and the ν decay hypothesis, for 1000 experiments generated assuming ν decay.]
Exotics 2: Extra dimensions
• The oscillation fit can provide an acceptable χ² in some cases; spurious signals tend to occur at large values of Δm² and low values of sin²2θ (see bottom-left plot)
[Plots: extra-dimensions survival probability compared with the no-oscillation, oscillation (Δm² = 0.003) and ν decay curves; location of the false minima in oscillation parameter space.]
Reports on recent work
Two short talks on recent CC energy analysis studies:
• Chris Smith on parameter measurement using quasi-elastic events
• D.P. on NC/CC separation in the framework
Analysis issue: NC contamination
• Mis-identified NC events tend to congregate at low visible (reconstructed) energies.
• This is where the oscillation signal is located for low values of Δm². Large NC contamination in these energy bins will degrade sensitivity and obscure any ‘rise’ of the spectrum ratio due to oscillations.
• Current sensitivity calculations do not subtract the NC component before fitting the CC spectrum. Doing so would improve the parameter sensitivity, but would also introduce an additional systematic error.
• There is presumably a trade-off between high CC efficiency at low energy and low NC contamination. This needs to be quantified in order to determine the optimum selection of CC events.
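The subtraction described above could look like this sketch; the 10% uncertainty on the NC prediction is a placeholder, included only to show how the extra systematic enters:

```python
def subtract_nc(cc_like_spectrum, nc_prediction, nc_frac_err=0.10):
    """Subtract the predicted mis-identified NC component from the
    selected CC-like spectrum, bin by bin.  Returns the corrected
    spectrum and the per-bin systematic error the subtraction
    introduces (an assumed fractional uncertainty on the NC
    prediction -- illustrative value)."""
    corrected = [m - b for m, b in zip(cc_like_spectrum, nc_prediction)]
    sys_err = [nc_frac_err * b for b in nc_prediction]
    return corrected, sys_err
```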
Selection efficiencies as a function of Eν and y
[Plots: selection efficiencies versus Eν and y; NC events passing the event-length cut.]
Reconstructed energy distributions
[Plot: reconstructed energy distributions for unoscillated CC, oscillated CC (Δm² = 0.0025) and mis-identified NC events.]
Update on NC/CC separation
D.A. Petyt, March ’03
• At the previous phone meeting I presented a method to separate NC/CC using simple cuts on reconstructed quantities available in eventsr.root, summarised in the next couple of slides.
• At low neutrino energy there is large overlap between the CC and NC distributions – likelihood or neural-net methods are likely to provide better separation for these events.
• I have looked into a likelihood-based analysis and present the first results here.
• Ultimately, new (improved) efficiencies will replace the old reco_minos-derived efficiencies that are currently used in the CC energy sensitivity calculations.
Cut-based NC/CC separation
• The event-length cut is efficient for separating NC/CC at high energy (histogram). Adding a ‘track fraction’ cut is an improvement over the old reco_minos analysis.
• Alternative cuts for short events can increase efficiency at low energy (points), at the expense of increased NC background.
• There is limited scope for improvement at low energies – large overlap between the CC and NC distributions
Comparing old and new efficiencies
[Plots: histograms – new efficiencies; points – old RECO_MINOS efficiencies.]
Likelihood-based separation
• Analysis 2: likelihood-based separation of CC and NC, using the preceding distributions as PDFs
Constructing the likelihood
• Calculate the probability that a given event comes from the CC or NC distributions.
• Calculate a PID parameter (following Super-K) based on the difference in log-likelihood between the CC and NC hypotheses.
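A sketch of the PID construction, with the PDFs represented as hypothetical lookup callables, one per discriminating variable (the variable names below are placeholders):

```python
import math

def pid_parameter(event, cc_pdfs, nc_pdfs, floor=1e-10):
    """Super-K-style PID: log L(CC) - log L(NC), where each likelihood
    is the product of 1-D PDF values for the event's variables.
    `event` maps variable name -> measured value; cc_pdfs/nc_pdfs map
    variable name -> PDF callable.  A small floor avoids log(0) in
    empty PDF bins.  CC-like events come out positive, NC-like
    negative."""
    ll_cc = sum(math.log(max(cc_pdfs[v](x), floor)) for v, x in event.items())
    ll_nc = sum(math.log(max(nc_pdfs[v](x), floor)) for v, x in event.items())
    return ll_cc - ll_nc
```

Note that forming the product of 1-D PDFs assumes the variables are independent; the correlation slides at the end show why that throws some discrimination away.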
Efficiencies for 2 values of the PID cut
[Plots: efficiencies for the cut-based and likelihood-based selections.]
• The likelihood analysis with this PID cut appears slightly better than the cut-based analysis
Signal efficiency vs background rejection
• The PID cut can be tuned for whatever combination of signal efficiency and background rejection you want. What is the optimum?
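Choosing the operating point is just a scan over candidate cut values. A sketch, assuming lists of PID values for MC CC and NC samples (hypothetical inputs):

```python
def efficiency_rejection_scan(cc_pids, nc_pids, cut_values):
    """For each candidate PID cut, compute the CC signal efficiency
    (fraction of CC events above the cut) and the NC background
    rejection (fraction of NC events at or below it).  Which point
    is 'optimum' depends on what the downstream oscillation fit is
    most sensitive to, e.g. NC contamination in the lowest bins."""
    points = []
    for cut in cut_values:
        eff = sum(1 for p in cc_pids if p > cut) / len(cc_pids)
        rej = sum(1 for p in nc_pids if p <= cut) / len(nc_pids)
        points.append((cut, eff, rej))
    return points
```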
Correlations 1: mean ph/plane vs track-like fraction
(LHS: νμ CC; RHS: NC)
• The parameters are correlated – possible extra discrimination is thrown away when forming a product of three 1-D PDFs
Correlations 3: track-like fraction vs track planes
• Note the double peak in the νμ CC distribution