CC energy analysis working group report
D.A. Petyt, J. Nelson, J. Thomas, March '03

CC energy analysis:
• Our main tool to demonstrate the existence of neutrino oscillations and measure the mixing parameters.
• Ultimate sensitivity depends on controlling beam systematics and near/far energy calibration at the few-% level.

Near-term issues:
• Clear definition of physics goals
• Formulation of a 5 year plan
• Recalculation of sensitivities with more realistic reconstruction and an improved estimate of beam systematics
• Physics capabilities versus p.o.t.
Structure of this talk
• Physics goals for the CC energy analysis
• Current status of sensitivity calculations: assumptions and limitations
• Thoughts on a 5 year plan
• Beam systematics: current status and impact of MIPP
• 2 short talks on current analysis work:
  • Chris Smith on QEL events (15 min)
  • D.P. on NC/CC separation (15 min)
• Near-term goals and list of tasks
• Longer term goals
Physics goals for the CC energy analysis
The major goals of the CC energy analysis are:
• To observe a statistically significant (>5σ) distortion in the CC energy spectrum.
• To demonstrate that the νμ survival probability follows the predicted functional form (i.e. sin²(1.27Δm²L/E)):
  • See a 'dip and rise' in the energy spectrum ratio
  • Exclude non-standard oscillation hypotheses (ν decay, extra dimensions) that provide a plausible fit to S-K data at >99% C.L.
• To make a precision (<10%) measurement of the oscillation parameters Δm²₂₃ and sin²2θ₂₃. Is θ₂₃ maximal?
The importance of making a precision measurement of the oscillation parameters:
• Future running scenarios: choice of beam energy
• Applying background corrections to other analyses (e.g. CC background in the NC sterile-ν search)
• Future experiments: optimum L/E for an off-axis detector, optimum baseline for the maximal CP-violating effect.
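The 'dip and rise' above follows from the two-flavour survival probability quoted on the slide. A minimal numerical sketch (the 735 km MINOS baseline is real; the parameter values and energy grid are illustrative):

```python
import numpy as np

def p_survival(e_nu, dm2, sin2_2theta, baseline_km=735.0):
    """nu_mu survival probability P = 1 - sin^2(2theta) sin^2(1.27 dm^2 L / E).

    e_nu in GeV, dm2 in eV^2, baseline in km (735 km for MINOS).
    """
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2 * baseline_km / e_nu) ** 2

# The first oscillation minimum ('dip') sits where 1.27 dm^2 L / E = pi/2
e = np.linspace(0.5, 10.0, 200)
p = p_survival(e, dm2=2.5e-3, sin2_2theta=1.0)
e_dip = 1.27 * 2.5e-3 * 735.0 / (np.pi / 2.0)   # roughly 1.5 GeV
```

At maximal mixing the survival probability reaches zero at the dip and returns to unity at high energy, which is the 'dip and rise' in the spectrum ratio.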
Current sensitivities
• Perform a χ² fit between the oscillated far-detector spectrum with parameters (Δm², sin²2θ) and the unoscillated far spectrum.
• True energies smeared by parameterised resolution functions
• Assume the following systematic errors:
  • 2% overall flux uncertainty
  • 2% bin-to-bin flux uncertainty (1 GeV bins)
  • 2% overall CC efficiency uncertainty + (2−Eν)×1.5% below 2 GeV
• 90% C.L. allowed region is defined by χ²min + 4.61
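As a rough illustration of the fit described above, the sketch below builds a bin-by-bin χ² between an 'observed' oscillated spectrum and a prediction, folding the 2% bin-to-bin flux uncertainty into the variance. The spectrum shape and normalisation are invented for illustration, not taken from the beam MC:

```python
import numpy as np

def chi2(observed, predicted, sys_frac=0.02):
    """Bin-by-bin chi^2: statistical (Poisson) variance plus a 2%
    bin-to-bin flux systematic added in quadrature."""
    var = predicted + (sys_frac * predicted) ** 2
    return float(np.sum((observed - predicted) ** 2 / var))

def oscillate(spec, e, dm2, sin2_2theta=1.0, L=735.0):
    """Apply the two-flavour survival probability to a spectrum."""
    return spec * (1.0 - sin2_2theta * np.sin(1.27 * dm2 * L / e) ** 2)

e_bins = np.arange(0.5, 20.5, 1.0)           # 1 GeV bins (bin centres)
unosc = 1000.0 * np.exp(-e_bins / 5.0)       # hypothetical far spectrum
data = oscillate(unosc, e_bins, dm2=2.5e-3)  # 'observed', no fluctuations

# chi^2 scan over dm2 at maximal mixing: minimum (zero here) at the truth;
# the 90% C.L. region would be bounded by chi2_min + 4.61 (2 parameters)
scan = {d: chi2(data, oscillate(unosc, e_bins, d)) for d in (1.5e-3, 2.5e-3, 4.0e-3)}
```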
Issues for 'fast MC' analyses
Two reasons why energy smearing may be underestimated in previous studies:
• Smearing true shower energies neglects the rest mass of final-state particles
• Nuclear effects (pion re-absorption) are not accounted for
These could be taken into account by using the reconstructed energies of GMINOS events, although uncertainties in nuclear effects will be a source of systematic error.
5 year planning
Future planning:
• What is the optimum strategy to achieve our physics goals in terms of detector exposure and beam energy?
  • Clearly a function of Δm². For instance, what do we do if Δm² is low? Run longer? Try to increase the flux of low-energy neutrinos in the beam?
• What beam energy should we start with? LE is optimal for measuring Δm², but is it worth running at higher energy for a short period to check systematics?
• Low-resolution Δm² measurement (to indicate which beam is optimal). How many p.o.t. do we need to indicate whether Δm² is 'high' (>4×10⁻³ eV²), 'nominal' (~2.5×10⁻³ eV²) or 'low' (<1.5×10⁻³ eV²)? Simplified analyses (no calibration)? Could a rate test provide enough discrimination?
• What about other beam options that may be beneficial to other MINOS analyses?
  • Beam plug: a help or a hindrance?
  • Anti-neutrino running: search for CPT violation?
Parameter errors as a function of Δm²
• Measure the extent of the 90% C.L. contours assuming maximal mixing and an exposure equivalent to 2 years at the baseline proton intensity
• Measurement errors at the Super-K best-fit point: ~12%
• Cross-over point between Ph2le and Ph2me sensitivity: Δm² = 0.005 eV²
Parameter errors as a function of p.o.t.
[Plot: parameter error vs. p.o.t. ('Error ∝ 1/N' scaling annotation), with the '1 yr, baseline' and '5 yr, baseline' exposures marked]
Optimal beam energy: estimating Δm²
• The Ph2le spectrum provides clear discrimination between 'low' and 'high' Δm². The χ² maximum occurs at 0.005 eV².
Optimal beam energy: estimating Δm²
• Discrimination between low and high Δm² can be made with a relatively short exposure. A 2 month run should be sufficient to indicate whether LE or ME is the optimum beam energy
• Statistical errors dominate, so requirements on beam systematics and calibration can be less stringent
Beam systematics
• Predict the far spectrum by applying a transfer function to the measured near-detector spectrum.
• Variations in this function depend primarily on pion-production uncertainties.
• At low energy (<6 GeV) in Ph2le, the F/N ratio closely follows the pion lifetime, and model predictions agree to 2%
• Large deviations are observed above 6-7 GeV
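The extrapolation described above is schematically a bin-by-bin product of the near spectrum and an F/N transfer function. In the sketch below the spectrum shape, the transfer-function values, and the 2% model spread above 6 GeV are placeholders, not beam-MC numbers:

```python
import numpy as np

def predict_far(near_spectrum, fn_ratio):
    """Far-spectrum prediction: measured near spectrum times the
    beam-MC far/near (F/N) transfer function, bin by bin."""
    return near_spectrum * fn_ratio

e = np.arange(0.5, 20.5, 1.0)                        # 1 GeV bins
near = 1e6 * np.exp(-e / 5.0)                        # hypothetical near spectrum
fn_model_a = np.full_like(e, 1.5e-6)                 # placeholder F/N, model A
fn_model_b = fn_model_a * (1.0 + 0.02 * (e > 6.0))   # model B: 2% high above 6 GeV

# The spread between hadron-production models sets the systematic
# envelope on the far prediction: zero below 6 GeV, 2% above, here
far_a = predict_far(near, fn_model_a)
far_b = predict_far(near, fn_model_b)
envelope = np.abs(far_b - far_a) / far_a
```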
Ph2me uncertainties
• Ph2me spectrum predictions are good up to ~10 GeV.
• Ph2le provides better sensitivity to oscillations in the Super-K range, but is it worth running for a short period at higher energy to measure the 'unoscillated' portion of the energy spectrum?
  • Is there a clear physics benefit to doing this, or is it just a sanity check? Do you lose if you restrict the Ph2le fit to low energies where the systematics are small?
Ph2le goodness-of-fit test
• 1 expt: generate with Malensek, fit with GFLUKA; χ² distributions for 1000 experiments
• Poor agreement above 5 GeV
• Can we claim to have seen oscillations if we don't understand the high-energy portion of our spectrum?
Parameter sensitivity with/without systematics
• Take the differences between 3 model predictions of the far spectrum as an estimate of the systematic error (neglect Malensek)
• This is probably an underestimate of the true error.
• Assuming the 'hose' curve represents low statistical error, the main effects of systematics are at low Δm².
Exotics 1: Neutrino decay
• S-K data fit the νμ→ντ oscillation hypothesis well, but other models, which have a different L/E dependence, can also fit the data.
• Can MINOS rule out these other models using the CC energy test?
[Plot labels: 'ν decay prob.', '(high Δm²)', 'No osc.', '(low Δm²)', 'Osc. Δm² = 0.003', 'ν decay']
• Generate 1000 expts assuming ν decay. Plot the χ² difference between the best fit to standard oscillations and the ν-decay hypothesis
Exotics 2: Extra dimensions
• An oscillation fit can provide an acceptable χ² in some cases; spurious signals tend to occur at large values of Δm² and low values of sin²2θ (see bottom left plot)
[Plot labels: 'Extra dimensions prob.', 'No osc.', 'Osc. Δm² = 0.003', 'ν decay', 'Location of false minima in osc. parameter space']
Systematics after MIPP • Slides from Mark go here
Reports on recent work • Two short talks on recent CC energy analysis studies: • Chris Smith on parameter measurement using quasi-elastic events • D.P. on NC/CC separation in the framework
Analysis issue: NC contamination
• Mis-identified NC events will tend to congregate at low visible (reconstructed) energies.
• This is where the oscillation signal is located for low values of Δm². Large NC contamination in these energy bins will degrade sensitivity and obscure any 'rise' of the spectrum ratio due to oscillations.
• Current sensitivity calculations do not subtract the NC component before fitting the CC spectrum. Doing so will improve the parameter sensitivity, but also introduce an additional systematic error.
• There is presumably a trade-off between high CC efficiency at low energy and low NC contamination. We need to quantify this in order to determine the optimum selection of CC events.
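To see why an un-subtracted NC component matters, note that the NC rate is unchanged by νμ→ντ oscillations, so it adds equally to the oscillated and unoscillated spectra and pulls their ratio toward unity. A toy one-bin example (all numbers invented):

```python
def observed_ratio(cc_osc, cc_unosc, nc_bkg):
    """Oscillated/unoscillated spectrum ratio in one energy bin when a
    mis-identified NC background is left in both numerator and denominator."""
    return (cc_osc + nc_bkg) / (cc_unosc + nc_bkg)

# A deep oscillation dip (true CC ratio 0.2) is badly diluted by NC events
clean = observed_ratio(20.0, 100.0, 0.0)    # 0.2: the true CC ratio
dirty = observed_ratio(20.0, 100.0, 50.0)   # ~0.47: the dip is partly filled in
```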
Selection efficiencies as a function of Eν and y
[Plot: NC events passing the event-length cut]
Reconstructed energy distributions
[Plot legend: unoscillated CC; oscillated CC (Δm² = 0.0025 eV²); mis-identified NC]
Recent work on NC/CC separation • Analysis 1: repeat old cut-based separation with new C++ reconstruction framework • Event length cut + ‘track fraction’ cuts are effective for high energy events
Recent work on NC/CC separation • Alternative cuts for short events can increase efficiency at low energy. • Limited scope for improvement – large overlap between CC and NC distributions
Recent work on NC/CC separation
• Histograms: new efficiencies; points: old RECO_MINOS efficiencies
Recent work on NC/CC separation • Analysis 2: Likelihood based separation of CC and NC. Use distributions as PDFs
Recent work on NC/CC separation
• Calculate the probability that a given event comes from the CC or NC distributions
• Calculate a PID parameter (following Super-K) based on the difference in log likelihood between the CC and NC hypotheses
[Plots: CC and NC likelihood distributions]
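A minimal version of such a likelihood PID might look like the sketch below, using normalised histograms as 1D pdfs and assuming the variables are independent (as in the 1D-pdf version of the analysis). The toy 'track length' distributions and all numbers are invented:

```python
import numpy as np

def make_pdf(sample, edges):
    """Normalised histogram used as a pdf, floored to avoid log(0)."""
    h, _ = np.histogram(sample, bins=edges, density=True)
    return np.maximum(h, 1e-6)

def log_like(values, pdfs, edges):
    """Sum of log pdf values over the event variables (independence assumed)."""
    total = 0.0
    for x, h in zip(values, pdfs):
        i = int(np.clip(np.searchsorted(edges, x) - 1, 0, len(h) - 1))
        total += np.log(h[i])
    return total

def pid(values, cc_pdfs, nc_pdfs, edges):
    """Super-K-style PID: log-likelihood difference; > 0 favours CC."""
    return log_like(values, cc_pdfs, edges) - log_like(values, nc_pdfs, edges)

rng = np.random.default_rng(1)
edges = np.linspace(0.0, 100.0, 21)
# Toy 'track length' pdfs: CC events have long muon tracks, NC short ones
cc_pdfs = [make_pdf(rng.normal(60.0, 10.0, 10000), edges)]
nc_pdfs = [make_pdf(rng.normal(15.0, 10.0, 10000), edges)]
```

Cutting on the PID value then trades CC efficiency against NC contamination, which is the trade-off quantified in the efficiency plots.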
Recent work on NC/CC separation Parameters are correlated – 2D (3D) pdfs?
Recent work on NC/CC separation
• 2D (ph. per plane vs. track fraction) × 1D (track length) pdfs
Recent work on NC/CC separation
[Plots: 3D pdf distributions for CC and NC]
Recent work on NC/CC separation 3D pdf performance
Recent work on NC/CC separation
[Plot legend: 1D pdfs; 2D × 1D pdfs; 3D pdf]
• The 3D pdf appears to provide by far the best discrimination
• Caveat: the same events were used to define the pdfs and to determine the efficiencies
Recent work on NC/CC separation
• Evaluate efficiencies with an independent sample of events: the 3D efficiency falls apart, since the 50×50×50 pdf is under-populated
Recent work on NC/CC separation
[Plots: 3D pdfs with 50×50×50 bins (same sample), 20×20×20 bins (same sample), 20×20×20 bins (independent sample)]
• Fewer bins reduce the low-energy CC efficiency. Large-statistics MC is required to make 3D pdfs work
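The binning problem above is just counting: when the number of pdf bins far exceeds the number of MC events, most bins are empty and efficiencies measured on the training sample do not generalise. A back-of-the-envelope occupancy check (the MC sample size is a made-up example, not the actual GMINOS statistics):

```python
def occupancy(n_events, bins_per_axis, n_dims=3):
    """Average MC events per pdf bin for an n-dimensional histogram pdf."""
    return n_events / bins_per_axis ** n_dims

n_mc = 100_000                  # hypothetical MC sample size
occ_50 = occupancy(n_mc, 50)    # 50x50x50: under 1 event/bin, under-populated
occ_20 = occupancy(n_mc, 20)    # 20x20x20: ~12 events/bin, workable
```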
Near-term priorities
• Improved systematics
  • Sensitivities, especially as a function of p.o.t., depend critically on beam systematics.
  • Calibration errors on the neutrino energy scale
• Improved reconstruction
  • Use reconstructed GMINOS events rather than toy MC events smeared by resolution functions
  • Calculate sensitivities using improved CC selection efficiencies based on new studies.
• PAC questions
• Most, if not all, of these are within our group's remit
Near-term 'Hot Topics'
1) CC/NC separation techniques (Petyt, Athens, Smith?)
  • What is the optimum technique to separate CC and NC?
    • Simple cuts
    • Likelihood
    • Neural networks
  • What is the trade-off between CC efficiency and NC inefficiency?
2) Track fitting / resolution studies (Cambridge, Piteira, Avvakumov?)
  • What momentum resolutions do we obtain with the existing track fitter(s)? How do they compare to our canonical resolution functions?
3) Shower finding / total energy measurements (Cambridge?)
  • How well do we reconstruct showers in the framework? What tuning of the algorithm is required? Alternative shower finders?
  • Estimating the shower energy. Total energy measurement. Reconstruction biases? (for example, due to mis-reconstructed muon tracks)
4) Comparison of near/far reconstruction
5) Using CalDet data for this analysis (Thomas?)
6) How do we predict the far detector spectrum?
  • Better understanding of beam systematics
  • Defining the systematic error envelope for the far spectrum
Longer-term goals • Selected topics from Jeff’s list