A comparative look at statistical issues in particle physics and astrophysics: similarities, differences, and what each field can learn from the other. Topics include parameter determination, hypothesis testing, Bayesian versus frequentist approaches, the things astrophysicists do particularly well (visualisation, large data sets, sharing data), and lessons particle physicists have learnt (statistical rules of thumb, blind analyses, the Bayesian–frequentist distinction).
Perspective on Astrostatistics from Particle Physics
Louis Lyons, Particle Physics, Oxford (CDF experiment, Fermilab)
l.lyons@physics.ox.ac.uk
SCMA IV, Penn State, 15th June 2006
Topics • Basic Particle Physics analyses • Similarities between Particles and Astrophysics issues • Differences • What Astrophysicists do particularly well • What Particle Physicists have learnt • Conclusions
Typical Analysis
Parameter determination: dn/dt = (1/τ) exp(−t/τ). Worry about backgrounds, t resolution, t-dependent efficiency.
1) Reconstruct tracks
2) Select real events
3) Select wanted events
4) Extract the decay time t from the decay length L and the velocity v
5) Model signal and background
6) Likelihood fit for the lifetime and its statistical error (see the sketch below)
7) Estimate the systematic error
Result: τ ± στ(stat) ± στ(syst), plus a goodness-of-fit check
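As an illustration of steps 5)–6), here is a minimal sketch (not the actual CDF analysis; the data, sample size and true lifetime are invented) of an unbinned maximum-likelihood fit for τ from the decay law dn/dt = (1/τ) exp(−t/τ):

```python
# Minimal sketch of an unbinned ML lifetime fit; all numbers are invented,
# and a real analysis would also model background, resolution and efficiency.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
t = rng.exponential(scale=1.5, size=1000)    # stand-in for measured decay times

def nll(tau):
    """Negative log-likelihood for dn/dt = (1/tau) exp(-t/tau)."""
    return len(t) * np.log(tau) + np.sum(t) / tau

fit = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded")
tau_hat = fit.x                              # analytically this is just t.mean()
stat = tau_hat / np.sqrt(len(t))             # exponential: sigma_tau = tau/sqrt(N)
print(f"tau = {tau_hat:.3f} +- {stat:.3f} (stat)")
```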
Typical Analysis
Hypothesis testing: peak or statistical fluctuation?
Why 5σ?
• Past experience
• Look-elsewhere effect
• Bayes priors
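For scale, 5σ corresponds to a one-sided Gaussian tail probability (the usual particle-physics convention) of about 3 × 10⁻⁷:

```python
# One-sided Gaussian tail probabilities for 3 and 5 sigma.
from scipy.stats import norm

for nsigma in (3, 5):
    print(f"{nsigma} sigma -> p = {norm.sf(nsigma):.2e}")
# 3 sigma -> p = 1.35e-03 ; 5 sigma -> p = 2.87e-07
```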
Similarities • Large data sets {ATLAS: event = Mbyte; total = 10 Pbytes} • Experimental resolution • Acceptance • Systematics • Separating signal from background • Parameter estimation • Testing models (versus alternative?) • Search for signals: Setting limits or Discovery • SCMA and PHYSTAT
Differences
• Bayes or Frequentism?
• Background
• Specific astrophysics issues: time dependence, spatial structures, correlations, non-parametric methods, visualisation, cosmic variance
• Blind analyses
Bayesian versus Frequentism
[Two-slide comparison table contrasting the Bayesian and Frequentist approaches; only the column headings survive in this transcript]
Bayesianism versus Frequentism “Bayesians address the question everyone is interested in, by using assumptions no-one believes” “Frequentists use impeccable logic to deal with an issue of no interest to anyone”
What Astrophysicists do well • Glorious pictures • Scientists + Statisticians working together • Sharing data • Making data publicly available • Dealing with large data sets • Visualisation • Funding for Astrostatistics • Statistical software
[Images: the Whirlpool Galaxy; the width of the Z0 resonance, showing 3 light neutrino species]
What Particle Physicists now know
• Δ(ln L) = 0.5 rule
• Unbinned Lmax and goodness of fit
• Prob (data | hypothesis) ≠ Prob (hypothesis | data)
• Comparing 2 hypotheses: Δ(χ²) is not distributed as χ²
• Bounded parameters: Feldman and Cousins
• Use the correct L (Punzi effect)
• Blind analyses
Δ(ln L) = −1/2 rule
If L(μ) is Gaussian, the following definitions of σ are equivalent:
1) RMS of L(μ)
2) 1/√(−d²ln L/dμ²)
3) ln L(μ0 ± σ) = ln L(μ0) − 1/2
If L(μ) is non-Gaussian, these are no longer the same.
“Procedure 3) above still gives an interval that contains the true value of the parameter μ with 68% probability” — not guaranteed (even a Gaussian L(μ) might not).
Heinrich: CDF note 6438 (see the CDF Statistics Committee web page); Barlow: PHYSTAT05
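A sketch of applying rule 3) in practice, on invented Gaussian data (known σ, so L(μ) is exactly Gaussian and the three definitions agree by construction):

```python
# Scan ln L(mu) and take the interval where it lies within 0.5 of its maximum.
import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0
x = rng.normal(10.0, sigma, size=400)        # invented data, known sigma

mu_grid = np.linspace(9.0, 11.0, 2001)
lnL = np.array([-0.5 * np.sum((x - mu) ** 2) / sigma**2 for mu in mu_grid])

inside = lnL >= lnL.max() - 0.5              # rule 3): ln L >= ln L_max - 1/2
lo, hi = mu_grid[inside][0], mu_grid[inside][-1]
print(f"68% interval: [{lo:.3f}, {hi:.3f}]; expected half-width "
      f"{sigma/np.sqrt(len(x)):.3f}")
```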
COVERAGE
How often does the quoted range for a parameter include the parameter’s true value?
N.B. Coverage is a property of the METHOD, not of a particular experimental result.
Coverage can vary with the true value of the parameter.
Study the coverage of different methods for a Poisson parameter μ, from observation of the number of events n.
Hope for: coverage equal to the nominal value (e.g. 68%) for all μ.
Coverage: likelihood approach (not frequentist)
Poisson: P(n;μ) = e^−μ μ^n / n! (Joel Heinrich, CDF note 6438)
Interval from −2 ln λ < 1, where λ = P(n;μ) / P(n;μbest)
This UNDERCOVERS.
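A short sketch of the kind of coverage study Heinrich describes: for the interval defined by −2 ln λ < 1, the coverage at a given true μ can be computed exactly by summing Poisson probabilities over n (the sample μ values are chosen arbitrarily):

```python
# Exact coverage of the likelihood-ratio interval {-2 ln lambda < 1} for a
# Poisson mean, P(n;mu) = exp(-mu) mu^n / n!, with mu_best = n.
import numpy as np
from scipy.stats import poisson

def covers(n, mu):
    loglam = poisson.logpmf(n, mu) - poisson.logpmf(n, max(n, 1e-300))
    return -2.0 * loglam < 1.0

for mu in (0.5, 1.0, 3.0, 10.0):
    ns = np.arange(200)                      # tail beyond 200 is negligible here
    covered = np.array([covers(n, mu) for n in ns])
    print(f"mu = {mu:5.1f}: coverage = {poisson.pmf(ns, mu)[covered].sum():.3f}")
# The result dips below the nominal 0.683 for some mu: it undercovers.
```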
Frequentist central intervals NEVER undercover (conservative at both ends).
Unbinned Lmax and Goodness of Fit?
Find parameters by maximising L.
So a larger L is better than a smaller L.
So Lmax gives Goodness of Fit??
[Plot: Monte Carlo distribution of unbinned Lmax (frequency vs Lmax), with candidate values annotated “Bad”, “Good?”, “Great?”]
Lmax and Goodness of Fit? Example 1
Fit an exponential to times t1, t2, t3, … [Joel Heinrich, CDF 5639]
L = Π (1/τ) exp(−ti/τ), which gives ln Lmax = −N(1 + ln tav)
i.e. it depends only on the AVERAGE t, but is INDEPENDENT OF THE DISTRIBUTION OF t (except for……..)
(The average t is a sufficient statistic.)
The variation of Lmax in Monte Carlo is due to variations in the samples’ average t, but NOT TO A BETTER OR WORSE FIT.
[Plot: pdf vs t — different distributions with the same average t give the same Lmax]
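A toy check of the sufficiency argument (invented numbers): a genuinely exponential sample and a pathological one with every time equal give identical ln Lmax whenever their averages agree:

```python
# ln L_max = -N(1 + ln t_av) depends only on the sample average.
import numpy as np

def lnl_max(t):
    return -len(t) * (1.0 + np.log(t.mean()))

good = np.random.default_rng(1).exponential(2.0, 1000)  # really exponential
bad = np.full(1000, good.mean())                        # every time identical!
print(lnl_max(good), lnl_max(bad))                      # exactly equal
```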
Now for Likelihood
When the parameter changes from λ to τ = 1/λ:
(a’) L does not change: dn/dt = (1/τ) exp{−t/τ}, and so L(τ;t) = L(λ=1/τ;t), because identical numbers occur in the evaluations of the two L’s
BUT
(b’) ∫ L(τ) dτ ≠ ∫ L(λ) dλ — the integral of L is not invariant under a change of parameter
So it is NOT meaningful to integrate L (However,………)
CONCLUSION:
• Integrating L is NOT a recognised statistical procedure
• [Metric dependent: the τ range agrees with τpred, but the λ range is inconsistent with 1/τpred]
• BUT could regard it as a “black box”
• Make it respectable by converting L → Bayes’ posterior: Posterior(λ) ~ L(λ) * Prior(λ) [and Prior(λ) can be constant]
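A numerical check of (a’) and (b’) on a handful of toy decay times (integration range chosen arbitrarily):

```python
# (a') L(tau;t) = L(lambda=1/tau;t) point by point, but
# (b') the integral of L over tau differs from the integral over lambda.
import numpy as np
from scipy.integrate import quad

t = np.array([0.5, 1.2, 0.8, 2.0, 0.3])                  # toy decay times

L_tau = lambda tau: np.prod(np.exp(-t / tau) / tau)
L_lam = lambda lam: np.prod(lam * np.exp(-lam * t))

print(L_tau(2.0), L_lam(0.5))                            # identical values
print(quad(L_tau, 0.01, 50)[0], quad(L_lam, 0.01, 50)[0])  # different integrals
```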
P (Data; Theory) ≠ P (Theory; Data)
Theory = male or female; Data = pregnant or not pregnant
P (pregnant; female) ~ 3%, but P (female; pregnant) >>> 3%
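The same point as a two-line Bayes’ theorem computation, with assumed inputs P(pregnant | female) = 3%, P(pregnant | male) = 0 and equal prior populations:

```python
# Bayes' theorem: P(female | pregnant) from P(pregnant | female).
p_pf, p_pm, p_f = 0.03, 0.0, 0.5          # assumed inputs, for illustration
p_p = p_pf * p_f + p_pm * (1 - p_f)       # total probability of "pregnant"
print(p_pf * p_f / p_p)                   # 1.0 -- vastly bigger than 3%
```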
P (Data; Theory) ≠ P (Theory; Data)
HIGGS SEARCH at CERN: Is the data consistent with the Standard Model, or with the Standard Model + Higgs?
End of Sept 2000: the data were not very consistent with the S.M.
Prob (Data; S.M.) < 1% — a valid frequentist statement
Turned by the press into: Prob (S.M.; Data) < 1%, and therefore Prob (Higgs; Data) > 99%, i.e. “It is almost certain that the Higgs has been seen”
p-value ≠ Prob of the hypothesis being correct
From a conference after-dinner speech: “Of those results that have been quoted as significant at the 99% level, about half have turned out to be wrong!”
Supposed to be funny, but in fact it is perfectly OK: a p-value says nothing about what fraction of quoted discoveries are true.
PARADOX
Histogram with 100 bins, fit with 1 free parameter.
Smin: χ² with NDF = 99 (expected χ² = 99 ± 14).
For our data, Smin(p0) = 90.
Is p1 acceptable if S(p1) = 115?
• YES. Very acceptable χ² probability.
• NO. σp from S(p0 + σp) = Smin + 1 = 91, but S(p1) − S(p0) = 25, so p1 is 5σ away from the best value.
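The two answers in numbers, using the hypothetical fit values on this slide:

```python
# p1 looks fine by absolute chi2 probability, yet is 5 sigma from the best fit.
from math import sqrt
from scipy.stats import chi2

s_min, s_p1, ndf = 90.0, 115.0, 99
print("chi2 probability of p1:", chi2.sf(s_p1, ndf))     # ~0.13: "very acceptable"
print("distance from p0 in sigma:", sqrt(s_p1 - s_min))  # sqrt(25) = 5
```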
χ² with ν degrees of freedom?
1) ν = data − free parameters? Why is this only asymptotic (apart from the Poisson → Gaussian approximation)?
a) Fit a flattish histogram with y = N {1 + 10⁻⁶ cos(x − x0)}, x0 = free parameter
b) Neutrino oscillations: almost degenerate parameters
y ~ 1 − A sin²(1.27 Δm² L/E) — 2 parameters
Small Δm²: y ~ 1 − A (1.27 Δm² L/E)² — effectively 1 parameter
χ² with ν degrees of freedom?
2) Is the difference in χ² distributed as χ²?
H0 is true; also fit with H1, which has k extra parameters.
e.g. Look for a Gaussian peak on top of a smooth background: y = C(x) + A exp{−0.5 ((x − x0)/σ)²}
Is χ²(H0) − χ²(H1) distributed as χ² with ν = k = 3?
Relevant for assessing whether an enhancement in the data is just a statistical fluctuation, or something more interesting.
N.B. Under H0 (y = C(x)): A = 0 (on the boundary of the physical region), and x0 and σ are undefined.
Is the difference in χ² distributed as χ²? Not here: A = 0 sits on the boundary and x0, σ are undefined under H0, so the Δχ² distribution must be determined by Monte Carlo (see the sketch below).
N.B.
• Determining Δχ² for hypothesis H1 when the data are generated according to H0 is not trivial, because there will be lots of local minima.
• If we are interested in a 5σ significance level, this needs lots of MC simulations.
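A minimal sketch of such a Monte Carlo determination, under simplifying assumptions (100 bins with unit Gaussian errors, a flat background under H0, a peak of fixed width whose position is scanned on a grid, which also sidesteps the local-minimum problem):

```python
# Determine the Delta-chi2 distribution for a peak search by Monte Carlo.
import numpy as np

rng = np.random.default_rng(42)
nbins, width = 100, 3.0
x = np.arange(nbins, dtype=float)

def delta_chi2(y):
    """chi2(H0) - chi2(H1) for one pseudo-experiment (unit bin errors)."""
    chi2_h0 = np.sum((y - y.mean()) ** 2)      # H0: flat, best fit = mean
    best = chi2_h0
    for x0 in np.arange(5.0, 95.0, 1.0):       # grid scan of the peak position
        g = np.exp(-0.5 * ((x - x0) / width) ** 2)
        design = np.column_stack([np.ones(nbins), g])
        coef, _, _, _ = np.linalg.lstsq(design, y, rcond=None)
        best = min(best, np.sum((y - design @ coef) ** 2))
    return chi2_h0 - best

# Pseudo-experiments generated according to H0 (flat background of 5)
dchi2 = np.array([delta_chi2(rng.normal(5.0, 1.0, nbins)) for _ in range(2000)])

# At fixed x0 the extra amplitude gives chi2(1), so naively P(>3.84) = 5%;
# maximising over x0 inflates the tail well beyond that -- hence the MC.
print("fraction above 3.84:", np.mean(dchi2 > 3.84))
```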
Getting L wrong: the Punzi effect
Giovanni Punzi @ PHYSTAT2003: “Comments on L fits with variable resolution”
Separating two close signals when the resolution σ varies event by event and is different for the 2 signals, e.g.
1) Signal 1 ~ 1 + cos²θ, Signal 2 isotropic, and different parts of the detector give different σ
2) Fitting M (or τ): different numbers of tracks → different σM (or στ)
Punzi Effect
Events are characterised by xi and σi.
A events are centred on x = 0; B events are centred on x = 1.
L(f)wrong = Π [f G(xi, 0, σi) + (1 − f) G(xi, 1, σi)]
L(f)right = Π [f p(xi, σi; A) + (1 − f) p(xi, σi; B)]
p(S, T) = p(S|T) p(T), so p(xi, σi|A) = p(xi|σi, A) p(σi|A) = G(xi, 0, σi) p(σi|A)
So L(f)right = Π [f G(xi, 0, σi) p(σi|A) + (1 − f) G(xi, 1, σi) p(σi|B)]
If p(σ|A) = p(σ|B), then Lright = Lwrong, but NOT otherwise.
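A toy Monte Carlo sketch of the two likelihoods (a minimal assumed setup, not Punzi’s code, matching the σA = 1, σB = 2, fA = 1/3 row of the table on the next slide):

```python
# Fit the A-fraction f with L_wrong (ignores p(sigma|class)) and L_right.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)
n, f_true = 10000, 1.0 / 3.0
is_a = rng.random(n) < f_true
sigma = np.where(is_a, 1.0, 2.0)               # sigma_i: 1 for A, 2 for B
x = rng.normal(np.where(is_a, 0.0, 1.0), sigma)

def nll_wrong(f):
    lik = f * norm.pdf(x, 0, sigma) + (1 - f) * norm.pdf(x, 1, sigma)
    return -np.sum(np.log(lik))

def nll_right(f):
    # p(sigma|A), p(sigma|B) are delta functions at 1 and 2 in this toy
    lik = (f * norm.pdf(x, 0, sigma) * (sigma == 1.0)
           + (1 - f) * norm.pdf(x, 1, sigma) * (sigma == 2.0))
    return -np.sum(np.log(lik))

for nll, name in [(nll_wrong, "L_wrong"), (nll_right, "L_right")]:
    res = minimize_scalar(nll, bounds=(1e-3, 1 - 1e-3), method="bounded")
    print(f"{name}: f_hat = {res.x:.3f}")      # L_wrong biassed upward (~0.64)
```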
Punzi Effect
Punzi’s Monte Carlo for A: G(x, 0, σA), B: G(x, 1, σB), fA = 1/3

σA     σB       fA(Lwrong)   σf      fA(Lright)   σf
1.0    1.0      0.336(3)     0.08    same         same
1.0    1.1      0.374(4)     0.08    0.333(0)     0
1.0    2.0      0.645(6)     0.12    0.333(0)     0
1–2    1.5–3    0.514(7)     0.14    0.335(2)     0.03
1.0    1–2      0.482(9)     0.09    0.333(0)     0

1) Lwrong is OK for p(σA) = p(σB), but is otherwise BIASSED
2) Lright is unbiassed, but Lwrong can be biassed (enormously)!
3) Lright gives a smaller σf than Lwrong
Explanation of the Punzi bias: σA = 1, σB = 2
[Plots vs x: actual distribution and fitting function, for A events (σ = 1) and B events (σ = 2); NA/NB is variable, but the same for A and B events]
The fit gives an upward bias for NA/NB because (i) that is much better for the A events; and (ii) it does not hurt too much for the B events.
Another scenario for the Punzi problem: PID
[Two plots: A and B peaks in mass M; π and K peaks in time-of-flight TOF]
Original case: the positions of the peaks are constant; σi is variable, with (σi)A ≠ (σi)B.
PID case: the K peak approaches the π peak at large momentum; σi ~ constant, but pK ≠ pπ.
COMMON FEATURE: Separation / Error ≠ constant
Where else??
MORAL: Beware of event-by-event variables whose pdf’s do not appear in L.
Avoiding the Punzi Bias
• Include p(σ|A) and p(σ|B) in the fit (but then, for example, particle identification may be determined more by the momentum distribution than by the PID information), OR
• Fit each range of σi separately, and add: Σ(NA)i → (NA)total, and similarly for B.
The incorrect method using Lwrong takes a weighted average of the (fA)j, assumed to be independent of j.
Talk by Catastini at PHYSTAT05
BLIND ANALYSES
Why a blind analysis? To fix selections, corrections and method without sight of the answer.
Methods of blinding:
• Add a random number to the result * (see the sketch below)
• Study the procedure with simulation only
• Look at only a first fraction of the data
• Keep the signal box closed
• Keep the MC parameters hidden
• Keep the fraction visible for each bin hidden
After the analysis is unblinded, ……..
* Luis Alvarez’s suggestion re the “discovery” of free quarks
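A minimal sketch (an assumed implementation, not any experiment’s actual code) of the first method, the Alvarez-style hidden offset:

```python
# Add a hidden random offset to the result; subtract it only after the
# analysis procedure is frozen. In practice the seed is kept out of sight.
import numpy as np

rng = np.random.default_rng(20060615)          # secret seed in a real analysis
hidden_offset = rng.normal(0.0, 5.0)           # never printed while blinded

def blinded(measurement: float) -> float:
    """What the analysts see while tuning selections and systematics."""
    return measurement + hidden_offset

def unblind(blinded_value: float) -> float:
    """Called exactly once, after the procedure is frozen."""
    return blinded_value - hidden_offset
```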
Conclusions
• Common problems — scope for learning from each other: large data sets, separating signal from background, testing models, searches for signals / discovery
• Targeted workshops, e.g. SAMSI: Jan–May 2006; Banff: July 2006 (limits with nuisance parameters; significance tests; signal–background separation)
• Summer schools: Spain, FNAL, ……
Thanks to Steffen Lauritzen, Lance Miller, Subir Sarkar, SAMSI, Roberto Trotta, ………
Excellent Conference: Very big THANK YOU to Jogesh and Eric !