Standard Model • Focused on early analysis of the W and Z cross sections • W and Z cross sections with electrons • W and Z cross sections with muons • Many aspects of the analysis are data-driven • Triggering and reconstruction efficiencies from tag and probe • Electron fake rate from the QCD missing-ET spectrum • Very tightly coupled to the combined performance groups • Will be an application of standard performance analyses, e.g. efficiencies of the isolated lepton trigger • Some work on taus (discussed further below) • No mention of "pure" QCD; this is still wide open
Interplay between groups • Cannot be exhaustive; illustrate this interplay through several inputs/outputs • Well-operating detector (+DAQ+offline) • Signal reconstruction • Trigger (HLT pass-through at the beginning) • Alignment • EM energy scale • EM calo intercalibration • Material in front of the EM calo • Trigger and identification efficiencies • MC tuning (detector description, physics parameters, …) • Background control with data • Estimate of all systematic uncertainties • Detector (+trigger) → egamma → SM group • Not a step-by-step programme! Iterations needed… • Need good cooperation between communities W/Z electron channel, F.Hubaut
Z(ee) extraction • Fast and robust extraction of the signal in the early data-taking phase • Trigger not discussed here (see dedicated meeting) • Large part of the initial bandwidth dedicated to leptons, no isolation criteria • Selection steps • e10 trigger (single-electron trigger to measure the efficiency from data, see next slide) • Kinematics: 2 EM clusters with pT > 15 GeV, |η| < 2.47, excluding a wide region around the crack (1.3 < |η| < 1.6) • Loose identification cuts: robustness when the detector performance is not understood in detail. Can even use simple criteria based on the EM calorimeter only → unbiased tracker studies • 24 800 ± 200 signal events with 50 pb-1 (CSC) • Large sample, stat. error < 1% • At 10 TeV, reduced by ~1/3 • Data-driven background determination • Fit exponential slope after kinematic cuts, normalise on side-bands • 2300 ± 400 background events estimated for 50 pb-1 W/Z electron channel, F.Hubaut
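A minimal sketch (not from the slides) of the side-band method described above: fit an exponential to the di-electron mass side-bands and integrate the fitted shape under the Z peak to estimate the background yield. The binning, peak window and toy inputs are illustrative assumptions, not the CSC selection.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy binned di-electron mass spectrum (GeV); numbers are illustrative only.
bin_edges = np.arange(60.0, 121.0, 2.0)
centres = 0.5 * (bin_edges[:-1] + bin_edges[1:])
rng = np.random.default_rng(1)
counts = rng.poisson(500 * np.exp(-0.03 * centres)                      # QCD-like continuum
                     + 2000 * np.exp(-0.5 * ((centres - 91.2) / 3.0)**2))  # Z peak

def expo(m, n0, slope):
    """Falling exponential model for the continuum background."""
    return n0 * np.exp(-slope * m)

# Fit only the side-bands; the Z-peak window (80-100 GeV here) is excluded.
sideband = (centres < 80.0) | (centres > 100.0)
popt, _ = curve_fit(expo, centres[sideband], counts[sideband], p0=(500.0, 0.02))

# Background under the peak = fitted exponential summed over the signal window.
signal_window = (centres >= 80.0) & (centres <= 100.0)
n_bkg = expo(centres[signal_window], *popt).sum()
n_sig = counts[signal_window].sum() - n_bkg
print(f"estimated background: {n_bkg:.0f}, signal: {n_sig:.0f}")
```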
Efficiency determination with data (more details in the egamma session) • Tag electron (tight cuts) and probe electron (does it pass the cuts?) from the Z(ee) sample • Measure trigger/reconstruction/identification efficiencies with the Z(ee) data sample • Well-known tag-and-probe method • Single-lepton trigger to allow an unbiased probe • Background contamination taken into account • Medium identification efficiency: • Reproduces differential structures • 2% error on the overall efficiency per electron with 50 pb-1 (integrated over the whole spectrum; mainly limited by the Z sample statistics) W/Z electron channel, F.Hubaut
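A minimal sketch of the tag-and-probe counting, assuming the background contamination in the probe sample has already been estimated; all counts below are placeholders, not the slide's values, and the uncertainty is a simple binomial approximation.

```python
import math

def tag_and_probe_efficiency(n_pass, n_total, n_bkg_pass=0.0, n_bkg_total=0.0):
    """Efficiency from probe counts, subtracting the estimated background
    contamination in the passing and total probe samples."""
    num = n_pass - n_bkg_pass
    den = n_total - n_bkg_total
    eff = num / den
    # Simple binomial uncertainty on the background-subtracted counts
    err = math.sqrt(eff * (1.0 - eff) / den)
    return eff, err

# Illustrative probe counts for the medium identification cut
eff, err = tag_and_probe_efficiency(n_pass=9100, n_total=10000,
                                    n_bkg_pass=150, n_bkg_total=300)
print(f"medium ID efficiency = {eff:.3f} +- {err:.3f}")
```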
Z(ee) cross section measurement s = (Nsignal- Nbackground) / (A ·εtot·Lumi) Acceptance uncertainty (mainly limited knowledge of underlying physics: ISR, PDFs, …) determined with MC Previous slides • Overall uncertainty for 50 pb-1: ± 0.8% (stat) ± 3.5% (syst) ± dL/L • Systematic errors dominate, even with 50 pb-1 • Main systematics from electron selection efficiency (except luminosity) • Comparable to muon channel • Extrapolation to 1 fb-1 • estimated directly on data • limited to ~1.5% by acceptance uncertainties (PDF, ISR, …) • use differential cross sections (vs h and pT) W/Z electron channel, F.Hubaut
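A minimal sketch of the cross-section formula above, using the signal and background yields quoted on the Z(ee) extraction slide; the acceptance A, total efficiency εtot and the uncertainty breakdown are illustrative placeholders, not the CSC values.

```python
import math

def cross_section(n_sig, n_bkg, acceptance, efficiency, lumi_pb):
    """sigma = (N_signal - N_background) / (A * eps_tot * L_int), in pb."""
    return (n_sig - n_bkg) / (acceptance * efficiency * lumi_pb)

# Yields from the slides (50 pb^-1); A and eps_tot are made-up placeholders.
n_sig, n_bkg = 24800.0, 2300.0
A, eps, lumi = 0.45, 0.70, 50.0

sigma = cross_section(n_sig, n_bkg, A, eps, lumi)

# Rough uncertainty propagation: statistics, per-electron efficiency (tag-and-probe,
# treated as correlated between the two electrons), acceptance (PDF/ISR).
rel_stat = math.sqrt(n_sig) / (n_sig - n_bkg)
rel_syst = math.hypot(2 * 0.02, 0.015)
print(f"sigma ~ {sigma:.0f} pb  +- {100*rel_stat:.1f}% (stat) +- {100*rel_syst:.1f}% (syst)")
```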
W(eν) extraction • Selection steps • e20 trigger • Electron: 1 EM cluster with pT > 25 GeV, |η| < 2.47, excluding the crack region; medium identification criterion • Missing ET > 25 GeV • 1 electron only: increase the pT cut and tighten the identification criteria • Not discussed here, but needs detailed detector understanding • 217 100 ± 400 signal events with 50 pb-1 (CSC) • Large sample, stat. error << 1% • At 10 TeV, reduced by ~1/3 • [Plot: transverse mass MTW (GeV)] • QCD background level and shape must be estimated directly with data W/Z electron channel, F.Hubaut
Data-driven background determination (QCD fakes) • [Plot: fit in the γ sample; side-band and signal regions] • Dominant background: jets • Large uncertainties, difficult to simulate, poor MC statistics • Must be measured directly on data • Principle of the method • QCD-enriched sample (98%): γ trigger (g20) and the same kinematic EM-cluster selection → missing-ET shape parametrisation • Normalise to the side-band in the electron sample (Z→ee removed) • Uncertainty on the background contamination ~4% (9200 events) with 50 pb-1 (limited by MC stat.) W/Z electron channel, F.Hubaut
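A minimal sketch of the side-band normalisation described above: take the missing-ET shape from the photon-triggered (QCD-enriched) control sample, scale it to the electron sample in the low-MET side-band, and extrapolate the scaled template above the signal-region cut. The binning, window boundaries and toy inputs are illustrative assumptions.

```python
import numpy as np

def estimate_qcd_background(met_electron, met_control, sideband=(0.0, 15.0),
                            signal_cut=25.0, nbins=50, met_max=100.0):
    """Normalise the missing-ET shape of the QCD-enriched control sample to the
    electron sample in the low-MET side-band, then integrate the scaled
    template above the signal-region cut."""
    bins = np.linspace(0.0, met_max, nbins + 1)
    h_ele, _ = np.histogram(met_electron, bins=bins)
    h_ctl, _ = np.histogram(met_control, bins=bins)

    centres = 0.5 * (bins[:-1] + bins[1:])
    in_sb = (centres >= sideband[0]) & (centres < sideband[1])
    scale = h_ele[in_sb].sum() / h_ctl[in_sb].sum()   # side-band normalisation

    in_sr = centres >= signal_cut
    return scale * h_ctl[in_sr].sum()

# Toy inputs just to make the sketch runnable
rng = np.random.default_rng(2)
met_electron = rng.exponential(12.0, 50_000)    # QCD-dominated at low MET
met_control = rng.exponential(12.0, 200_000)    # photon-triggered template
print("estimated QCD events above 25 GeV:",
      round(estimate_qcd_background(met_electron, met_control)))
```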
W(eν) cross section measurement • σ = (Nsignal − Nbackground) / (A · εtot · Lint) • Acceptance uncertainty: • only theoretical (ISR, PDFs, …) • impact of missing-ET scale and resolution uncertainties has to be quantified • Overall uncertainty for 50 pb-1: ± 0.2% (stat) ± 5% (syst) ± ΔL/L • Systematic errors dominate largely with 50 pb-1 • mainly from the background uncertainty (except luminosity) • Luminosity uncertainty vanishes in σ ratios, e.g. σW/σZ • Comparable precision to the muon channel (for which the background is less important; Z→μμ dominates) • Extrapolation to 1 fb-1 • estimated directly on data • stringent test of QCD • limited to ~2.5% by acceptance uncertainties (PDF, ISR, …) W/Z electron channel, F.Hubaut
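A short sketch of why the luminosity uncertainty vanishes in the ratio σW/σZ: the same integrated luminosity enters both cross sections and cancels algebraically, so only the uncorrelated parts (backgrounds, efficiencies, acceptances) propagate. The A·ε values and the error treatment below are illustrative placeholders.

```python
import math

def xsec(n_sig, n_bkg, acc_times_eff, lumi):
    return (n_sig - n_bkg) / (acc_times_eff * lumi)

lumi = 50.0                                  # pb^-1; cancels in the ratio below
sigma_w = xsec(217_100, 9_200, 0.30, lumi)   # A*eps values are made up
sigma_z = xsec(24_800, 2_300, 0.20, lumi)

ratio = sigma_w / sigma_z                    # independent of lumi by construction
# Only uncertainties uncorrelated between W and Z remain; the common
# delta-L/L term drops out of the ratio.
rel_err = math.hypot(0.05, 0.035)            # slide-level syst. on W and Z, taken as uncorrelated
print(f"sigma_W / sigma_Z ~ {ratio:.1f} +- {100*rel_err:.1f}%")
```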
Hadronic taus: means for identification • Tracking: object with low track multiplicity (1 or 3 tracks), tracks more collimated than for an "average jet" (invariant mass, weighted width of the track system); the decay length makes it possible to use the impact parameter and the transverse flight path (three-prong); isolation cone from other tracks • Calorimetry: collimated deposition in the EM calorimeter (radius, width in strips); use shower-shape variables; strong EM component for single prong (~50% of the energy carried by π0s); reconstruct π0 subclusters; isolation cone in both EM and HAD components E.Richter-Was, UJ/IFJ-PAN ATLAS CP Week, 9 June 2008
Reconstruction • Track-seeded and calo-seeded algorithms integrated for rel. 14.2.0 (A.Kaczmarska, S. Lai, N. Meyer, L. Janyst) • "Track-seed and calo-seed": • use good-quality tracks (pT > 6 GeV) as the initial seed • candidates with 1-8 quality tracks (pT > 1 GeV) within ΔR < 0.2 of the seed • η, φ from the pT-weighted tracks; check charge consistency (|Q| ≤ 2) • find a matching cone-0.4 TopoJet (ET > 10 GeV, ΔR < 0.2) as the calo-seed • ET (calorimetric) using H1-style calibration on cells from the calo-seed • ET (energy-flow) with the energy-flow method (EM calo, separating neutral/charged sources of energy) • reconstruct π0 subclusters • "Calo-seed only": • use cone-0.4 TopoJets (ET > 10 GeV) as the calo-seed when no matching seed is found from tracking • define η, φ using the calo-seed (η corrected for the z vertex) • looser track-quality selection, track pT > 1 GeV • ET (calorimetric) using H1-style calibration on cells from the calo-seed • "Track-seed only": small fraction (few %) of the total E.Richter-Was, UJ/IFJ-PAN ATLAS CP Week, 9 June 2008
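A minimal sketch of the seed-matching bookkeeping described above: pair track seeds with calo seeds within ΔR < 0.2 and label each candidate "both", "calo-only" or "track-only". The thresholds follow the slide; the data structures and matching loop are an illustrative assumption, not the actual ATLAS code.

```python
from dataclasses import dataclass
import math

@dataclass
class Seed:
    eta: float
    phi: float
    pt: float

def delta_r(a, b):
    dphi = math.remainder(a.phi - b.phi, 2 * math.pi)  # wrap to [-pi, pi]
    return math.hypot(a.eta - b.eta, dphi)

def classify_tau_candidates(track_seeds, calo_seeds, dr_match=0.2):
    """Pair track seeds (good tracks, pT > 6 GeV) with calo seeds
    (cone-0.4 TopoJets, ET > 10 GeV) within dR < 0.2 and label each
    candidate 'both', 'track-only' or 'calo-only'."""
    candidates, matched_calo = [], set()
    for t in track_seeds:
        match = next((i for i, c in enumerate(calo_seeds)
                      if delta_r(t, c) < dr_match), None)
        if match is not None:
            matched_calo.add(match)
            candidates.append(("both", t, calo_seeds[match]))
        else:
            candidates.append(("track-only", t, None))
    for i, c in enumerate(calo_seeds):
        if i not in matched_calo:
            candidates.append(("calo-only", None, c))
    return candidates
```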
Reconstruction • [Plots: candidate categories "both seeds", "only calo-seed", "only track-seed" for the QCD J2 sample (pT_hard = 35-70 GeV) and for Z→ττ] • Overall purity in the Z→ττ sample: 57% for "both seeds" (yellow), 23% for "only calo-seed" (red) E.Richter-Was, UJ/IFJ-PAN ATLAS CP Week, 9 June 2008
Tracking-1 • Many new changes in release 14 • Improved tuning of tracking following CSC • New functionality: BackTracking, TRTOnly, ConversionFinder, V0Finder, LowPtTracking • Geometry updates • Detector condition information introduced • Tracking for startup: single-beam, beam-halo Tracking summary M.Elsing
Tracking-2 • Handling real data • Noisy SCT and TRT modules • Condition service introduced, allows bad channel/module masking • This links in with monitoring work from Saverio, Dan, Aidan and Mary • Running in different configurations: Pixel+SCT+TRT or SCT+TRT • Need tuning Tracking summary M.Elsing
Tracking-3 • Mass resolution after alignment is a worry • Z-mass is sensitive to weak eigenmodes Tracking summary M.Elsing
Vertexing • Primary vertex and beamspot • How to use the beamspot to find the PV, and then use the PV to determine the beamspot • Pileup • Vertex code can handle pileup • But more testing required, e.g. identification of the correct PV for b-tagging, identification of the correct PV at high levels of pileup • Technical issues • What to do? • Study pileup in min-bias – Craig • Study reconstruction of the PV in pileup for b-tagging – ? • Maybe using a top sample? Vertex summary: A.Wildauer
B-tagging at startup • Avoid using PDFs • Related to PV finding discussed earlier b-tagging summary L.Vacavant
Commissioning • Once we have some understanding of tracking/alignment (previous talks) • First taggers: • Track counting: no calibration • JetProb: negative d0/σ from data • SV0: simple inclusive secondary vertex • Taggers relying on likelihood ratios for the b, u(, c) hypotheses come next • Simple baseline: switch on the extra features progressively • V0 rejection • Dedicated treatment for shared tracks, other categories • Samples: • min-bias, QCD: resolution function for JetProb • QCD, bbbar: JetProb, SV0 • muon+jet: b-tagging efficiency measured in data • ttbar • … • Monitor with jet events b-tagging summary L.Vacavant
Early jet taggers • DCP with BS using jets • Look at track jets • Optimise track selection for use in this • Introduce the PV, remove V0s and conversions • Secondary vertex taggers • Introduce L3D/σ, no PDF used • Very strong links with the tracking groups • Optimise track selection, tracking performance in jets, vertex reconstruction b-tagging summary L.Vacavant
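A minimal sketch of the JetProb-style combination named on the commissioning slide: each track gets a probability of being compatible with the primary vertex from its signed d0 significance, and the per-track probabilities are combined into a jet-level probability. The Gaussian resolution function below is a placeholder; in the plan above it would be measured from the negative-significance tail in data.

```python
import math

def track_probability(d0_significance, resolution_sigma=1.0):
    """Probability that a track with this d0 significance is compatible with
    the primary vertex, using a placeholder Gaussian resolution function."""
    return math.erfc(abs(d0_significance) / (math.sqrt(2.0) * resolution_sigma))

def jet_probability(d0_significances):
    """Standard JetProb-style combination of per-track probabilities for
    tracks with positive impact-parameter significance."""
    probs = [track_probability(s) for s in d0_significances if s > 0]
    if not probs:
        return 1.0
    p0 = math.prod(probs)
    n = len(probs)
    return p0 * sum((-math.log(p0)) ** k / math.factorial(k) for k in range(n))

# Light-like jet (significances consistent with zero) vs b-like jet (large positive IP)
print(jet_probability([0.3, -0.5, 0.8, 1.1]))   # O(0.5): light-like
print(jet_probability([4.2, 6.1, 3.3, 5.0]))    # very small: b-like
```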
What to do? • Kenny • Top in dileptons • Start looking at a robust early tagger – JetProb • Mary • SV0 and then SV1, etc. • Saverio will talk to Richard Hawkings • Tracking interface via Craig and Will
TODO List for taus (I) E.Richter-Was, UJ/IFJ-PAN ATLAS CP Week, 9 June 2008
TODO List for taus (II) E.Richter-Was, UJ/IFJ-PAN ATLAS CP Week, 9 June 2008
TODO List for taus (III) E.Richter-Was, UJ/IFJ-PAN ATLAS CP Week, 9 June 2008