
Searching for Single Top Using Decision Trees


Presentation Transcript


  1. Searching for Single Top Using Decision Trees. G. Watts (UW), for the DØ Collaboration. 5/13/2005 – APSNW Particles I.

  2. Single Top Challenges. Overwhelming background! Straight cuts (and counting experiments) have difficulty taking advantage of correlations. Multivariate cuts (and shape fitting) are designed to take advantage of correlations and irreducible backgrounds.

  3. b Asymmetries in t-Channel Production. Lots of variables give small separation. Pair production (use ME, phase space, etc.).

  4. Combine Variables! Multivariate likelihood fit: 7 variables means 7 dimensions… Neural network: many inputs and a single output; trained on signal and background samples; well understood and mostly accepted in HEP. Decision tree: many inputs and a single output; trained on signal and background samples; used mostly in life sciences and business (MiniBooNE, physics/0408124).

  5. Decision Tree. Analysis flow: trained decision tree, then binned likelihood fit, then limit.

  6. Internals of a Trained Tree. “You can see a decision tree”: it is a rooted binary tree, and every event belongs to a single leaf node (see the sketch below)!
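
  A minimal sketch in Python (not the DØ code) of such a rooted binary tree and of how an event is walked from the root to its unique leaf; the variable names, cut values, and purities below are illustrative assumptions.

    class Node:
        """One node of a rooted binary tree; leaf nodes carry a purity."""
        def __init__(self, var=None, cut=None, passed=None, failed=None, purity=None):
            self.var, self.cut = var, cut              # cut variable and cut value
            self.passed, self.failed = passed, failed  # child nodes
            self.purity = purity                       # filled only for leaf nodes

        def is_leaf(self):
            return self.passed is None and self.failed is None

    def find_leaf(node, event):
        """Walk an event down the cuts; it always ends on exactly one leaf."""
        while not node.is_leaf():
            node = node.passed if event[node.var] > node.cut else node.failed
        return node

    # Illustrative tree: cut on HT at the root, then on the leading jet pT.
    tree = Node("HT", 200.0,
                passed=Node("jet1_pt", 40.0,
                            passed=Node(purity=0.70), failed=Node(purity=0.40)),
                failed=Node(purity=0.10))
    print(find_leaf(tree, {"HT": 250.0, "jet1_pt": 55.0}).purity)   # -> 0.7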

  7. Training. Determine a branch point: calculate the Gini improvement as a function of an interesting variable (HT in this case) and choose the cut value giving the largest improvement. Repeat for all interesting variables (HT, jet pT, angular variables, etc.); the best improvement is this node’s decision (see the sketch below).
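
  A rough sketch of this per-node training step, assuming weighted events are stored as (value, weight, is_signal) tuples and scanning candidate HT cuts; it uses the weighted Gini defined on the next slide, and none of the names come from the actual DØ implementation.

    def gini(ws, wb):
        """Weighted Gini of a node: zero for a pure signal or pure background node."""
        return ws * wb / (ws + wb) if (ws + wb) > 0 else 0.0

    def best_cut(events):
        """events: list of (value, weight, is_signal). Returns (cut, Gini improvement)."""
        ws_tot = sum(w for _, w, sig in events if sig)
        wb_tot = sum(w for _, w, sig in events if not sig)
        parent = gini(ws_tot, wb_tot)
        best = (None, 0.0)
        for cut, _, _ in events:                       # try a cut at every event value
            ws1 = sum(w for v, w, sig in events if sig and v <= cut)
            wb1 = sum(w for v, w, sig in events if not sig and v <= cut)
            gain = parent - gini(ws1, wb1) - gini(ws_tot - ws1, wb_tot - wb1)
            if gain > best[1]:
                best = (cut, gain)
        return best

    # Repeat best_cut for every interesting variable (HT, jet pT, angular
    # variables, ...) and keep the variable/cut pair with the largest
    # improvement as this node's decision.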

  8. Gini. The process requires a variable to optimize separation. With Ws the weight of signal events and Wb the weight of background events in a node, the purity is p = Ws / (Ws + Wb) and the Gini index is G = (Ws + Wb) p (1 - p) = Ws Wb / (Ws + Wb). G is zero for pure background or pure signal! (A short numerical example follows.)
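
  A quick numerical check of these definitions, with purely illustrative weights:

    ws, wb = 30.0, 10.0                  # illustrative signal and background weights
    p = ws / (ws + wb)                   # purity = 0.75
    g = (ws + wb) * p * (1.0 - p)        # Gini = ws*wb/(ws+wb) = 7.5
    # A pure node (ws = 0 or wb = 0) gives g = 0, as stated above.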

  9. Gini Improvement. A cut splits the data sample S into two subsamples, S1 and S2. For each node the improvement is GI = G(S) – G(S1) – G(S2). Repeat the process for each subdivision of the data (see the sketch below).
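
  Building on the Node and best_cut sketches above, the "repeat for each subdivision" step might look like the following; stop_here and make_leaf are placeholders for the stopping rule and leaf construction described on the next slide.

    def grow(events):
        """Recursively split a weighted event sample into a tree of cuts."""
        if stop_here(events):
            return make_leaf(events)
        cut, improvement = best_cut(events)
        if cut is None or improvement <= 0.0:          # no split improves the Gini
            return make_leaf(events)
        s1 = [e for e in events if e[0] <= cut]        # subsample failing the cut
        s2 = [e for e in events if e[0] > cut]         # subsample passing the cut
        return Node("HT", cut, passed=grow(s2), failed=grow(s1))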

  10. And Cut… At some point the splitting stops and a leaf is generated; we used the statistical sample error (# of events) as the criterion. Determine the purity of each leaf and use the tree as an estimator of purity: each event belongs to a unique leaf, and that leaf’s purity is the estimator for the event (see the sketch below).
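
  A sketch of the stopping rule and of the leaf purity used as the per-event estimator; the minimum-event threshold is an illustrative stand-in for the statistical-error criterion, and Node and find_leaf are the helpers from the earlier sketches.

    def stop_here(events, min_events=100):
        return len(events) < min_events                # statistical sample error (# of events)

    def make_leaf(events):
        ws = sum(w for _, w, sig in events if sig)
        wb = sum(w for _, w, sig in events if not sig)
        return Node(purity=ws / (ws + wb))             # the purity of this leaf

    def dt_output(tree, event):
        """The tree's estimator for an event is the purity of its unique leaf."""
        return find_leaf(tree, event).purity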

  11. DT in the Single Top Search at DØ. Backgrounds: W+jets, QCD (fake leptons), and top pair production. Two DTs are used: one trained on signal with Wbb as background, and one trained on signal with tt → lepton + jets as background. This part is identical to a NN-based analysis. Separate DTs are trained for the muon and electron channels. A 2D histogram of the two DT outputs is used in the binned likelihood fit (see the sketch below).
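
  A hedged sketch of how the two DT outputs could be binned for the likelihood fit; the uniform random numbers are stand-ins for real per-event DT outputs, and numpy is used purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    dt_vs_wbb = rng.uniform(0.0, 1.0, size=1000)     # output of the DT trained against Wbb
    dt_vs_ttbar = rng.uniform(0.0, 1.0, size=1000)   # output of the DT trained against tt -> l+jets
    counts, xedges, yedges = np.histogram2d(dt_vs_wbb, dt_vs_ttbar,
                                            bins=10, range=[[0, 1], [0, 1]])
    # The 10x10 bin contents ("counts") are what enter the binned likelihood fit.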

  12. Results. Expected limits: s-channel 4.5 pb (NN: 4.5), t-channel 6.4 pb (NN: 5.8). Actual limits: s-channel 8.3 pb (NN: 6.4), t-channel 8.1 pb (NN: 5.0). Expected results are close to the NN.

  13. Future of the Analysis. Use a single decision tree trained against all backgrounds. Pruning: train until each leaf has only a single event, then recombine leaves (pruning) using a statistical estimator. Boosting: combine multiple trees, each weighted, training each tree on an event sample in which mis-classified events have their weights enhanced (see the sketch below).
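
  An AdaBoost-style sketch of the boosting idea described above: each tree gets a weight, and before the next tree is trained the events mis-classified by the current tree have their weights enhanced. The one-cut train_tree is a stand-in for a real decision-tree trainer, and the labels y are assumed to be +1 (signal) or -1 (background).

    import numpy as np

    def train_tree(x, y, w):
        # Stand-in trainer: a single cut at the median of x (a real DT would go here).
        cut = np.median(x)
        return lambda xs: np.where(xs > cut, 1, -1)

    def boost(x, y, n_trees=10):
        w = np.ones(len(x)) / len(x)                   # start from equal event weights
        trees, alphas = [], []
        for _ in range(n_trees):
            tree = train_tree(x, y, w)
            pred = tree(x)
            err = np.clip(w[pred != y].sum() / w.sum(), 1e-6, 1.0 - 1e-6)
            alpha = 0.5 * np.log((1.0 - err) / err)    # this tree's weight in the sum
            w = w * np.exp(-alpha * y * pred)          # enhance mis-classified event weights
            w = w / w.sum()
            trees.append(tree)
            alphas.append(alpha)
        # The combined estimator is the weighted sum of the individual tree outputs.
        return lambda xs: sum(a * t(xs) for a, t in zip(alphas, trees))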

  14. References & Introduction. MiniBooNE paper: physics/0408124. “Recent Advances in Predictive (Machine) Learning”, Jerome H. Friedman, conference proceedings. These and other links are on my web page: http://d0.phys.washington.edu/~gwatts/research/conferences

  15. Conclusions. Decision trees are good: the model is obvious in the form of a binary tree; they are not as sensitive to outliers in the input data as other methods; they easily accommodate integer inputs (NJets) or missing variable inputs; they are easy to implement (several months to go from scratch to working code); and there are no hidden nodes to deal with, nor separate training of background or other issues. Decision trees aren’t so good: well-understood input variables are a must (similar for neural networks, of course); minor changes in the input events can make for major changes in tree layout and results; and the estimator is not a continuous function.
