Retraining Kaon Neural Net
Kalanand Mishra, University of Cincinnati
Motivation
• This exercise is aimed at improving the performance of the KNN selectors.
• Kaon PID control samples are obtained from the D* decay: D*+ → D0[K−π+] πs+. The track selection and cuts used to obtain the control sample are described in detail in BAD 1056 (author: Sheila Mclachlin).
• The original kaon neural net (KNN) training was done by Giampiero Mancinelli and Stephen Sekula circa 2000 (analysis-3), using MC events (they did not use the PID control sample). They used four neural net input variables: the likelihoods from the SVT, DCH, and DRC (global), and the kaon momentum.
• I intend to use two additional input variables: the track-based DRC likelihood and the polar angle (θ) of the kaon track.
• I have started the training with the PID control sample (Run 4). I will repeat the same exercise for the MC sample and also for truth-matched MC events.
• Because of the higher statistics and better resolution in the control sample available now, I started with a purer sample (by applying tighter cuts).
• Many thanks to Kevin Flood and Giampiero Mancinelli for helping me get started and explaining the steps involved.
K−π+ invariant mass in the control sample
• With P* > 1.5 GeV/c: purity within 1σ = 97%.
• With no P* cut: purity within 1σ = 96%.
• Conclusion: the P* cut improves the signal purity. We will go ahead with this cut (see the purity-extraction sketch below).
• Other cuts: K−π+ vertex probability > 0.01, and the kaon track is required to be within the DIRC acceptance.
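As an illustration of how the quoted purities could be extracted, here is a minimal sketch that fits the K−π+ mass histogram with a Gaussian signal plus a linear background and integrates both within ±1σ of the fitted mean. The model, binning, mass window, and starting values are my assumptions, not the actual fit behind the slide.

```python
# Hedged sketch, not the actual fit used on the slide: model the K-pi mass
# spectrum as a Gaussian signal on a linear background and compute the
# purity S/(S+B) within +/- 1 sigma of the fitted peak.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def model(m, a, mu, sigma, b0, b1):
    # Counts per bin: Gaussian with amplitude `a` plus a linear background.
    return a * norm.pdf(m, mu, sigma) + b0 + b1 * m

def purity_within_1sigma(mass, bins=100, window=(1.80, 1.93)):
    counts, edges = np.histogram(mass, bins=bins, range=window)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    # Starting values: D0 mass ~ 1.8648 GeV/c^2, ~5 MeV resolution (assumed)
    p0 = [counts.max() * 0.005 * np.sqrt(2 * np.pi), 1.8648, 0.005,
          max(counts.min(), 1.0), 0.0]
    (a, mu, sigma, b0, b1), _ = curve_fit(model, centers, counts, p0=p0)
    lo, hi = mu - sigma, mu + sigma
    # Convert the per-bin densities back to event counts inside +/- 1 sigma
    s = a * (norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)) / width
    b = (b0 * (hi - lo) + 0.5 * b1 * (hi**2 - lo**2)) / width
    return s / (s + b)
```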
|mD* − mD0| distribution in the control sample
• Conclusion: the P* cut does not affect the ∆m resolution.
Momentum and cos θ distributions
• [Plots: kaon P, pion P, kaon cos θ, pion cos θ.]
• The P distributions are very similar for K and π; the cos θ distributions are almost identical for K and π.
plab vs. cos θ distribution
• [Plots: kaon and pion.]
• Conclusion: almost identical distributions for the kaon and the pion, except on the vertical left edge, where soft pions make the boundary slightly fuzzy.
Purity as a function of kaon momentum
• Purity in the six momentum bins shown: 97%, 98%, 93%, 98%, 98%, 98%.
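One plausible way to reproduce such binned numbers, sketched under assumptions: count candidates in a signal window around the D0 mass and subtract a sideband-scaled background estimate, separately in each kaon-momentum bin. The window and sideband edges here are illustrative, not the ones used for the slide.

```python
# Hedged sketch of a binned purity estimate via sideband subtraction.
# `mass` and `p_kaon` are per-candidate arrays; all windows are illustrative.
import numpy as np

def binned_purity(mass, p_kaon, p_edges,
                  sig_win=(1.85, 1.88), sb=((1.80, 1.83), (1.90, 1.93))):
    purities = []
    for lo, hi in zip(p_edges[:-1], p_edges[1:]):
        m = mass[(p_kaon >= lo) & (p_kaon < hi)]
        n_win = np.sum((m > sig_win[0]) & (m < sig_win[1]))
        # Scale the sideband counts to the signal-window width
        n_sb = sum(np.sum((m > a) & (m < b)) for a, b in sb)
        sb_width = sum(b - a for a, b in sb)
        n_bkg = n_sb * (sig_win[1] - sig_win[0]) / sb_width
        purities.append((n_win - n_bkg) / n_win if n_win else float("nan"))
    return purities
```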
NN input variables
• The input variables are: P, θ, svt-lh, dch-lh, glb-lh, trk-lh.
• [Plots: scaled P, scaled θ (one plotted quantity is labelled "not an input var"), and the SVT likelihood.]
NN input variables (continued)
• The input variables are: P, θ, svt-lh, dch-lh, glb-lh, trk-lh.
• [Plots: DCH likelihood, DRC-global likelihood, DRC-track likelihood.]
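The "scaled" panel titles suggest the kinematic inputs are rescaled before training. A minimal sketch of one plausible scaling, assuming the likelihoods are already in [0, 1]; the actual transformation used in the training is not given on the slides, and `p_max` is a made-up illustrative endpoint.

```python
# Hedged sketch of input preparation for the six NN variables.
import numpy as np

def scale_inputs(p, theta, svt_lh, dch_lh, glb_lh, trk_lh,
                 p_max=4.0):                       # GeV/c, assumed endpoint
    p_scaled = np.clip(p / p_max, 0.0, 1.0)        # momentum -> [0, 1]
    theta_scaled = theta / np.pi                   # polar angle -> [0, 1]
    # Likelihoods are assumed to already lie in [0, 1] and are passed through
    return np.column_stack([p_scaled, theta_scaled,
                            svt_lh, dch_lh, glb_lh, trk_lh])
```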
A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ
• [Plot: NN output at the optimal point.]
A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ
• [Plot: signal performance.]
A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ
• [Plot: background performance.]
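Given the signal and background output distributions on the last two slides, the working-point performance reduces to kaon efficiency and pion rejection versus the cut on the network output. A short sketch, assuming labelled control-sample tracks and a network output in [0, 1]:

```python
# Hedged sketch: kaon efficiency and pion rejection as a function of the
# cut on the NN output. `is_kaon` is a boolean label array.
import numpy as np

def eff_rej_curve(nn_out, is_kaon, cuts=np.linspace(0.0, 1.0, 101)):
    kaons, pions = nn_out[is_kaon], nn_out[~is_kaon]
    eff = [(kaons > c).mean() for c in cuts]       # fraction of kaons kept
    rej = [(pions <= c).mean() for c in cuts]      # fraction of pions removed
    return np.array(eff), np.array(rej)
```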
A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ
• Performance vs. the number of hidden nodes: saturates at around 18 nodes (a sketch of such a scan follows below).
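A sketch of how such a hidden-node scan could be run; the original training used a different neural-net package, so scikit-learn's MLPClassifier stands in purely for illustration, with the six input columns assumed to be prepared as above.

```python
# Hedged sketch of a hidden-node scan: train a one-hidden-layer MLP for each
# node count and compare a single performance figure (ROC AUC) on held-out data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def scan_hidden_nodes(X, y, node_counts=range(2, 31, 2)):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    scores = {}
    for n in node_counts:
        net = MLPClassifier(hidden_layer_sizes=(n,), max_iter=500,
                            random_state=0)
        net.fit(X_tr, y_tr)
        scores[n] = roc_auc_score(y_te, net.predict_proba(X_te)[:, 1])
    return scores   # expect the score to plateau, here near ~18 nodes
```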
Summary
• I have set up the machinery and started training the kaon neural net.
• One way to proceed is to include P and θ as input variables after flattening the sample in the P–θ plane (to remove the built-in kinematic bias spread across this plane); a sketch of such a flattening follows below.
• The other way is to do the training in bins of P and cos θ. This approach seems more robust, but it comes with more overhead and requires more time and effort. Also, it may or may not have a performance advantage over the first approach.
• By analyzing the performance of the neural net on a sample with both of these approaches, we will decide which way to go.
• The performance of the neural net will be analyzed in terms of kaon efficiency vs. pion rejection [and also kaon efficiency vs. pion rejection as a function of both momentum and θ].
• Stay tuned!
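For the first approach above, here is a minimal sketch of flattening weights in the P–θ plane: each event is weighted by the inverse occupancy of its 2D bin, so the weighted training sample is uniform in P and θ and the network cannot simply learn the control sample's kinematics. The binning is an assumption.

```python
# Hedged sketch: per-event weights that flatten the sample in the P-theta plane.
import numpy as np

def flattening_weights(p, theta, p_bins=20, theta_bins=20):
    counts, p_edges, t_edges = np.histogram2d(p, theta,
                                              bins=[p_bins, theta_bins])
    # Locate each event's 2D bin (clip handles values on the upper edges)
    ip = np.clip(np.digitize(p, p_edges) - 1, 0, p_bins - 1)
    it = np.clip(np.digitize(theta, t_edges) - 1, 0, theta_bins - 1)
    occ = counts[ip, it]
    w = np.where(occ > 0, 1.0 / occ, 0.0)          # inverse bin occupancy
    return w * len(p) / w.sum()                    # normalize to unit mean
```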