
Discriminative Training of Chow-Liu tree Multinet Classifiers



Presentation Transcript


  1. Discriminative Training of Chow-Liu tree Multinet Classifiers Huang, Kaizhu Dept. of Computer Science and Engineering, CUHK

  2. Outline • Background • Classifiers • Discriminative classifiers • Generative classifiers • Bayesian Multinet Classifiers • Motivation • Discriminative Bayesian Multinet Classifiers • Experiments • Conclusion

  3. Discriminative Classifiers (e.g., SVM) • Directly maximize a discriminative function
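The idea on this slide can be sketched in code. The following is an illustration, not from the slides: it trains logistic regression by gradient ascent on the conditional log-likelihood, standing in for the SVM in the figure (a margin-maximizing solver would be longer); all data and function names are made up for the example.

```python
import numpy as np

# Illustrative sketch: a discriminative classifier directly fits a decision
# function by maximizing a discriminative objective. Here the objective is
# the conditional log-likelihood sum_i log P(y_i | x_i; w); an SVM would
# instead maximize the margin.

def train_logistic(X, y, lr=0.1, steps=500):
    """Gradient ascent on the conditional log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # P(y=1 | x; w)
        w += lr * X.T @ (y - p)            # gradient of the log-likelihood
    return w

# Toy separable data; the bias is folded in as a constant first feature.
X = np.array([[1, 0.0], [1, 1.0], [1, 2.0], [1, 3.0]])
y = np.array([0, 0, 1, 1])
w = train_logistic(X, y)
pred = (X @ w > 0).astype(int)   # decision rule: sign of w . x
```

Note that no class-conditional density P(x|C) is ever estimated; only the decision boundary is learned, which is the defining contrast with the generative classifiers on the next slide.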

  4. Generative Classifiers • Estimate the class-conditional distributions P1(x|C1) and P2(x|C2) for each class, and then use Bayes' rule to perform classification
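This recipe can be sketched as follows. The sketch is illustrative and not from the slides: it models each class-conditional density P(x|C) as a 1-D Gaussian (the slides' figure only shows two generic densities), and all sample values are made up.

```python
import numpy as np

# Illustrative sketch: a generative classifier estimates P(x | C) per class
# from that class's data, then applies Bayes' rule:
#   P(C | x) proportional to P(x | C) P(C).

def fit_gaussian(x):
    """Estimate (mu, sigma) of a 1-D Gaussian class-conditional density."""
    return x.mean(), x.std(ddof=0) + 1e-9

def log_gauss(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

x1 = np.array([0.0, 0.2, 0.4])   # samples labelled C1
x2 = np.array([2.0, 2.2, 2.4])   # samples labelled C2
(mu1, s1), (mu2, s2) = fit_gaussian(x1), fit_gaussian(x2)
prior1 = prior2 = 0.5            # equal class priors

def classify(x):
    """Bayes' rule in log space: pick the class with the larger posterior."""
    score1 = log_gauss(x, mu1, s1) + np.log(prior1)
    score2 = log_gauss(x, mu2, s2) + np.log(prior2)
    return 1 if score1 > score2 else 2
```

For instance, `classify(0.3)` returns class 1 and `classify(2.1)` returns class 2. Because the model carries full densities P1 and P2 rather than just a boundary, it supports the marginalization trick used for missing inputs on slide 6.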

  5. Comparison Example of Missing Information: From left to right: Original digit, Cropped and resized digit, 50% missing digit, 75% missing digit, and occluded digit.

  6. Comparison (Continued) • Discriminative classifiers cannot deal with missing-information problems easily. • Generative classifiers provide a principled way to handle missing-information problems. • When part of the input is missing, we can use the marginalized P1 and P2 to perform classification
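The marginalization step can be made concrete with a small example. This sketch is not from the slides; the joint tables below are hypothetical numbers chosen only to show the sums.

```python
import numpy as np

# Illustrative sketch: with a generative model, a missing feature is
# marginalized out of the class-conditional distribution rather than
# imputed. P(x1, x2 | C) is a discrete joint table over two binary
# features (rows: x1, columns: x2); the values are made up.
P1 = np.array([[0.4, 0.3],
               [0.2, 0.1]])   # P(x1, x2 | C1)
P2 = np.array([[0.1, 0.1],
               [0.3, 0.5]])   # P(x1, x2 | C2)

# Suppose x2 = 1 is observed but x1 is missing: sum x1 out.
p1 = P1[:, 1].sum()   # P(x2=1 | C1) = 0.3 + 0.1 = 0.4
p2 = P2[:, 1].sum()   # P(x2=1 | C2) = 0.1 + 0.5 = 0.6

# With equal priors, Bayes' rule picks the larger marginal likelihood.
predicted = 1 if p1 > p2 else 2
```

A discriminative classifier such as an SVM has no comparable operation, since it stores only a decision function of the complete input, which is the comparison the next slide illustrates.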

  7. Handling the Missing Information Problem • Comparison of SVM and TJT (a generative model)

  8. Motivation • It seems that a good classifier should combine the strategies of discriminative classifiers and generative classifiers • Our work trains one of the generative classifiers, the generative Bayesian Multinet classifier, in a discriminative way

  9. Roadmap of our work

  10. How does our work relate to other work? • 1. Jaakkola and Haussler, NIPS 98. Difference: our method performs a reverse process, from generative classifiers to discriminative classifiers. • 2. Discriminative training of HMM and GMM generative classifiers: Beaufays et al., ICASSP 99; Hastie et al., JRSS 96. Difference: our method is designed for Bayesian Multinet Classifiers, a more general classifier.

  11. Problems of Bayesian Multinet Classifiers • Split the pre-classified dataset into sub-dataset D1 for Class 1 and sub-dataset D2 for Class 2 • Estimate the distribution P1 to approximate D1 accurately, and the distribution P2 to approximate D2 accurately • Use Bayes' rule to perform classification • Comment: this framework discards the divergence information between classes.
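The per-class distributions in this framework are Chow-Liu trees, per the title. The following sketch of the classical Chow-Liu construction is illustrative and not taken from the slides: it estimates pairwise mutual information from one class's sub-dataset and keeps a maximum-weight spanning tree; the toy data and names are made up.

```python
import numpy as np
from itertools import combinations

def mutual_info(a, b):
    """Empirical mutual information of two discrete data columns."""
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            pab = np.mean((a == va) & (b == vb))
            pa, pb = np.mean(a == va), np.mean(b == vb)
            if pab > 0:
                mi += pab * np.log(pab / (pa * pb))
    return mi

def chow_liu_edges(D):
    """Maximum-weight spanning tree over MI weights (Kruskal's rule)."""
    d = D.shape[1]
    weighted = sorted(((mutual_info(D[:, i], D[:, j]), i, j)
                       for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))          # union-find forest
    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u
    edges = []
    for w, i, j in weighted:
        ri, rj = find(i), find(j)
        if ri != rj:                 # adding the edge keeps it a tree
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Toy binary data: column 2 copies column 0, column 1 is independent,
# so the edge (0, 2) carries maximal mutual information.
rng = np.random.default_rng(0)
D = rng.integers(0, 2, size=(200, 3))
D[:, 2] = D[:, 0]
edges = chow_liu_edges(D)
```

Running this on one sub-dataset per class yields the per-class tree distributions P1 and P2; the slides' point is that fitting each tree to its own sub-dataset alone ignores how far apart the two classes are.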

  12. Our Training Scheme

  13. Mathematical Explanation • Bayesian Multinet Classifiers (BMC) • Discriminative Training of BMC

  14. Mathematical Explanation

  15. Finding P1 and P2

  16. Finding P1 and P2

  17. Experimental Setup • Datasets • 2 benchmark datasets from the UCI machine learning repository • Tic-tac-toe • Vote • Experimental environment • Platform: Windows 2000 • Development tool: Matlab 6.5

  18. Error Rate

  19. Convergence Performance

  20. Conclusion • A discriminative training procedure for generative Bayesian Multinet Classifiers is presented • This approach significantly improves the recognition rate on two benchmark datasets • A theoretical exploration of the convergence performance of this approach is underway.
