Co-operative Training in Classifier Ensembles Rozita Dara, PAMI Lab, University of Waterloo
Outline • Introduction • Sharing Training Resources • Sharing Training Patterns • Sharing Training Algorithms • Sharing Training Information • Sharing Training Information: An Algorithm • Experimental Study • Discussion and Conclusions
Introduction • Multiple Classifier Systems provide • Improved performance • Better reliability and generalization • Motivations for Multiple Classifier Systems include • Empirical observation • Problems that decompose naturally across multiple sensors • Avoiding commitment to arbitrary initial conditions or parameters
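The simplest way a Multiple Classifier System combines its members is majority voting over their label predictions. A minimal sketch (the function and example labels are illustrative, not from the slides):

```python
from collections import Counter

def majority_vote(predictions):
    # `predictions`: one list of labels per classifier, all the same length.
    combined = []
    for labels in zip(*predictions):
        # Most frequent label across classifiers wins for each pattern.
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three hypothetical classifiers that disagree on some patterns:
preds = [
    ["a", "b", "a", "c"],
    ["a", "b", "b", "c"],
    ["b", "b", "a", "a"],
]
print(majority_vote(preds))  # ['a', 'b', 'a', 'c']
```

Even though each individual classifier makes an error, the vote recovers the label the majority agrees on.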
Introduction (cntd…) “Combining identical classifiers will not lead to improved performance.” • Importance of creating diverse classifiers • How does the amount of “sharing” between classifiers affect performance?
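Diversity can be quantified; one common statistic (an assumption here, the slides do not name a specific measure) is the mean pairwise disagreement, the average fraction of patterns on which two members predict different labels:

```python
from itertools import combinations

def disagreement(p, q):
    # Fraction of patterns on which two classifiers predict different labels.
    return sum(a != b for a, b in zip(p, q)) / len(p)

def mean_pairwise_disagreement(all_preds):
    # Average disagreement over all classifier pairs in the ensemble.
    pairs = list(combinations(range(len(all_preds)), 2))
    return sum(disagreement(all_preds[i], all_preds[j]) for i, j in pairs) / len(pairs)

preds = [[0, 1, 0, 1], [0, 1, 1, 1], [1, 1, 0, 0]]
print(mean_pairwise_disagreement(preds))  # 0.5
```

Identical classifiers score 0 on this measure, matching the quoted observation that combining them cannot help.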
Sharing Training Resources • A measure of the degree of co-operation between classifiers • Sharing Training Patterns • Sharing Training Algorithms • Sharing Training Information
Sharing Training Patterns
Sharing Training Algorithms
Sharing Training Information
Training • Training each component independently • Optimizing individual components may not improve the overall ensemble • Collinearity: high correlation between classifiers • Components may be under-trained or over-trained
Training (cntd…) • Adaptive training • Selective: reduces correlation between components • Focused: re-training focuses on misclassified patterns • Efficient: determines the duration of training
Adaptive Training: Main Loop • Share training information between members of the ensemble • Incremental learning • Evaluate training to determine the re-training set
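The main loop above can be sketched as follows. `ToyMember` is a deliberately simple stand-in for a trainable component (it is not the slides' BP networks), and `rebuild_set` is a hypothetical callback standing in for the evaluation/sharing step:

```python
from collections import Counter

class ToyMember:
    """Stand-in for an incrementally trainable component:
    it predicts the most frequent label seen so far."""
    def __init__(self):
        self.counts = Counter()
    def partial_fit(self, data):          # incremental learning step
        self.counts.update(label for _, label in data)
    def predict(self, x):
        return self.counts.most_common(1)[0][0]

def adaptive_main_loop(members, train_sets, rebuild_set, rounds=3):
    # One round = incremental training of every member, then rebuilding each
    # member's training set from information shared across the ensemble.
    for _ in range(rounds):
        for m, data in zip(members, train_sets):
            m.partial_fit(data)
        train_sets = [rebuild_set(i, members, train_sets[i])
                      for i in range(len(members))]
    return members

members = [ToyMember(), ToyMember()]
sets = [[(0, "a"), (1, "a")], [(2, "b"), (3, "b")]]
adaptive_main_loop(members, sets, lambda i, ms, d: d)  # trivial rebuild: keep the set
print([m.predict(0) for m in members])  # ['a', 'b']
```

In the real algorithm, `rebuild_set` would select the re-training set from evaluation results, as described on the data-selection slide.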
Adaptive Training: Training • Save a classifier if it performs well on the evaluation set • Determine when to terminate training for each module
Adaptive Training: Evaluation • Train aggregation modules • Evaluate training sets for each classifier • Compose new training data
Adaptive Training: Data Selection • New training data are composed by concatenating • Error_i: the misclassified entries of the training data for classifier i • Correct_i: a random choice of R*(P*δ_i) correctly classified entries of the training data for classifier i
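A minimal sketch of this selection step, assuming `predict` is classifier i's prediction function and `r`, `p`, `delta_i` correspond to R, P, and δ_i (the rounding of R*(P*δ_i) to an integer sample size is an assumption):

```python
import random

def compose_retraining_set(predict, data, r, p, delta_i, seed=0):
    # Error_i: entries the classifier currently misclassifies (all kept).
    errors = [(x, y) for x, y in data if predict(x) != y]
    # Correct_i: a random sample of R*(P*delta_i) correctly classified entries.
    correct = [(x, y) for x, y in data if predict(x) == y]
    k = min(len(correct), int(round(r * p * delta_i)))
    return errors + random.Random(seed).sample(correct, k)

data = [(i, i % 2) for i in range(10)]
predict = lambda x: 0              # a deliberately poor classifier
new_set = compose_retraining_set(predict, data, r=0.5, p=10, delta_i=0.4)
print(len(new_set))  # 7  (5 errors + 2 sampled correct entries)
```

Keeping every error while subsampling the correct entries is what makes the re-training "focused" on misclassified patterns.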
Results • Five one-hidden-layer BP classifiers • Training used partially disjoint data sets • No optimization was performed on the trained networks • The same network parameters are maintained across all trained classifiers • Three data sets • 20-Class Gaussian • Satimages
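"Partially disjoint" data sets can be produced in several ways; the slides do not specify the scheme, so the following is one illustrative construction: shuffle, cut into equal blocks, and let each subset borrow a small fraction of patterns from its neighbour:

```python
import random

def partially_disjoint_subsets(data, n_subsets=5, overlap=0.2, seed=0):
    # Shuffle, cut into n_subsets equal blocks, then let each subset share
    # an `overlap` fraction of its block size with the next block.
    rng = random.Random(seed)
    data = list(data)
    rng.shuffle(data)
    block = len(data) // n_subsets
    blocks = [data[i * block:(i + 1) * block] for i in range(n_subsets)]
    shared = int(block * overlap)
    return [blocks[i] + blocks[(i + 1) % n_subsets][:shared]
            for i in range(n_subsets)]

subsets = partially_disjoint_subsets(range(50))
print([len(s) for s in subsets])  # [12, 12, 12, 12, 12]
```

Each subset mostly holds its own patterns but overlaps slightly with the next, so the five classifiers see different yet not fully independent data.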
Results (cntd…)
Conclusions • Exchanging information during training allows for an informed fusion process • Enhances diversity amongst classifiers • Algorithms that share training information can improve overall classification accuracy