Boosting Rong Jin
Bagging

[Diagram: bootstrap samples D1, D2, …, Dk drawn from training set D, each used to train a classifier h1, h2, …, hk]

Inefficiency with Bagging
• Inefficient bootstrap sampling:
  • Every example has an equal chance of being sampled
  • No distinction between "easy" examples and "difficult" examples
• Inefficient model combination:
  • A constant weight for each classifier
  • No distinction between accurate classifiers and inaccurate classifiers
Improve the Efficiency of Bagging
• Better sampling strategy
  • Focus on the examples that are difficult to classify
• Better combination strategy
  • Accurate models should be assigned larger weights
Intuition
• Train Classifier1 on the training examples (x1, y1), (x2, y2), (x3, y3), (x4, y4)
• Train Classifier2 on Classifier1's mistakes: (x1, y1), (x3, y3)
• Train Classifier3 on Classifier2's remaining mistakes: (x1, y1)
• Classifier1 + Classifier2 + Classifier3: no training mistakes!! But it may overfit!! (a minimal sketch of this cascade follows)
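A minimal sketch of this cascade idea, assuming a hypothetical fit(examples) helper that returns a trained classifier callable as h(x); this illustrates the intuition only, not the actual AdaBoost update:

```python
def cascade(examples, fit, n_rounds=3):
    """Train each new classifier only on the previous one's mistakes."""
    classifiers = []
    remaining = list(examples)
    for _ in range(n_rounds):
        if not remaining:
            break                  # no mistakes left: perfect on the training set
        h = fit(remaining)         # hypothetical helper: trains on the current set
        classifiers.append(h)
        # keep only the examples the new classifier still gets wrong
        remaining = [(x, y) for (x, y) in remaining if h(x) != y]
    return classifiers
```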
AdaBoost Example (αt = ln 2)
• Start from the uniform distribution D0 = (1/5, 1/5, 1/5, 1/5, 1/5) over (x1, y1), …, (x5, y5)
• Sample a training set according to D0 and train h1
• Update weights: multiply the weight of each example h1 misclassifies (here x3 and x5) by e^αt = 2 and renormalize, giving D1 = (1/7, 1/7, 2/7, 1/7, 2/7)
• Sample a training set according to D1 and train h2
• Update weights again: h2 misclassifies x3, giving D2 = (1/9, 1/9, 4/9, 1/9, 2/9)
• Repeat: sample, train, update weights, … (the update arithmetic is checked in the sketch below)
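The arithmetic of these updates can be verified with a short sketch; the misclassified indices below (x3 and x5 for h1, then x3 for h2) are our reading of the figure, chosen so the fractions shown on the slide are reproduced:

```python
from math import exp, log

alpha = log(2)                        # alpha_t = ln 2, as in the example
D0 = [1 / 5] * 5                      # uniform weights over (x1,y1)..(x5,y5)

def update(D, mistakes):
    """Multiply misclassified weights by e^alpha = 2, then renormalize."""
    w = [d * (exp(alpha) if i in mistakes else 1.0) for i, d in enumerate(D)]
    z = sum(w)                        # normalization constant
    return [v / z for v in w]

D1 = update(D0, {2, 4})   # h1 wrong on x3, x5 -> (1/7, 1/7, 2/7, 1/7, 2/7)
D2 = update(D1, {2})      # h2 wrong on x3     -> (1/9, 1/9, 4/9, 1/9, 2/9)
```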
How To Choose αt in AdaBoost?
• How do we construct the best distribution Dt+1(i)?
  • Dt+1(i) should be significantly different from Dt(i)
  • Dt+1(i) should create a situation in which classifier ht performs poorly
Optimization View for Choosing αt
• ht(x): x → {1, -1}; a base (weak) classifier
• HT(x): a linear combination of base classifiers
• Goal: minimize the training error
• Approximate the training error with an exponential function (see the bound below)
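Written out, this is the standard exponential-loss bound: with labels yi ∈ {1, -1}, the 0-1 training error is upper-bounded because 1[z ≤ 0] ≤ e^(-z) for all z:

```latex
\[
  H_T(x) = \sum_{t=1}^{T} \alpha_t h_t(x), \qquad
  \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\bigl[y_i H_T(x_i) \le 0\bigr]
  \;\le\; \frac{1}{n}\sum_{i=1}^{n} \exp\bigl(-y_i H_T(x_i)\bigr)
\]
```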
AdaBoost: Greedy Optimization
• Fix HT-1(x), and solve for hT(x) and αT (closed-form solution below)
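Minimizing the exponential loss over αT with hT fixed has the standard closed-form solution, where εT is hT's weighted error under the current distribution DT:

```latex
\[
  \varepsilon_T = \sum_{i=1}^{n} D_T(i)\,\mathbf{1}\bigl[h_T(x_i) \ne y_i\bigr],
  \qquad
  \alpha_T = \frac{1}{2}\,\ln\frac{1-\varepsilon_T}{\varepsilon_T}
\]
```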
Empirical Study of AdaBoost
• AdaBoosting decision trees
  • Generate 50 decision trees by AdaBoost
  • Linearly combine the decision trees using the weights of AdaBoost
• In general:
  • AdaBoost = Bagging > C4.5
  • AdaBoost usually needs fewer classifiers than Bagging
  • (a sketch of a comparable experiment follows)
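A sketch of a comparable experiment using scikit-learn (an assumption; the slide's study used C4.5 trees, and the dataset here is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
models = [
    # note: sklearn's AdaBoost defaults to depth-1 trees (stumps) as base learners
    ("AdaBoost (50 trees)", AdaBoostClassifier(n_estimators=50, random_state=0)),
    ("Bagging (50 trees)", BaggingClassifier(n_estimators=50, random_state=0)),
    ("Single tree", DecisionTreeClassifier(random_state=0)),
]
for name, clf in models:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```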
Bias-Variance Tradeoff for AdaBoost
• AdaBoost can reduce both variance and bias simultaneously

[Chart: bias and variance for a single decision tree, Bagging decision trees, and AdaBoosting decision trees]