Boosting
Shai Raffaeli
Seminar in mathematical biology
http://www1.cs.columbia.edu/~freund/
Toy Example
• Computer receives a telephone call
• Measures the pitch of the voice
• Decides the gender of the caller
[Figure: a human voice classified as male or female]
Generative modeling
[Figure: probability as a function of voice pitch; two class-conditional Gaussians with parameters mean1, var1 and mean2, var2]
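To make the generative picture concrete, here is a minimal sketch (all pitch values and names below are illustrative, not from the talk): fit one Gaussian per class, then classify a new caller by which class assigns the pitch higher likelihood.

```python
import numpy as np

# Hypothetical pitch samples (Hz) for each class.
male_pitch = np.array([110.0, 120.0, 125.0, 135.0, 140.0])
female_pitch = np.array([190.0, 200.0, 210.0, 215.0, 225.0])

# Generative modeling: fit a Gaussian (mean, var) to each class.
mean1, var1 = male_pitch.mean(), male_pitch.var()
mean2, var2 = female_pitch.mean(), female_pitch.var()

def log_likelihood(x, mean, var):
    """Log density of N(mean, var) at x, dropping the constant term."""
    return -0.5 * np.log(var) - (x - mean) ** 2 / (2 * var)

def classify(pitch):
    """Predict the class whose Gaussian gives the pitch higher likelihood."""
    if log_likelihood(pitch, mean1, var1) > log_likelihood(pitch, mean2, var2):
        return "male"
    return "female"

print(classify(130.0))  # -> 'male'
print(classify(205.0))  # -> 'female'
```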
Discriminative approach
[Figure: number of mistakes as a function of the decision threshold on voice pitch]
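For contrast, a minimal discriminative sketch on the same kind of hypothetical data: skip density estimation entirely and directly search for the pitch threshold that makes the fewest training mistakes.

```python
import numpy as np

pitches = np.array([110, 120, 135, 140, 190, 200, 215, 225])  # hypothetical, sorted
labels = np.array([-1, -1, -1, -1, +1, +1, +1, +1])           # -1 = male, +1 = female

# Candidate thresholds: midpoints between adjacent pitches. For each, count
# the mistakes of the rule "predict +1 if pitch > threshold".
candidates = (pitches[:-1] + pitches[1:]) / 2.0
mistakes = [np.sum(((pitches > t).astype(int) * 2 - 1) != labels) for t in candidates]
best = candidates[int(np.argmin(mistakes))]
print(best)  # threshold with the fewest training mistakes (165.0 here)
```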
Ill-behaved data
[Figure: probability and number of mistakes as a function of voice pitch, for data where the fitted Gaussians (mean1, mean2) describe the classes poorly]
Traditional Statistics vs. Machine Learning
[Diagram: Data → (Statistics) → Estimated world state → (Decision Theory) → Actions; Machine Learning maps Data directly to Predictions / Actions]
A weighted training set
[Figure: feature vectors with binary labels {-1,+1} and positive weights]
A weak learner
Weighted training set: (x1,y1,w1), (x2,y2,w2), …, (xn,yn,wn)
• Instances x1, x2, x3, …, xn — feature vectors
• Labels y1, y2, y3, …, yn — binary, in {-1,+1}
• Weights w1, …, wn — non-negative, summing to 1
A weak learner takes the weighted training set and returns a weak rule h.
The weak requirement: h must do slightly better than random guessing on the weighted set, i.e. its weighted error $\sum_i w_i\,[h(x_i) \neq y_i] \le \tfrac{1}{2} - \gamma$ for some advantage $\gamma > 0$. A minimal learner satisfying this interface is sketched below.
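A common choice of weak learner is a decision stump. The following sketch (my own illustration, not code from the talk) exhaustively searches feature/threshold/sign combinations for the smallest weighted training error:

```python
import numpy as np

def stump_weak_learner(X, y, w):
    """Find the stump h(x) = s if x[j] > t else -s that minimizes the
    weighted error sum_i w_i * [h(x_i) != y_i]."""
    n, d = X.shape
    best = (np.inf, None)  # (weighted error, (feature, threshold, sign))
    for j in range(d):
        for t in np.unique(X[:, j]):
            for s in (+1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = np.sum(w * (pred != y))
                if err < best[0]:
                    best = (err, (j, t, s))
    return best  # weak requirement: err should be at most 1/2 - gamma
```

On any weighted set where some stump has an advantage γ > 0, the returned rule meets the weak requirement.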
The boosting process
[Diagram: start from the uniform weighting (x1,y1,1/n), …, (xn,yn,1/n); at each round t, the weak learner is run on the current weighted set (x1,y1,w1), …, (xn,yn,wn) to produce a weak rule h_t, and the weights are updated; repeating yields h1, h2, …, hT]
Final rule: Sign[ a1 h1(x) + a2 h2(x) + … + aT hT(x) ]
Adaboost
• Binary labels y = -1,+1
• $\mathrm{margin}(x,y) = y \sum_t \alpha_t h_t(x)$
• $P(x,y) = \frac{1}{Z} \exp(-\mathrm{margin}(x,y))$
• Given $h_t$, we choose $\alpha_t$ to minimize $\sum_{(x,y)} \exp(-\mathrm{margin}(x,y))$
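Filling in the minimization step the slide states but does not carry out (standard Adaboost algebra): with unnormalized weights $w_i = e^{-y_i F_{t-1}(x_i)}$, where $F_{t-1} = \sum_{s<t} \alpha_s h_s$, setting the derivative to zero gives

$$
\frac{\partial}{\partial \alpha_t} \sum_i w_i\, e^{-\alpha_t y_i h_t(x_i)}
= -\,e^{-\alpha_t}\!\!\sum_{i:\,y_i = h_t(x_i)}\!\! w_i \;+\; e^{\alpha_t}\!\!\sum_{i:\,y_i \neq h_t(x_i)}\!\! w_i = 0
\quad\Longrightarrow\quad
\alpha_t = \frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t},
$$

where $\epsilon_t = \sum_{i:\,y_i \neq h_t(x_i)} w_i \big/ \sum_i w_i$ is the weighted error of $h_t$.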
Adaboost (Freund & Schapire, 1997)
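A minimal sketch of the 1997 algorithm in the notation above (not the authors' code), wired to the illustrative stump learner from the earlier sketch:

```python
import numpy as np

def adaboost(X, y, T, weak_learner):
    """Adaboost (Freund & Schapire 1997): returns (alphas, rules)."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # start from the uniform weighting
    alphas, rules = [], []
    for _ in range(T):
        err, (j, t, s) = weak_learner(X, y, w)
        eps = max(err, 1e-12)            # guard against a perfect rule
        alpha = 0.5 * np.log((1 - eps) / eps)
        pred = np.where(X[:, j] > t, s, -s)
        w = w * np.exp(-alpha * y * pred)  # up-weight the mistakes
        w = w / w.sum()                    # renormalize to sum to 1
        alphas.append(alpha)
        rules.append((j, t, s))
    return alphas, rules

def predict(X, alphas, rules):
    """Final rule: sign of the alpha-weighted vote of the weak rules."""
    F = np.zeros(len(X))
    for alpha, (j, t, s) in zip(alphas, rules):
        F += alpha * np.where(X[:, j] > t, s, -s)
    return np.sign(F)
```

Usage: `alphas, rules = adaboost(X, y, T=50, weak_learner=stump_weak_learner)`, then `predict(X, alphas, rules)`.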
Main property of adaboost
• If the advantages of the weak rules over random guessing are $\gamma_1, \gamma_2, \dots, \gamma_T$, then the in-sample error of the final rule is at most $\prod_{t=1}^{T} \sqrt{1 - 4\gamma_t^2} \;\le\; \exp\!\bigl(-2\sum_{t=1}^{T} \gamma_t^2\bigr)$.
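A quick worked instance of the bound, with assumed numbers: if every round achieves the same advantage $\gamma_t = 0.1$, then after $T = 100$ rounds

$$
\prod_{t=1}^{T}\sqrt{1-4\gamma_t^2} \;\le\; \exp\Bigl(-2\sum_{t=1}^{T}\gamma_t^2\Bigr)
= \exp(-2 \cdot 100 \cdot 0.01) = e^{-2} \approx 0.135,
$$

so consistently weak rules still drive the training error down exponentially fast.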
Adaboost as gradient descent • Discriminator class: a linear discriminator in the space of “weak hypotheses” • Original goal: find the hyperplane with the smallest number of mistakes • Known to be an NP-hard problem (no known algorithm runs in time polynomial in d, the dimension of the space) • Computational method: use the exponential loss as a surrogate and perform gradient descent.
Margins view
[Figure: examples (+/−) projected onto the weight vector w; Prediction = sign(w·x), Margin = y(w·x); cumulative number of examples as a function of margin, with mistakes at negative margin and correct predictions at positive margin]
Adaboost et al.
[Figure: loss as a function of margin; the 0-1 loss compared with the surrogate losses of Adaboost ($e^{-\mathrm{margin}}$), Logitboost, and Brownboost; mistakes correspond to negative margin, correct predictions to positive margin]
One coordinate at a time • Adaboost performs gradient descent on the exponential loss • Adds one coordinate (“weak learner”) at each iteration. • Weak learning in binary classification = slightly better than random guessing. • Weak learning in regression – unclear. • Uses example weights to communicate the gradient direction to the weak learner (see the derivation below). • Solves a computational problem.
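The sense in which the weights carry the gradient (a one-line derivation, standard but not spelled out on the slide): differentiating the exponential loss with respect to the current score $F(x_i)$ gives

$$
\frac{\partial}{\partial F(x_i)} \sum_j e^{-y_j F(x_j)} = -\,y_i\, e^{-y_i F(x_i)} \;\propto\; -\,y_i\, w_i ,
$$

so asking the weak learner to maximize the weighted correlation $\sum_i w_i y_i h(x_i)$ is exactly asking for the coordinate of steepest descent.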
What is a good weak learner? • The set of weak rules (features) should be flexible enough to be (weakly) correlated with most conceivable relations between feature vector and label. • Small enough to allow exhaustive search for the minimal weighted training error. • Small enough to avoid over-fitting. • Should be able to calculate predicted label very efficiently. • Rules can be “specialists” – predict only on a small subset of the input space and abstain from predicting on the rest (output 0).
Decision Trees
[Figure: a decision tree on features X and Y, with root test X>3 (no → predict -1; yes → test Y>5: no → -1, yes → +1), shown next to the corresponding axis-parallel partition of the (X,Y) plane at X=3 and Y=5]
Decision tree as a sum
[Figure: the same tree rewritten as a sum of real-valued node contributions (e.g. -0.2, +0.1, -0.1, +0.2, -0.3) accumulated along the path through the tests X>3 and Y>5; the prediction is the sign of the total]
An alternating decision tree
[Figure: the same structure with an additional splitter node Y<1 (contributions +0.7 / 0.0) attached under the root; an instance accumulates the contributions of every node whose precondition it satisfies, and the prediction is the sign of the sum]
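A sketch of how an alternating decision tree scores an instance (the node tests and scores below are illustrative placeholders, not the exact values from the figure): every instance starts from the root score, adds the contribution of each decision node whose precondition it satisfies, and the prediction is the sign of the total.

```python
def adt_score(x, root_score, nodes):
    """nodes: list of (precondition, test, score_if_true, score_if_false).
    Preconditions and tests are functions of the instance x."""
    total = root_score
    for precondition, test, score_true, score_false in nodes:
        if precondition(x):  # nodes whose precondition fails contribute nothing
            total += score_true if test(x) else score_false
    return total

# Illustrative ADT with assumed numbers, roughly in the shape of the figure:
nodes = [
    (lambda x: True,        lambda x: x["X"] > 3, +0.1, -0.1),
    (lambda x: x["X"] > 3,  lambda x: x["Y"] > 5, +0.2, -0.3),
    (lambda x: True,        lambda x: x["Y"] < 1, +0.7,  0.0),
]
x = {"X": 4.0, "Y": 6.0}
prediction = 1 if adt_score(x, root_score=0.5, nodes=nodes) > 0 else -1
```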
Example: Medical Diagnostics • Cleve dataset from UC Irvine database. • Heart disease diagnostics (+1=healthy,-1=sick) • 13 features from tests (real valued and discrete). • 303 instances.
Curious phenomenon
[Figure: boosting decision trees]
Using <10,000 training examples we fit >2,000,000 parameters — yet the combined classifier does not over-fit.
Explanation using margins
[Figure: 0-1 loss as a function of margin, overlaid with the margin distribution of the training examples; as boosting proceeds, the distribution shifts away from zero — no examples with small margins!!]
Theorem (Schapire, Freund, Bartlett & Lee, Annals of Statistics 1998)
For any convex combination $f(x) = \sum_t a_t h_t(x)$ (with $a_t \ge 0$, $\sum_t a_t = 1$) and any threshold $\theta > 0$:
$$
\underbrace{P_{(x,y)\sim D}\bigl[y f(x) \le 0\bigr]}_{\text{probability of mistake}}
\;\le\;
\underbrace{P_{(x,y)\sim S}\bigl[y f(x) \le \theta\bigr]}_{\text{fraction of training examples with small margin}}
\;+\; \tilde{O}\!\left(\sqrt{\frac{d}{m\,\theta^2}}\right)
$$
where $m$ is the size of the training sample and $d$ is the VC dimension of the weak rules. No dependence on the number of weak rules that are combined!!!
Applications of Boosting • Academic research • Applied research • Commercial deployment
Academic research
[Table: % test error rates of boosting compared with other methods on benchmark datasets]
Applied research (Schapire, Singer, Gorin 98) • “AT&T, How may I help you?” • Classify voice requests • Voice -> text -> category • Fourteen categories: area code, AT&T service, billing credit, calling card, collect, competitor, dial assistance, directory, how to dial, person to person, rate, third party, time charge, time
Examples
• “Yes I’d like to place a collect call long distance please” → collect
• “Operator I need to make a call but I need to bill it to my office” → third party
• “Yes I’d like to place a call on my master card please” → calling card
• “I just called a number in Sioux city and I musta rang the wrong number because I got the wrong party and I would like to have that taken off my bill” → billing credit
Weak rules generated by “boostexter”
[Table: each weak rule tests whether a particular word occurs or does not occur in the request, and outputs a score for each category (e.g. collect call, calling card, third party)]
Results
• 7844 training examples (hand transcribed)
• 1000 test examples (hand / machine transcribed)
• Accuracy with 20% rejected:
  • Machine transcribed: 75%
  • Hand transcribed: 90%
Commercial deployment (Freund, Mason, Rogers, Pregibon, Cortes 2000) • Distinguish business/residence customers • Using statistics from call-detail records • Alternating decision trees • Similar to boosting decision trees, but more flexible • Combines very simple rules • Can over-fit; cross-validation used to stop
Summary • Boosting is a computational method for learning accurate classifiers • Resistance to over-fit explained by margins • Underlying explanation – large “neighborhoods” of good classifiers • Boosting has been applied successfully to a variety of classification problems
Gene Regulation
• Regulatory proteins bind to the non-coding regulatory sequence of a gene to control the rate of transcription
[Figure: regulators bind to binding sites on the DNA; the mRNA transcript is the measurable quantity]
From mRNA to Protein
[Figure: the mRNA transcript passes through the nucleus wall; a ribosome translates it into a protein sequence, which undergoes protein folding]
Transcription Factors
[Figure: a protein acting as a regulator of transcription]
• Microarrays measure mRNA transcript expression levels for all of the ~6000 yeast genes at once.
• Very noisy data
• Rough time slice over all compartments of many cells.
• Protein expression not observed
Partial “Parts List” for Yeast
Many known and putative:
• Transcription factors
• Signaling molecules that activate transcription factors
• Known and putative binding site “motifs”
• In yeast, regulatory sequence = 500 bp upstream region
GeneClass: Problem Formulation
M. Middendorf, A. Kundaje, C. Wiggins, Y. Freund, C. Leslie. Predicting Genetic Regulatory Response Using Classification. ISMB 2004.
• Predict target gene regulatory response from regulator activity and binding site data
[Figure: microarray image; “parent” gene expression R1, R2, R3, R4, …, Rp; target gene expression G1, G2, G3, G4, …, Gt; binding sites (motifs) in each target’s upstream region]
Role of quantization
• Quantizing expression into three classes {+1, -1, 0} reduces noise but maintains most of the signal.
• Weighting +1/-1 examples linearly with expression level performs slightly better.
Problem setup
• Data point = target gene × microarray
• Input features:
  • Parent state {-1,0,+1}
  • Motif presence {0,1}
• Predict output:
  • Target gene {-1,+1}
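A sketch of how one data point might be encoded under this setup (the gene names, motifs, and the quantization threshold below are hypothetical illustrations, not values from the paper):

```python
def quantize(expression, threshold=0.5):
    """Quantize a log-expression value into {-1, 0, +1}; threshold is assumed."""
    if expression > threshold:
        return +1
    if expression < -threshold:
        return -1
    return 0

# One data point = (target gene, microarray experiment).
parent_expression = {"R1": 1.2, "R2": -0.1, "R3": -0.9}  # hypothetical regulators
motif_presence = {"M1": 1, "M2": 0}   # motifs in the target's upstream region
target_expression = 0.8

# Input features: quantized parent states plus binary motif indicators.
features = [quantize(e) for e in parent_expression.values()] + list(motif_presence.values())
# Output label: quantized target state; only +1/-1 examples are predicted.
label = quantize(target_expression)
print(features, label)  # -> [1, 0, -1, 1, 0] 1
```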