On Discriminative vs. Generative Classifiers: Naïve Bayes Presenter: Seung Hwan Bae
Andrew Y. Ng and Michael I. Jordan, Neural Information Processing Systems (NIPS), 2001 (slides adapted from Ke Chen, University of Manchester, and YangQiu Song, MSRA) Total Citations: 831
Generative vs. Discriminative Classifiers • Training classifiers involves estimating f: X->Y, or P(Y|X) • X: training data, Y: labels • Discriminative classifiers: • Assume some functional form for P(Y|X) • Estimate parameters of P(Y|X) directly from training data • Generative classifiers (also called 'informative' by Rubinstein & Hastie): • Assume some functional form for P(X|Y), P(Y) • Estimate parameters of P(X|Y), P(Y) directly from training data • Use Bayes rule to calculate P(Y|X)
Generative Model • [Figure: one probabilistic model per class generates feature values such as Color, Size, Texture, Weight, …]
Discriminative Model • [Figure: a single model, e.g., logistic regression, maps features such as Color, Size, Texture, Weight, … directly to a class label]
Comparison • Generative models • Assume some functional form for P(X|Y), P(Y) • Estimate parameters of P(X|Y), P(Y) directly from training data • Use Bayes rule to calculate P(Y|X=x) • Discriminative models • Directly assume some functional form for P(Y|X) • Estimate parameters of P(Y|X) directly from training data • [Figure: graphical models — Naïve Bayes (generative, arrows Y → X1, X2) vs. logistic regression (discriminative, arrows X1, X2 → Y)]
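To make the contrast concrete, here is a minimal sketch (assuming scikit-learn and a small synthetic two-class dataset; none of this is from the original slides) that fits a generative classifier (Gaussian naïve Bayes) and a discriminative one (logistic regression) to the same data:

```python
# Minimal sketch: generative (Gaussian Naive Bayes) vs. discriminative (logistic
# regression) on the same toy data. Dataset and library choice are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two Gaussian classes in a 2-D feature space
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

gen = GaussianNB().fit(X, y)            # models P(X|Y) and P(Y), applies Bayes rule
disc = LogisticRegression().fit(X, y)   # models P(Y|X) directly

x_new = np.array([[1.0, 1.0]])
print("Naive Bayes  P(Y|x):", gen.predict_proba(x_new))
print("Logistic reg P(Y|x):", disc.predict_proba(x_new))
```

Both classifiers end up estimating P(Y|X), but the generative one gets there indirectly through P(X|Y) and P(Y).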
Probability Basics • Prior, conditional and joint probability for random variables • Prior probability: P(X) • Conditional probability: P(X1|X2), P(X2|X1) • Joint probability: X = (X1, X2), P(X) = P(X1, X2) • Relationship: P(X1, X2) = P(X2|X1)P(X1) = P(X1|X2)P(X2) • Independence: P(X2|X1) = P(X2), P(X1|X2) = P(X1), P(X1, X2) = P(X1)P(X2) • Bayesian rule: P(C|X) = P(X|C)P(C) / P(X), i.e., posterior = likelihood × prior / evidence
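A small numeric check of the Bayesian rule (the numbers below are illustrative, not from the slides):

```python
# Illustrative check of Bayes rule: P(C|X) = P(X|C) * P(C) / P(X).
p_c = 0.3                     # prior P(C)
p_x_given_c = 0.8             # likelihood P(X|C)
p_x_given_not_c = 0.1         # P(X | not C)
p_x = p_x_given_c * p_c + p_x_given_not_c * (1 - p_c)   # evidence P(X)

posterior = p_x_given_c * p_c / p_x
print(f"P(C|X) = {posterior:.3f}")   # 0.24 / 0.31 ~= 0.774
```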
Probabilistic Classification • Establishing a probabilistic model for classification • Discriminative model: P(C|X), C = c1, …, cL, X = (X1, …, Xn) • [Figure: a single discriminative probabilistic classifier takes x = (x1, …, xn) as input and outputs P(c1|x), …, P(cL|x)]
Probabilistic Classification • Establishing a probabilistic model for classification (cont.) • Generative model: P(X|C), C = c1, …, cL, X = (X1, …, Xn) • [Figure: one generative probabilistic model per class — class 1, class 2, …, class L — each taking x = (x1, …, xn) and outputting P(x|ci)]
Probabilistic Classification • MAP classification rule • MAP: Maximum A Posteriori • Assign x to c* if P(C = c*|X = x) > P(C = c|X = x), c ≠ c*, c = c1, …, cL • Generative classification with the MAP rule • Apply the Bayesian rule to convert the class-conditional probabilities into posterior probabilities: P(C = ci|X = x) = P(X = x|C = ci)P(C = ci) / P(X = x) ∝ P(X = x|C = ci)P(C = ci), i = 1, …, L • Then apply the MAP rule
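A minimal sketch of the MAP rule (the class names and probability values below are illustrative, not taken from the slides): pick the class with the largest unnormalized posterior P(x|c)P(c).

```python
# Illustrative MAP rule: choose the class maximizing P(x|c) * P(c).
# The evidence P(x) is the same for every class, so it can be dropped.
def map_classify(likelihoods, priors):
    """likelihoods: dict class -> P(x|c); priors: dict class -> P(c)."""
    return max(priors, key=lambda c: likelihoods[c] * priors[c])

print(map_classify({"yes": 0.02, "no": 0.05}, {"yes": 9 / 14, "no": 5 / 14}))  # -> "no"
```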
Naïve Bayes • Bayes classification: P(C|X) ∝ P(X|C)P(C) = P(X1, …, Xn|C)P(C) • Difficulty: learning the joint probability P(X1, …, Xn|C) • If the number of features n is large, or a feature can take on a large number of values, basing such a model on probability tables is infeasible.
Naïve Bayes • Naïve Bayes classification • Assume that all input attributes are conditionally independent given the class: P(X1, …, Xn|C) = P(X1|C) P(X2|C) ··· P(Xn|C) • MAP classification rule: assign x' = (x1', …, xn') to c* if [P(x1'|c*) ··· P(xn'|c*)] P(c*) > [P(x1'|c) ··· P(xn'|c)] P(c), c ≠ c*, c = c1, …, cL
Naïve Bayes • Naïve Bayes algorithm (for discrete input attributes) • Learning phase: given a training set S, for each target value ci (i = 1, …, L) estimate P̂(C = ci) from S; for every attribute value xjk of each attribute Xj (j = 1, …, n; k = 1, …, Nj) estimate P̂(Xj = xjk|C = ci) from S • Output: conditional probability tables; for Xj, Nj × L elements • Test phase: given an unknown instance X' = (a1', …, an'), look up the tables and assign the label c* to X' if [P̂(a1'|c*) ··· P̂(an'|c*)] P̂(c*) > [P̂(a1'|c) ··· P̂(an'|c)] P̂(c), c ≠ c*, c = c1, …, cL
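A minimal sketch of the learning and test phases for discrete attributes (pure Python; the helper names and data layout are illustrative assumptions, not part of the slides):

```python
# Sketch of discrete Naive Bayes: build the probability tables by counting,
# then score a new instance with the product of table lookups times the prior.
from collections import Counter, defaultdict

def train_nb(examples, labels):
    """examples: list of attribute tuples; labels: list of class values."""
    n = len(labels)
    prior = {c: cnt / n for c, cnt in Counter(labels).items()}
    cond = defaultdict(lambda: defaultdict(Counter))   # cond[class][attr_index][value]
    for x, c in zip(examples, labels):
        for j, v in enumerate(x):
            cond[c][j][v] += 1
    return prior, cond

def predict_nb(x_new, prior, cond):
    scores = {}
    for c in prior:
        n_c = sum(cond[c][0].values())                 # number of class-c examples
        score = prior[c]
        for j, v in enumerate(x_new):
            score *= cond[c][j][v] / n_c               # table lookup P^(xj | c)
        scores[c] = score
    return max(scores, key=scores.get), scores
```

Applied to the Play Tennis data on the next slides, this counting procedure would reproduce the 9/14 and 5/14 priors and the conditional tables used in the worked example.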
Example • Play Tennis • [Table: the 14-example training set with attributes Outlook, Temperature, Humidity, Wind and label PlayTennis]
Example • Learning phase: estimate the prior and the conditional probability tables from the 14 training examples • P(Play=Yes) = 9/14, P(Play=No) = 5/14 • [Tables: P(Outlook|Play), P(Temperature|Play), P(Humidity|Play), P(Wind|Play), estimated by counting]
Example • Test phase • Given a new instance x' = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong) • Look up the tables: P(Outlook=Sunny|Play=Yes) = 2/9, P(Temperature=Cool|Play=Yes) = 3/9, P(Humidity=High|Play=Yes) = 3/9, P(Wind=Strong|Play=Yes) = 3/9, P(Play=Yes) = 9/14; P(Outlook=Sunny|Play=No) = 3/5, P(Temperature=Cool|Play=No) = 1/5, P(Humidity=High|Play=No) = 4/5, P(Wind=Strong|Play=No) = 3/5, P(Play=No) = 5/14 • MAP rule: P(Yes|x') ∝ [P(Sunny|Yes)P(Cool|Yes)P(High|Yes)P(Strong|Yes)]P(Play=Yes) = 0.0053; P(No|x') ∝ [P(Sunny|No)P(Cool|No)P(High|No)P(Strong|No)]P(Play=No) = 0.0206 • Since P(Yes|x') < P(No|x'), we label x' as "No".
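The two scores can be checked with a few lines of arithmetic (values copied from the tables above):

```python
# Reproduce the two unnormalized posteriors from the Play Tennis example.
p_yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)
p_no  = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)
print(f"P(Yes|x') ~ {p_yes:.4f}, P(No|x') ~ {p_no:.4f}")  # ~0.0053 vs ~0.0206
print("Predicted label:", "Yes" if p_yes > p_no else "No")
```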
Relevant Issues • Violation of the independence assumption • For many real-world tasks, P(X1, …, Xn|C) ≠ P(X1|C) ··· P(Xn|C) • Nevertheless, naïve Bayes works surprisingly well anyway! • Zero conditional probability problem • If no training example of class ci contains the attribute value Xj = ajk, then P̂(Xj = ajk|C = ci) = 0 • In this circumstance, the whole product P̂(x1'|ci) ··· P̂(ajk|ci) ··· P̂(xn'|ci) = 0 during test • As a remedy, conditional probabilities are estimated with an m-estimate (Laplace smoothing): P̂(Xj = ajk|C = ci) = (nc + m·p) / (n + m), where n is the number of training examples with C = ci, nc is the number of those that also have Xj = ajk, p is a prior estimate (usually p = 1/t for t possible values of Xj), and m is a weight (equivalent sample size)
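A minimal sketch of the m-estimate remedy (the function and variable names are illustrative):

```python
# m-estimate / Laplace-smoothed conditional probability P^(Xj = ajk | C = ci).
def m_estimate(n_c, n, t, m=1.0):
    """n_c: class-ci examples with Xj = ajk; n: class-ci examples in total;
    t: number of possible values of Xj; m: equivalent sample size (weight)."""
    p = 1.0 / t                      # uniform prior over the t attribute values
    return (n_c + m * p) / (n + m)

# Even an unseen value (n_c = 0) now gets a small nonzero probability.
print(m_estimate(n_c=0, n=9, t=3))   # -> 0.0333... instead of 0
```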
Relevant Issues • Continuous-valued input attributes • An attribute can take on infinitely many values, so probability tables no longer apply • Conditional probability is modeled with the normal distribution: P̂(Xj|C = ci) = (1 / (√(2π) σji)) exp(−(Xj − μji)² / (2σji²)), where μji and σji are the mean and standard deviation of attribute Xj for class ci • Learning phase: for X = (X1, …, Xn), C = c1, …, cL, output n × L normal distributions and P(C = ci), i = 1, …, L • Test phase: for X' = (X1', …, Xn') • Calculate the conditional probabilities with all the normal distributions • Apply the MAP rule to make a decision
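A short sketch of the Gaussian class-conditional density used above (a standalone helper, not code from the slides):

```python
# Gaussian class-conditional density for one continuous attribute of one class.
import math

def gaussian_pdf(x, mu, sigma):
    """Normal density N(x; mu, sigma)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Learning reduces to computing (mu, sigma) per attribute per class; at test time,
# multiply these densities with the class prior and apply the MAP rule.
print(gaussian_pdf(1.5, mu=1.0, sigma=0.5))
```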
Advantages of Naïve Bayes • Because of the independence assumption, only a small amount of training data is needed to estimate the parameters (the means and variances of the variables) • Only the variances of the variables for each class need to be determined, not the entire covariance matrix • Testing is straightforward: just look up tables or calculate conditional probabilities with the normal distributions
Conclusion • Performance is competitive with most state-of-the-art classifiers, even when the independence assumption is violated • Many successful applications, e.g., spam mail filtering • A good candidate for a base learner in ensemble learning • Apart from classification, naïve Bayes can do more…