
Pattern Classification & Decision Theory



  1. Pattern Classification & Decision Theory

  2. How are we doing on the pass sequence? • Bayesian regression and estimation enable us to track the man in the striped shirt based on labeled data • Can we track the man in the white shirt? Not very well: regression fails to identify that there really are two classes of solution • [Plot: hand-labeled horizontal coordinate t vs. feature x]

  3. Decision theory • We wish to classify an input x as belonging to one of K classes, where class k is denoted Ck • Example: Buffalo digits, 10 classes, 16×16 images, each image x is a 256-dimensional vector, with xm ∈ [0, 1]

  4. Decision theory • We wish to classify an input x as belonging to one of K classes, where class k is denoted Ck • Partition the input space into regions R1, R2, …, RK, so that if x ∈ Rk, our classifier predicts class Ck • How should we choose the partition? • Suppose x is presented with probability p(x) and the distribution over the class labels given x is p(Ck|x) • Then, p(correct) = Σk ∫Rk p(x) p(Ck|x) dx • This is maximized by assigning each x to the region whose class maximizes p(Ck|x)
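
As a concrete illustration of this rule, here is a minimal sketch (the posterior values are made-up numbers, not from the lecture): assigning each input to the class with the largest posterior p(Ck|x) maximizes the expected fraction of correct decisions.

```python
import numpy as np

# Hypothetical class posteriors p(C_k | x) for 4 inputs and K = 3 classes (rows sum to 1).
posteriors = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.3, 0.3, 0.4],
    [0.5, 0.25, 0.25],
])

# Decision rule: assign each x to the class with the largest posterior.
predictions = posteriors.argmax(axis=1)

# Expected probability of a correct decision (inputs assumed equally likely under p(x));
# any other assignment would pick a smaller entry in some row and lower this value.
p_correct = posteriors.max(axis=1).mean()
print(predictions, p_correct)   # [0 1 2 0] 0.55
```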

  5. Three approaches to pattern classification • Discriminative and non-probabilistic: learn a discriminant function f(x), which maps x directly to a class label • Discriminative and probabilistic: for each class k, learn the probability model p(Ck|x), and use this probability to classify a new input x • [Plot: a discriminant function f(x)]

  6. Three approaches to pattern classification • Generative: for each class k, learn the generative probability model p(x|Ck) • Also, learn the class probabilities p(Ck) • Use Bayes’ rule to classify a new input: p(Ck|x) = p(x|Ck) p(Ck) / p(x), where p(x) = Σj p(x|Cj) p(Cj)

  7. Three approaches to pattern classification • Discriminative and non-probabilistic: learn a discriminant function f(x), which maps x directly to a class label • Discriminative and probabilistic: for each class k, learn the probability model p(Ck|x), and use this probability to classify a new input x • [Plot: a discriminant function f(x)]

  8. Can we use regression to learn discriminant functions? • [Plot: a discriminant function f(x)]

  9. Can we use regression to learn discriminant functions? • What do the classification regions look like? • Is there any sense in which squared error is an appropriate cost function for classification? • We should be careful not to interpret integer-valued class labels as ordinal targets for regression • [Plots: two discriminant functions f(x)]

  10. The one-of-K representation • For K > 2 classes, each class is represented by a binary vector t with a single 1 indicating the class: t = (1, 0, 0, …, 0), t = (0, 1, 0, …, 0), … • This gives K regression problems, one per output yk(x) • To classify x, pick the class k with largest yk(x)
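
A minimal sketch of the one-of-K recipe on synthetic data (the data, class means, and bias handling are illustrative assumptions, not from the lecture): one-hot targets, K least-squares fits, then an argmax over the K outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 300, 2, 3

# Synthetic inputs drawn around three illustrative class means.
class_means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
labels = rng.integers(0, K, size=N)
X = rng.normal(size=(N, M)) + class_means[labels]

# One-of-K (one-hot) targets: t_nk = 1 iff training case n belongs to class k.
T = np.eye(K)[labels]

# Append a bias column and solve K independent least-squares regression problems at once.
Xb = np.hstack([X, np.ones((N, 1))])
W, *_ = np.linalg.lstsq(Xb, T, rcond=None)   # shape (M + 1, K)

# Classify by picking the class k with the largest y_k(x).
predictions = (Xb @ W).argmax(axis=1)
print("training accuracy:", (predictions == labels).mean())
```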

  11. Let’s focus on binary classification • Predict target t ∈ {0, 1} from input vector x • Denote the mth input of training case n by xnm

  12. Classification boundary for linear regression • Values of x where y(x) = 0.5 are ambiguous – these form the classification boundary • For these points, Σm wm xm = 0.5 • If x is (M+1)-dimensional, this defines an M-dimensional hyperplane separating the classes

  13. How well does linear regression work? • Works well in some cases, but there are two problems: • Due to linearity, “extreme” x’s cause extreme y(x,w)’s • Due to squared error, extreme y(x,w)’s dominate learning • [Plots: linear regression vs. logistic regression (more later)]

  14. Clipping off extreme y’s • To clip off extremes of y(x), we can use a sigmoid function: y(x) = σ(Σm wm xm), where σ(a) = 1 / (1 + e−a) • Now, squared error won’t penalize extreme x’s • y is now a non-linear function of w, so learning is harder
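
A short sketch of the squashing step, using the same form y(x) = σ(Σm wm xm) as the slide; the weight values and the “extreme” input are arbitrary illustrations.

```python
import numpy as np

def sigmoid(a):
    # Logistic sigmoid: 1 / (1 + exp(-a)); squashes any real activation into (0, 1).
    return 1.0 / (1.0 + np.exp(-a))

w = np.array([0.8, -1.5, 0.3])               # illustrative weights
x_extreme = np.array([100.0, -50.0, 80.0])   # an "extreme" input

a = w @ x_extreme    # the linear activation can be arbitrarily large ...
y = sigmoid(a)       # ... but the output is clipped into (0, 1)
print(a, y)
```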

  15. How the observed error propagates back to the parameters • E(w) = ½ Σn ( tn − σ(Σm wm xnm) )² • The rate of change of E w.r.t. wm is ∂E(w)/∂wm = −Σn ( tn − yn ) yn (1 − yn) xnm, where yn = σ(Σm wm xnm) • Useful fact: σ′(a) = σ(a) (1 − σ(a)) • Compare with linear regression: ∂E(w)/∂wm = −Σn ( tn − yn ) xnm
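
A small numerical check of the gradient on the slide, ∂E/∂wm = −Σn (tn − yn) yn (1 − yn) xnm, against a finite-difference estimate; the data and weights are random placeholders.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))                      # 5 training cases, 3 inputs
t = rng.integers(0, 2, size=5).astype(float)     # binary targets
w = rng.normal(size=3)

def E(w):
    # Squared error with a sigmoid output: E(w) = 1/2 * sum_n (t_n - sigma(w . x_n))^2
    return 0.5 * np.sum((t - sigmoid(X @ w)) ** 2)

# Analytic gradient from the slide.
y = sigmoid(X @ w)
grad = -(X.T @ ((t - y) * y * (1 - y)))

# Central finite differences, one weight at a time.
eps = 1e-6
numeric = np.array([(E(w + eps * np.eye(3)[m]) - E(w - eps * np.eye(3)[m])) / (2 * eps)
                    for m in range(3)])
print(np.allclose(grad, numeric, atol=1e-6))     # True
```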

  16. The effect of clipping • Regression with sigmoid: ∂E(w)/∂wm = −Σn ( tn − yn ) yn (1 − yn) xnm • Linear regression: ∂E(w)/∂wm = −Σn ( tn − yn ) xnm • For these outliers, both (tn − yn) ≈ 0 and yn (1 − yn) ≈ 0, so the outliers won’t hold back improvement of the boundary

  17. Squared error for learning probabilities • If t = 0 and y ≈ 1, y is moderately pulled down (gradient dE/dy = 1) • If t = 0 and y ≈ 0, y is weakly pulled down (gradient dE/dy = 0) • [Plot: E = ½(t − y)² as a function of y, for t = 0] • Problems: being certainly wrong is often undesirable; often, tiny differences between small probabilities count a lot

  18. Three approaches to pattern classification • Discriminative and non-probabilistic: learn a discriminant function f(x), which maps x directly to a class label • Discriminative and probabilistic: for each class k, learn the probability model p(Ck|x), and use this probability to classify a new input x • [Plot: a discriminant function f(x)]

  19. Logistic regression: Binary likelihood • As before, we use y(x) = σ(Σm wm xm), where σ(a) = 1 / (1 + e−a) • Now, use a binary likelihood: p(t|x) = y(x)^t (1 − y(x))^(1−t) • Data log-likelihood: L = Σn [ tn ln σ(Σm wm xnm) + (1 − tn) ln(1 − σ(Σm wm xnm)) ] • Unlike linear regression, L is nonlinear in the w’s, so gradient-based optimizers are needed
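
A sketch of this log-likelihood as code, with a small epsilon guard against log(0) added as an implementation detail (not part of the slide); the data are placeholders.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def log_likelihood(w, X, t, eps=1e-12):
    # L = sum_n [ t_n ln sigma(w . x_n) + (1 - t_n) ln(1 - sigma(w . x_n)) ]
    y = sigmoid(X @ w)
    return np.sum(t * np.log(y + eps) + (1 - t) * np.log(1 - y + eps))

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 2))
t = rng.integers(0, 2, size=6).astype(float)
print(log_likelihood(rng.normal(size=2), X, t))
```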

  20. Binary likelihood for learning probabilities • If t = 0 and y ≈ 1, y is strongly pulled down (gradient dE/dy → ∞) • If t = 0 and y ≈ 0, y is moderately pulled down (gradient dE/dy = 1) • [Plot: E = −ln(1 − y) for t = 0, compared with E = ½(t − y)² for t = 0, as functions of y]

  21. How the observed error propagates back to the parameters • L = Σn [ tn ln σ(Σm wm xnm) + (1 − tn) ln(1 − σ(Σm wm xnm)) ] • The rate of change of L w.r.t. wm is ∂L/∂wm = Σn ( tn − yn ) xnm, where yn = σ(Σm wm xnm) • Compare with sigmoid plus squared error: ∂E(w)/∂wm = −Σn ( tn − yn ) yn (1 − yn) xnm • Compare with linear regression: ∂E(w)/∂wm = −Σn ( tn − yn ) xnm
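
A sketch of gradient-based learning using the gradient above, ∂L/∂wm = Σn (tn − yn) xnm; the synthetic data, learning rate, and iteration count are illustrative choices.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(3)

# Two roughly separable synthetic classes, plus a bias input fixed at 1.
X = np.vstack([rng.normal(loc=-1.0, size=(50, 2)),
               rng.normal(loc=+1.0, size=(50, 2))])
X = np.hstack([X, np.ones((100, 1))])
t = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    y = sigmoid(X @ w)
    grad = X.T @ (t - y)                # dL/dw, summed over training cases
    w += learning_rate * grad / len(t)  # gradient ascent on the log-likelihood

print("training accuracy:", ((sigmoid(X @ w) > 0.5) == t).mean())
```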

  22. How well does logistic regression work? • [Plots: linear regression vs. logistic regression]

  23. Multiclass logistic regression • Create one set of weights per class and define yk(x) = Σm wkm xm • The K-class generalization of the sigmoid function is p(t|x) = exp(Σk tk yk(x)) / Σk exp(yk(x)), which is equivalent to p(Ck|x) = exp(yk(x)) / Σj exp(yj(x)) • Learning: similar to logistic regression (see textbook)
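
A sketch of the K-class posterior p(Ck|x) = exp(yk(x)) / Σj exp(yj(x)); the max-subtraction is a standard numerical-stability trick added here, and the per-class scores are made-up numbers.

```python
import numpy as np

def softmax(scores):
    # scores[k] = y_k(x); subtracting the max does not change the result but avoids overflow.
    scores = scores - scores.max()
    e = np.exp(scores)
    return e / e.sum()

y = np.array([2.0, 0.5, -1.0, 1.5])   # illustrative y_k(x) for K = 4 classes
p = softmax(y)
print(p, p.sum(), p.argmax())         # probabilities, 1.0, predicted class
```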

  24. Three approaches to pattern classification • Generative: for each class k, learn the generative probability model p(x|Ck) • Also, learn the class probabilities p(Ck) • Use Bayes’ rule to classify a new input: p(Ck|x) = p(x|Ck) p(Ck) / p(x), where p(x) = Σj p(x|Cj) p(Cj)

  25. Gaussian generative models • We can assume each element of x is independent and Gaussian, given the class: p(x|Ck) = Πm p(xm|Ck) = Πm N(xm | μkm, σkm²) • [Contour plot of an isotropic Gaussian density: σk1² = σk2², centred at (μk1, μk2)]
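
A sketch of the corresponding class-conditional log-density, ln p(x|Ck) = Σm ln N(xm | μkm, σkm²); the means and variances below are placeholder values.

```python
import numpy as np

def log_gaussian_indep(x, mu, var):
    # Sum of per-dimension Gaussian log-densities:
    # sum_m [ -0.5 * ln(2 * pi * var_m) - (x_m - mu_m)^2 / (2 * var_m) ]
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var))

mu = np.array([0.2, 0.8, 0.5])     # illustrative per-dimension means for one class
var = np.array([0.05, 0.1, 0.2])   # illustrative per-dimension variances
x = np.array([0.25, 0.7, 0.4])
print(log_gaussian_indep(x, mu, var))
```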

  26. Learning a Buffalo digit classifier (5000 training cases) • The generative ML estimates of μkm and σkm² are just the per-class data means and variances • [Images: per-class means; per-class variances (black = low variance, white = high variance)] • The classes are equally frequent, so p(Ck) = 1/10 • To classify a new input x, compute (in the log-domain!) ln p(Ck) + Σm ln N(xm | μkm, σkm²) for each class, and pick the class with the largest value
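
A sketch of the whole procedure on placeholder data (the real Buffalo digit set is not reproduced here): ML estimation is just per-class means and variances, and classification compares ln p(Ck) + Σm ln N(xm | μkm, σkm²) across classes in the log-domain.

```python
import numpy as np

def fit_gaussian_classifier(X, labels, K):
    # ML estimates: per-class means and variances of each input dimension, plus class priors.
    means = np.stack([X[labels == k].mean(axis=0) for k in range(K)])
    varis = np.stack([X[labels == k].var(axis=0) for k in range(K)])
    priors = np.bincount(labels, minlength=K) / len(labels)
    return means, varis, priors

def classify(x, means, varis, priors):
    # Log-domain comparison avoids underflow from products of 256 small densities.
    log_joint = (np.log(priors)
                 - 0.5 * np.sum(np.log(2 * np.pi * varis), axis=1)
                 - 0.5 * np.sum((x - means) ** 2 / varis, axis=1))
    return int(np.argmax(log_joint))

# Placeholder "images": 200 cases, 256 dimensions, 10 equally frequent classes.
rng = np.random.default_rng(4)
labels = np.repeat(np.arange(10), 20)
X = rng.random((200, 256)) * 0.1 + labels[:, None] / 10.0
means, varis, priors = fit_gaussian_classifier(X, labels, 10)
print(classify(X[0], means, varis, priors), labels[0])   # predicted vs. true class
```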

  27. A problem with the ML estimate • Some pixels were constant across all training images within a class, so σ²ML = 0 • This causes numerical problems when evaluating Gaussian densities, but is also an overfitting problem • Common hack: add σ²min to all variances • More principled approaches: regularize σ²; place a prior on σ² and use MAP; place a prior on σ² and use Bayesian learning
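
A one-function sketch of the “common hack” above: add a small σ²min to every ML variance so that pixels that were constant within a class no longer produce zero variances (the value 0.01 matches the next slide).

```python
import numpy as np

def floor_variances(varis, var_min=0.01):
    # sigma^2  ->  sigma^2 + sigma^2_min, so no variance is exactly zero.
    return varis + var_min

ml_variances = np.array([0.0, 0.003, 0.2])   # a constant pixel gives an ML variance of 0
print(floor_variances(ml_variances))          # [0.01  0.013 0.21 ]
```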

  28. Classifying test data (5000 test cases) • Adding σ²min = 0.01 to the variances, we obtain: error rate on training set = 16.00% (std dev .5%); error rate on test set = 16.72% (std dev .5%) • [Plot: training and test error rates vs. log10 σ²min]

  29. Full-covariance Gaussian models • Let x = Ly, where y is isotropic Gaussian and L is an M × M rotation and scale matrix • This generates a full-covariance Gaussian • Defining Σ = (L⁻¹ᵀ L⁻¹)⁻¹, we obtain p(x) = (2π)^(−M/2) |Σ|^(−1/2) exp( −½ (x − μ)ᵀ Σ⁻¹ (x − μ) ), where Σ is the covariance matrix, Σjk = COV(xj, xk), and |Σ| is its determinant
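
A sketch of this construction with a small illustrative L: samples x = Ly of an isotropic Gaussian y have covariance Σ = LLᵀ (equal to (L⁻¹ᵀL⁻¹)⁻¹ above), and the log-density uses the determinant and inverse of Σ.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative rotation-and-scale matrix L, and samples x = L y with y isotropic Gaussian.
L = np.array([[2.0, 0.0],
              [1.0, 0.5]])
y = rng.normal(size=(10000, 2))
x = y @ L.T

Sigma = L @ L.T                          # covariance of x
print(np.cov(x, rowvar=False))           # empirical covariance, close to Sigma

def log_gaussian_full(x, mu, Sigma):
    # ln N(x | mu, Sigma) = -d/2 ln(2 pi) - 1/2 ln|Sigma| - 1/2 (x - mu)^T Sigma^-1 (x - mu)
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return (-0.5 * d * np.log(2 * np.pi)
            - 0.5 * logdet
            - 0.5 * diff @ np.linalg.solve(Sigma, diff))

print(log_gaussian_full(np.zeros(2), np.zeros(2), Sigma))
```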

  30. Generative models easily induce non-linear decision boundaries • The following three-class problem shows how three axis-aligned Gaussians can induce nonlinear decision boundaries

  31. Generative models easily induce non-linear decision boundaries • Two Gaussians can be used to account for “inliers” and “outliers”

  32. How are we doing on the pass sequence? • Bayesian regression and estimation enable us to track the man in the striped shirt based on labeled data • Can we track the man in the white shirt? Not very well: regression fails to identify that there really are two classes of solution • [Plot: hand-labeled horizontal coordinate t vs. feature x]

  33. Using classification to improve tracking • The position of the man in the striped shirt can be used to classify the tracking mode of the man in the white shirt • [Plots: position of man in striped shirt vs. feature; position of man in white shirt vs. feature]

  34. Using classification to improve tracking • xs and ts = feature and position of man in striped shirt • xw and tw = feature and position of man in white shirt • For the man in the white shirt, hand-label two regions (classes C1 and C2) and learn two trackers, p(tw|xw, C1) and p(tw|xw, C2) • [Plots: position of man in white shirt tw vs. feature xw, showing the two trackers; position of man in striped shirt ts vs. feature xs, showing the tracker p(ts|xs) and classifier p(Ck|ts)]

  35. Using classification to improve tracking • The classifier p(Ck|ts) can be obtained using the generative approach, where each class-conditional likelihood p(ts|Ck) is a Gaussian • Note: linear classifiers won’t work • [Plot: position of man in striped shirt ts vs. feature xs, showing p(ts|xs), the class-conditional Gaussians p(ts|Ck), and the classifier p(Ck|ts)]

  36. Questions?

  37. How are we doing on the pass sequence? • We can now track both men, provided we have • Hand-labeled coordinates of both men in 30 frames • Hand-extracted features (stripe detector, white blob detector) • Hand-labeled classes for the white-shirt tracker • We have a framework for how to optimally make decisions and track the men

  38. How are we doing on the pass sequence? • We can now track both men, provided we have • Hand-labeled coordinates of both men in 30 frames • Hand-extracted features (stripe detector, white blob detector) • Hand-labeled classes for the white-shirt tracker • We have a framework for how to optimally make decisions and track the men • But this takes too much time to do by hand!

  39. Lecture 4 Appendix

  40. Binary classification regions for linear regression • R1 is defined by y(x) ≥ 0.5, and vice versa for R2 • Values of x satisfying y(x) = 0.5 are on the decision boundary, which is a D−1 dimensional hyperplane • w specifies the orientation of the decision hyperplane • −w0/||w|| specifies the distance from the hyperplane to the origin • The distance from input x to the hyperplane is y(x)/||w||

  41. K-ary classification regions for linear regression • x ∈ Rk if yk(x) ≥ yj(x) for all j ≠ k • Each resulting classification region is contiguous and has linear boundaries

  42. Fisher’s linear discriminant and least squares • Fisher: viewing y = wᵀx as a projection, pick w to maximize the distance between the means of the data sets, while also minimizing the variances of the data sets • This result is also obtained using linear regression, by setting t = N/N1 for class 1 and t = −N/N2 for class 2, where Nk = # training cases in class k

  43. In what sense is logistic regression linear? • The log-odds can be written as ln[ y(x) / (1 − y(x)) ] = Σm wm xm • Each input contributes linearly to the log-odds

  44. Gaussian likelihoods and logistic regression • For two classes, if their covariance matrices are equal, Σ1 = Σ2 = Σ, we can write the log-odds as ln[ p(C1|x) / p(C2|x) ] = wᵀx + w0, where w = Σ⁻¹(μ1 − μ2) and w0 = −½ μ1ᵀΣ⁻¹μ1 + ½ μ2ᵀΣ⁻¹μ2 + ln[ p(C1) / p(C2) ] • So classifiers using equal-covariance Gaussian generative models are a subset of logistic regression classifiers
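
A numerical sketch of this equivalence (a standard result; the means, shared covariance, and priors below are made-up): with Σ1 = Σ2 = Σ, the generative log-odds equal wᵀx + w0 with w = Σ⁻¹(μ1 − μ2) and w0 as given above.

```python
import numpy as np
from scipy.stats import multivariate_normal

mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 0.5])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
p1, p2 = 0.6, 0.4

# Logistic-regression parameters implied by the two equal-covariance Gaussians.
Sinv = np.linalg.inv(Sigma)
w = Sinv @ (mu1 - mu2)
w0 = -0.5 * mu1 @ Sinv @ mu1 + 0.5 * mu2 @ Sinv @ mu2 + np.log(p1 / p2)

# Log-odds computed directly from the generative model, for an arbitrary test point.
x = np.array([0.7, -0.2])
log_odds = (np.log(p1) + multivariate_normal.logpdf(x, mu1, Sigma)
            - np.log(p2) - multivariate_normal.logpdf(x, mu2, Sigma))
print(log_odds, w @ x + w0)   # the two values agree
```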

  45. Don’t be fooled by “linear models” or classifiers with “linear boundaries” • Such classifiers can be very hard to learn • Such classifiers may have boundaries that are highly nonlinear in x (e.g., via basis functions) • All this means is that in some space the boundaries are linear
