
EM and GMM

Explore the K-means and hierarchical clustering algorithms, including their optimization objectives, initialization, and applications in data analysis. Understand missing-data problems in image processing and how maximum likelihood estimation, probabilistic inference, and the EM algorithm for Gaussian mixture models address them.


Presentation Transcript


  1. EM and GMM Jia-Bin Huang Virginia Tech ECE-5424G / CS-5824 Spring 2019

  2. Administrative • HW 3 due March 27. • Final project discussion: Link • Final exam date/time • Exam Section: 14M • https://banweb.banner.vt.edu/ssb/prod/hzskexam.P_DispExamInfo • 2:05PM to 4:05PM May 13

  3. J. Mark Sowers Distinguished Lecture • Michael Jordan • Pehong Chen Distinguished Professor, Department of Statistics and Department of Electrical Engineering and Computer Sciences • University of California, Berkeley • 3/28/19 • 7:30 PM, McBryde 100

  4. K-means algorithm • Input: • $K$ (number of clusters) • Training set $\{x^{(1)}, x^{(2)}, \ldots, x^{(m)}\}$, $x^{(i)} \in \mathbb{R}^n$ (note: drop the $x_0 = 1$ convention) Slide credit: Andrew Ng

  5. K-means algorithm • Randomly initialize $K$ cluster centroids $\mu_1, \mu_2, \ldots, \mu_K \in \mathbb{R}^n$ Repeat { for $i$ = 1 to $m$: $c^{(i)} :=$ index (from 1 to $K$) of the cluster centroid closest to $x^{(i)}$ (cluster assignment step); for $k$ = 1 to $K$: $\mu_k :=$ average (mean) of the points assigned to cluster $k$ (centroid update step) } Slide credit: Andrew Ng
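
A minimal MATLAB sketch of the two steps above (not from the slides; the names X, K, mu, and c are illustrative assumptions, with X an m-by-n data matrix):

  % Minimal K-means sketch: alternate cluster assignment and centroid update
  m = size(X, 1);
  mu = X(randperm(m, K), :);            % randomly initialize centroids to K training examples
  for iter = 1:100
      % Cluster assignment step: c(i) = index of the centroid closest to X(i,:)
      [~, c] = min(pdist2(X, mu), [], 2);
      % Centroid update step: mu(k,:) = mean of the points assigned to cluster k
      for k = 1:K
          if any(c == k)                % guard against empty clusters
              mu(k, :) = mean(X(c == k, :), 1);
          end
      end
  end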

  6. K-means optimization objective • $c^{(i)}$ = index of the cluster (1, 2, …, $K$) to which example $x^{(i)}$ is currently assigned • $\mu_k$ = cluster centroid $k$ ($\mu_k \in \mathbb{R}^n$) • $\mu_{c^{(i)}}$ = cluster centroid of the cluster to which example $x^{(i)}$ has been assigned • Optimization objective: $\min_{c^{(1)},\ldots,c^{(m)},\,\mu_1,\ldots,\mu_K} J = \frac{1}{m}\sum_{i=1}^{m} \lVert x^{(i)} - \mu_{c^{(i)}} \rVert^2$ Slide credit: Andrew Ng

  7. K-means algorithm Randomly initialize $K$ cluster centroids $\mu_1, \mu_2, \ldots, \mu_K \in \mathbb{R}^n$ Repeat { for $i$ = 1 to $m$: $c^{(i)} :=$ index (from 1 to $K$) of the cluster centroid closest to $x^{(i)}$ (cluster assignment step); for $k$ = 1 to $K$: $\mu_k :=$ average (mean) of the points assigned to cluster $k$ (centroid update step) } Slide credit: Andrew Ng

  8. Hierarchical Clustering • A hierarchy might be more natural. • Different users might care about different levels of granularity, or even different prunings. Slide credit: Maria-Florina Balcan

  9. Hierarchical Clustering • Top-down (divisive) • Partition the data into two groups (e.g., 2-means) • Recursively cluster each group • Bottom-up (agglomerative) • Start with every point in its own cluster. • Repeatedly merge the “closest” two clusters • Different definitions of “closest” give different algorithms. Slide credit: Maria-Florina Balcan

  10. Bottom-up (agglomerative) • Have a distance measure on pairs of objects: $d(x, y)$ = distance between $x$ and $y$ • Single linkage: $\operatorname{dist}(A, B) = \min_{x \in A,\, y \in B} d(x, y)$ • Complete linkage: $\operatorname{dist}(A, B) = \max_{x \in A,\, y \in B} d(x, y)$ • Average linkage: $\operatorname{dist}(A, B) = \frac{1}{|A|\,|B|}\sum_{x \in A,\, y \in B} d(x, y)$ • Ward’s method: merge the pair of clusters whose merge least increases the total within-cluster variance Slide credit: Maria-Florina Balcan
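
A minimal MATLAB sketch of agglomerative clustering with these linkage rules (assuming the Statistics and Machine Learning Toolbox; X is an illustrative m-by-n data matrix, not from the slides):

  D = pdist(X);                         % pairwise distances d(x, y), Euclidean by default
  Z = linkage(D, 'single');             % or 'complete', 'average', 'ward'
  labels = cluster(Z, 'MaxClust', 3);   % cut the hierarchy into 3 clusters
  dendrogram(Z);                        % visualize the full hierarchy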

  11. Bottom-up (agglomerative) • Single linkage: • At any time, the distance between any two points in a connected component is < r. • Complete linkage: • Keep the maximum diameter as small as possible at any level. • Ward’s method • Merge the two clusters such that the increase in the k-means cost is as small as possible. • Works well in practice. Slide credit: Maria-Florina Balcan

  12. Things to remember • Intro to unsupervised learning • K-means algorithm • Optimization objective • Initialization and the number of clusters • Hierarchical clustering

  13. Today’s Class • Examples of Missing Data Problems • Detecting outliers • Latent topic models • Segmentation • Background • Maximum Likelihood Estimation • Probabilistic Inference • Dealing with “Hidden” Variables • EM algorithm, Mixture of Gaussians • Hard EM

  14. Today’s Class • Examples of Missing Data Problems • Detecting outliers • Latent topic models • Segmentation • Background • Maximum Likelihood Estimation • Probabilistic Inference • Dealing with “Hidden” Variables • EM algorithm, Mixture of Gaussians • Hard EM

  15. Missing Data Problems: Outliers You want to train an algorithm to predict whether a photograph is attractive. You collect annotations from Mechanical Turk. Some annotators try to give accurate ratings, but others answer randomly. Challenge: Determine which people to trust and the average rating by accurate annotators. Annotator Ratings 10 8 9 2 8 Photo: Jam343 (Flickr)

  16. Missing Data Problems: Object Discovery You have a collection of images and have extracted regions from them. Each region is represented by a histogram of “visual words”. Challenge: Discover frequently occurring object categories, without pre-trained appearance models. http://www.robots.ox.ac.uk/~vgg/publications/papers/russell06.pdf

  17. Missing Data Problems: Segmentation You are given an image and want to label each pixel as foreground or background. Challenge: Segment the image into figure and ground without knowing what the foreground looks like in advance. Foreground Background

  18. Missing Data Problems: Segmentation Challenge: Segment the image into figure and ground without knowing what the foreground looks like in advance. Three steps: • If we had labels, how could we model the appearance of foreground and background? • Maximum Likelihood Estimation • Once we have modeled the fg/bg appearance, how do we compute the likelihood that a pixel is foreground? • Probabilistic Inference • How can we get both labels and appearance models at once? • Expectation-Maximization (EM) Algorithm

  19. Maximum Likelihood Estimation • If we had labels, how could we model the appearance of foreground and background? Background Foreground

  20. Maximum Likelihood Estimation Given data $x = \{x_1, \ldots, x_N\}$ and model parameters $\theta$, choose $\hat{\theta} = \operatorname{argmax}_{\theta} p(x \mid \theta) = \operatorname{argmax}_{\theta} \prod_{n=1}^{N} p(x_n \mid \theta)$ (assuming independent samples).

  21. Maximum Likelihood Estimation Gaussian distribution: $p(x_n \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x_n-\mu)^2}{2\sigma^2}\right)$

  22. Maximum Likelihood Estimation Gaussian distribution, log-likelihood: $\log p(x \mid \mu, \sigma^2) = -\frac{N}{2}\log(2\pi\sigma^2) - \sum_{n=1}^{N}\frac{(x_n-\mu)^2}{2\sigma^2}$

  23. Maximum Likelihood Estimation Gaussian distribution, maximum likelihood estimates: $\hat{\mu} = \frac{1}{N}\sum_{n=1}^{N} x_n$, $\hat{\sigma}^2 = \frac{1}{N}\sum_{n=1}^{N}(x_n - \hat{\mu})^2$
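
These closed-form estimates follow from setting the derivative of the log-likelihood to zero; a short worked step for the mean:

$\frac{\partial}{\partial \mu} \log p(x \mid \mu, \sigma^2) = \sum_{n=1}^{N} \frac{x_n - \mu}{\sigma^2} = 0 \;\;\Rightarrow\;\; \hat{\mu} = \frac{1}{N}\sum_{n=1}^{N} x_n$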

  24. Example: MLE Parameters used to generate the data: fg: mu = 0.6, sigma = 0.1; bg: mu = 0.4, sigma = 0.1
  >> mu_fg = mean(im(labels))
  mu_fg = 0.6012
  >> sigma_fg = sqrt(mean((im(labels)-mu_fg).^2))
  sigma_fg = 0.1007
  >> mu_bg = mean(im(~labels))
  mu_bg = 0.4007
  >> sigma_bg = sqrt(mean((im(~labels)-mu_bg).^2))
  sigma_bg = 0.1007
  >> pfg = mean(labels(:));
  (Figures on the slide: the image im and its label mask labels)

  25. Probabilistic Inference 2. Once we have modeled the fg/bg appearance, how do we compute the likelihood that a pixel is foreground? Background Foreground

  26. Probabilistic Inference Compute the likelihood that a particular model generated a sample: $p(z_n = m \mid x_n, \theta)$, where $z_n$ is the component or label of sample $x_n$.

  27. Probabilistic Inference Compute the likelihood that a particular model generated a sample. By the definition of conditional probability: $p(z_n = m \mid x_n, \theta) = \frac{p(z_n = m, x_n \mid \theta)}{p(x_n \mid \theta)}$

  28. Probabilistic Inference Compute the likelihood that a particular model generated a sample. By marginalization over the components: $p(z_n = m \mid x_n, \theta) = \frac{p(z_n = m, x_n \mid \theta)}{\sum_{k} p(z_n = k, x_n \mid \theta)}$

  29. Probabilistic Inference Compute the likelihood that a particular model generated a sample. Factoring the joint distribution: $p(z_n = m \mid x_n, \theta) = \frac{p(x_n \mid z_n = m, \theta)\, p(z_n = m \mid \theta)}{\sum_{k} p(x_n \mid z_n = k, \theta)\, p(z_n = k \mid \theta)}$
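
As a quick numeric check (using the foreground/background parameters from the earlier MLE example, a hypothetical pixel value $x = 0.55$, and $p(\text{fg}) = 0.5$):

$p(\text{fg} \mid x) = \frac{\mathcal{N}(0.55;\, 0.6,\, 0.1^2)\cdot 0.5}{\mathcal{N}(0.55;\, 0.6,\, 0.1^2)\cdot 0.5 + \mathcal{N}(0.55;\, 0.4,\, 0.1^2)\cdot 0.5} \approx \frac{3.52}{3.52 + 1.30} \approx 0.73$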

  30. Example: Inference
  >> pfg = 0.5;
  >> px_fg = normpdf(im, mu_fg, sigma_fg);
  >> px_bg = normpdf(im, mu_bg, sigma_bg);
  >> pfg_x = px_fg*pfg ./ (px_fg*pfg + px_bg*(1-pfg));
  Learned parameters: fg: mu = 0.6, sigma = 0.1; bg: mu = 0.4, sigma = 0.1
  (Figures on the slide: the image im and the posterior map p(fg | im))

  31. Dealing with Hidden Variables 3. How can we get both labels and appearance parameters at once? Background Foreground

  32. Mixture of Gaussians $p(x \mid \mu, \sigma^2, \pi) = \sum_{m} \pi_m\, \mathcal{N}(x; \mu_m, \sigma_m^2)$, where $\mathcal{N}(x; \mu_m, \sigma_m^2)$ is the $m$-th mixture component, $(\mu_m, \sigma_m^2)$ are the component model parameters, and $\pi_m$ is the component prior.

  33. Mixture of Gaussians With enough components, a mixture of Gaussians can approximate any probability density function • Widely used as a general-purpose pdf estimator
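
A minimal MATLAB sketch of evaluating a 1-D mixture density (the component values here are illustrative assumptions, not from the slides):

  pi_m    = [0.3 0.5 0.2];              % component priors, sum to 1
  mu_m    = [0.2 0.5 0.8];              % component means
  sigma_m = [0.05 0.10 0.05];           % component standard deviations
  x  = linspace(0, 1, 200);
  px = zeros(size(x));
  for m = 1:numel(pi_m)
      % p(x) = sum_m pi_m * N(x; mu_m, sigma_m^2)
      px = px + pi_m(m) * normpdf(x, mu_m(m), sigma_m(m));
  end
  plot(x, px);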

  34. Segmentation with Mixture of Gaussians Pixels come from one of several Gaussian components • We don’t know which pixels come from which components • We don’t know the parameters for the components Problem: • Estimate the parameters of the Gaussian Mixture Model. What would you do?

  35. Simple solution • Initialize parameters • Compute the probability of each hidden variable given the current parameters • Compute new parameters for each model, weighted by likelihood of hidden variables • Repeat 2-3 until convergence

  36. Mixture of Gaussians: Simple Solution • Initialize parameters • Compute likelihood of hidden variables for current parameters • Estimate new parameters for each model, weighted by likelihood

  37. Expectation Maximization (EM) Algorithm Goal: $\hat{\theta} = \operatorname{argmax}_{\theta} \sum_{n} \log \sum_{z} p(x_n, z \mid \theta)$. The log of a sum is intractable, so we maximize a lower bound instead. Jensen’s inequality: for a concave function $f$, $f(E[x]) \ge E[f(x)]$. See here for a proof: www.stanford.edu/class/cs229/notes/cs229-notes8.ps
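
Concretely, for any distribution $q(z)$ over the hidden variable, Jensen's inequality gives the lower bound that EM maximizes (with equality when $q(z) = p(z \mid x_n, \theta)$, which is what the E-step computes):

$\log \sum_{z} p(x_n, z \mid \theta) = \log \sum_{z} q(z)\, \frac{p(x_n, z \mid \theta)}{q(z)} \;\ge\; \sum_{z} q(z) \log \frac{p(x_n, z \mid \theta)}{q(z)}$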

  38. Expectation Maximization (EM) Algorithm Goal: $\hat{\theta} = \operatorname{argmax}_{\theta} \sum_{n} \log \sum_{z} p(x_n, z \mid \theta)$ • E-step: compute $E_{z \mid x, \theta^{(t)}}\!\left[\log p(x, z \mid \theta)\right]$ • M-step: solve $\theta^{(t+1)} = \operatorname{argmax}_{\theta} E_{z \mid x, \theta^{(t)}}\!\left[\log p(x, z \mid \theta)\right]$

  39. Expectation Maximization (EM) Algorithm Goal: $\hat{\theta} = \operatorname{argmax}_{\theta} \sum_{n} \log \sum_{z} p(x_n, z \mid \theta)$, which involves the log of an expectation of $p(x \mid z)$ • E-step: compute $E_{z \mid x, \theta^{(t)}}\!\left[\log p(x, z \mid \theta)\right]$ • M-step: solve $\theta^{(t+1)} = \operatorname{argmax}_{\theta} E_{z \mid x, \theta^{(t)}}\!\left[\log p(x, z \mid \theta)\right]$, which instead involve the expectation of the log of $p(x \mid z)$ and are tractable.

  40. EM for Mixture of Gaussians - derivation • E-step: compute the soft assignments $q(z_n = m) = p(z_n = m \mid x_n, \theta^{(t)})$ • M-step: maximize $\sum_{n}\sum_{m} q(z_n = m) \log p(x_n, z_n = m \mid \theta)$ with respect to $\theta = \{\pi_m, \mu_m, \sigma_m^2\}$

  41. EM for Mixture of Gaussians • E-step: $\gamma_{nm} = p(z_n = m \mid x_n, \theta^{(t)}) = \frac{\pi_m\, \mathcal{N}(x_n; \mu_m, \sigma_m^2)}{\sum_{k} \pi_k\, \mathcal{N}(x_n; \mu_k, \sigma_k^2)}$ • M-step: $\mu_m = \frac{\sum_n \gamma_{nm}\, x_n}{\sum_n \gamma_{nm}}$, $\sigma_m^2 = \frac{\sum_n \gamma_{nm}\,(x_n - \mu_m)^2}{\sum_n \gamma_{nm}}$, $\pi_m = \frac{1}{N}\sum_n \gamma_{nm}$

  42. EM algorithm - derivation http://lasa.epfl.ch/teaching/lectures/ML_Phd/Notes/GP-GMM.pdf

  43. EM algorithm – E-Step

  44. EM algorithm – E-Step

  45. EM algorithm – M-Step

  46. EM algorithm – M-Step Take the derivative with respect to the component mean $\mu_m$ and set it to zero.

  47. EM algorithm – M-Step Take the derivative with respect to the remaining parameters ($\sigma_m^2$; the priors $\pi_m$ require a Lagrange multiplier for the constraint $\sum_m \pi_m = 1$).

  48. EM Algorithm for GMM
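
A compact MATLAB sketch of the full loop summarized on this slide, for 1-D data with scalar variances (not the instructor's code; x, M, and the other variable names are illustrative assumptions):

  % x: N-by-1 data vector, M: number of Gaussian components
  N = numel(x);
  mu    = x(randperm(N, M));            % initialize means at random data points
  sigma = std(x) * ones(M, 1);          % initialize standard deviations
  prior = ones(M, 1) / M;               % uniform component priors
  for iter = 1:100
      % E-step: responsibilities gamma(n, m) = p(z_n = m | x_n, theta)
      gamma = zeros(N, M);
      for m = 1:M
          gamma(:, m) = prior(m) * normpdf(x, mu(m), sigma(m));
      end
      gamma = gamma ./ sum(gamma, 2);   % normalize over components (implicit expansion, R2016b+)
      % M-step: re-estimate parameters, weighted by the responsibilities
      Nm = sum(gamma, 1)';              % effective counts per component
      for m = 1:M
          mu(m)    = sum(gamma(:, m) .* x) / Nm(m);
          sigma(m) = sqrt(sum(gamma(:, m) .* (x - mu(m)).^2) / Nm(m));
      end
      prior = Nm / N;
  end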

  49. EM Algorithm • Maximizes a lower bound on the data likelihood at each iteration • Each step increases the data likelihood • Converges to a local maximum • Common tricks for the derivation • Find terms that sum or integrate to 1 • Lagrange multiplier to deal with constraints
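
For example, the component priors must satisfy $\sum_m \pi_m = 1$, so the M-step for $\pi_m$ adds a Lagrange multiplier $\lambda$:

$\frac{\partial}{\partial \pi_m}\left[\sum_{n}\sum_{k} \gamma_{nk} \log \pi_k + \lambda\Big(1 - \sum_{k}\pi_k\Big)\right] = \frac{\sum_n \gamma_{nm}}{\pi_m} - \lambda = 0 \;\;\Rightarrow\;\; \pi_m = \frac{1}{N}\sum_{n} \gamma_{nm}$

where $\lambda = N$ follows from summing the stationarity condition over $m$ and applying the constraint.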

  50. Convergence of EM Algorithm
