
Lecture 4




1. Lecture 4
Supervised Learning with observed random variables: Linear Regression, Logistic Regression, Naive Bayes

2. Learning
Supervised vs. unsupervised; continuous vs. discrete RVs (regression vs. classification); generative vs. discriminative; with vs. without hidden variables.
• This lecture is about supervised learning without hidden variables. We will look at both the generative and the discriminative approach.
• Sometimes the generative approach can inspire parameterizations for the discriminative approach.
• Plate notation.

3. Linear Regression
• X → Y with Y continuous and X arbitrary.
• Discriminative approach: model p(Y|X) directly.
• Probability model: Gaussian with mean E[Y|X] = f(X).
• Given data {X_n, Y_n}, what is the optimal setting of the parameters in the Maximum Likelihood framework? (See the sketch after this slide.)
• demo_LinReg
• Geometric interpretation.
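To make the maximum-likelihood question concrete, here is a minimal sketch, assuming the Gaussian model p(Y|x) = N(wᵀx + b, σ²) with linear f(x) = wᵀx + b; under that assumption, maximizing the likelihood reduces to ordinary least squares, and the ML noise variance is the mean squared residual. This is not the lecture's demo_LinReg; the name fit_linreg and the synthetic data are illustrative.

```python
import numpy as np

def fit_linreg(X, Y):
    """ML fit of the linear-Gaussian model p(Y|x) = N(w^T x + b, sigma^2).

    Illustrative sketch; the name and interface are assumptions,
    not the lecture's demo_LinReg.
    """
    # Append a column of ones so the bias b is absorbed into w.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    # Under Gaussian noise, the ML weights are the least-squares solution.
    w, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    # The ML noise variance is the mean squared residual.
    sigma2 = np.mean((Y - Xb @ w) ** 2)
    return w, sigma2

# Usage on synthetic data: should recover w ≈ (2, -1) and b ≈ 0.5.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
Y = X @ np.array([2.0, -1.0]) + 0.5 + rng.normal(scale=0.1, size=100)
w, sigma2 = fit_linreg(X, Y)
```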

4. Classification (discriminative)
• X → Y with Y discrete in {0, 1, 2, ..., D} and X arbitrary.
• Discriminative approach for binary Y: logistic regression.
• Fit (regress) a logistic function to the data, where E[Y|X] = logistic(x); since Y is binary, this means p(Y=1|X) = logistic(x).
• Calculation of the ML parameters (see the sketch after this slide).
• demo_LogReg
• Softmax generalization for general discrete Y.
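For the ML parameter calculation there is no closed form, but the log-likelihood Σ_n [Y_n log p_n + (1 − Y_n) log(1 − p_n)], with p_n = logistic(wᵀx_n + b), is concave in w, so plain gradient ascent suffices. A minimal sketch under that objective; the name fit_logreg, the learning rate, and the step count are illustrative choices, not the lecture's demo_LogReg.

```python
import numpy as np

def logistic(z):
    """The logistic (sigmoid) function."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, Y, lr=0.5, n_steps=2000):
    """ML fit of p(Y=1|x) = logistic(w^T x + b) by gradient ascent.

    Illustrative sketch; the hyperparameters are arbitrary assumptions.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb the bias b into w
    w = np.zeros(Xb.shape[1])
    for _ in range(n_steps):
        p = logistic(Xb @ w)            # current p(Y=1 | x_n)
        grad = Xb.T @ (Y - p) / len(Y)  # gradient of the mean log-likelihood
        w += lr * grad                  # ascend; no closed-form ML solution
    return w
```

The softmax generalization replaces the single logit with one score per class, p(Y=k|x) ∝ exp(w_kᵀx); the binary case above is recovered with two classes.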

5. Classification (generative)
• Generative approach: model P(X,Y) = P(X|Y) P(Y).
• Naive Bayes assumption: x_i independent of x_j given Y.
• Case 1: X continuous: use Gaussians for P(x_i|Y).
• Case 2: X discrete: use a multinomial distribution.
• Classification: choose Y maximizing log P(X|Y) + log P(Y).
• The ML parameter settings have a very natural interpretation in terms of frequencies, class means, etc. (see the sketch after this slide).
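For case 1 (continuous X with Gaussians for P(x_i|Y)), that natural interpretation is concrete: the ML class prior P(Y=c) is the fraction of training points in class c, and each Gaussian P(x_i|Y=c) gets the per-class sample mean and variance of feature x_i. A minimal sketch under those assumptions; the function names are illustrative, not from the lecture.

```python
import numpy as np

def fit_gnb(X, Y):
    """ML parameters of Gaussian naive Bayes: class frequencies and
    per-class feature means/variances. Illustrative names."""
    classes = np.unique(Y)
    priors = np.array([np.mean(Y == c) for c in classes])        # P(Y=c)
    means = np.array([X[Y == c].mean(axis=0) for c in classes])  # E[x_i|Y=c]
    # Small floor on the variances guards against zero-variance features.
    variances = np.array([X[Y == c].var(axis=0) for c in classes]) + 1e-9
    return classes, priors, means, variances

def predict_gnb(x, classes, priors, means, variances):
    """Classify by max_Y log P(x|Y) + log P(Y), where log P(x|Y) is a sum
    of per-feature Gaussian log-densities (the naive Bayes factorization)."""
    log_post = (np.log(priors)
                - 0.5 * np.sum(np.log(2 * np.pi * variances)
                               + (x - means) ** 2 / variances, axis=1))
    return classes[np.argmax(log_post)]
```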
