
PATTERN RECOGNITION AND MACHINE LEARNING



Presentation Transcript


  1. PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 3: LINEAR MODELS FOR REGRESSION

  2. Outline Discuss tutorial. Regression Examples. The Gaussian distribution. Linear Regression. Maximum Likelihood estimation.

  3. Polynomial Curve Fitting

  4. Academia Example Predict: the final percentage mark for a student. Features: 6 assignment grades, midterm exam, final exam, project, age. Questions we could ask: I forgot the weights of the components. Can you recover them from a spreadsheet of the final grades? I lost the final exam grades. How well can I still predict the final mark? How important is each component, actually? Could I predict someone’s final mark well given only their assignments? Given only their exams?
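
One way to recover forgotten component weights is ordinary least squares on the spreadsheet rows. A minimal NumPy sketch; the component list, grades, and noise level below are made up for illustration.

```python
# Hypothetical spreadsheet: rows are students, columns are course components.
import numpy as np

rng = np.random.default_rng(0)

# Made-up "true" weights for 6 assignments, midterm, final exam, project.
true_w = np.array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.20, 0.35, 0.15])

n_students = 40
grades = rng.uniform(40, 100, size=(n_students, true_w.size))
final_mark = grades @ true_w + rng.normal(0, 0.5, n_students)  # small noise, e.g. rounding

# Ordinary least squares recovers the forgotten weights from the spreadsheet.
w_hat, *_ = np.linalg.lstsq(grades, final_mark, rcond=None)
print(np.round(w_hat, 3))  # close to true_w when the noise is small
```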

  5. The Gaussian Distribution

  6. Central Limit Theorem The distribution of the sum of N i.i.d. random variables becomes increasingly Gaussian as N grows. Example: N uniform [0,1] random variables.
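
A quick simulation sketch of this: means of N uniform[0, 1] draws concentrate around 1/2 with standard deviation 1/sqrt(12N), as the CLT's Gaussian limit predicts. The sample count of 100,000 is an arbitrary choice.

```python
# Means of N uniform[0, 1] draws: the empirical spread matches the CLT's 1/sqrt(12N).
import numpy as np

rng = np.random.default_rng(1)
for N in (1, 2, 10):
    means = rng.uniform(0, 1, size=(100_000, N)).mean(axis=1)
    print(N, round(means.std(), 4), round((1 / (12 * N)) ** 0.5, 4))
```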

  7. Reading exponential probability formulas. In an infinite space we cannot give every outcome a fixed amount of probability: the sum Σ_x p(x) would grow to infinity. Instead, use an exponentially decaying form, e.g. p(n) = (1/2)^n. Suppose there is a relevant feature f(x) and I want to express that “the greater f(x) is, the less probable x is”. Use p(x) = exp(-f(x)).

  8. Example: exponential form and sample size. Fair coin: the larger the sample size n, the less likely any particular sequence of outcomes is: p(n) = 2^(-n). [Plot: ln p(n) against sample size n.]

  9. Exponential Form: Gaussian mean. The further x is from the mean, the less likely it is. [Plot: ln p(x) against (x - μ)².]

  10. Smaller variance decreases probability. The smaller the variance σ², the less likely an x away from the mean is. Or: the greater the precision β = 1/σ², the less likely such an x is. [Plot: ln p(x) against the precision 1/σ² = β.]

  11. Minimal energy = maximum probability. The greater the energy E(x) of the joint state, the less probable the state is: p(x) ∝ exp(-E(x)). [Plot: ln p(x) against E(x).]
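
Tying slides 8–11 together: for a Gaussian, ln p(x) falls off linearly in the squared distance from the mean, with slope -β/2. A small numerical sketch (the values of μ and β are arbitrary):

```python
# For a Gaussian, ln p(x) = 0.5*ln(beta/(2*pi)) - 0.5*beta*(x - mu)^2:
# linear in the squared distance from the mean, with slope -beta/2.
import numpy as np

mu, beta = 0.0, 4.0                      # precision beta = 1/sigma^2 (arbitrary values)
x = np.linspace(-3, 3, 7)
log_p = 0.5 * np.log(beta / (2 * np.pi)) - 0.5 * beta * (x - mu) ** 2
print(np.round(log_p, 2))                # decreases as x moves away from mu
```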

  12. Linear Basis Function Models (1) Generally y(x, w) = Σ_{j=0}^{M-1} w_j φ_j(x), where the φ_j(x) are known as basis functions. Typically, φ_0(x) = 1, so that w_0 acts as a bias. In the simplest case, we use linear basis functions: φ_d(x) = x_d.

  13. Linear Basis Function Models (2) Polynomial basis functions: φ_j(x) = x^j. These are global; a small change in x affects all basis functions.

  14. Linear Basis Function Models (3) Gaussian basis functions: φ_j(x) = exp(-(x - μ_j)² / (2s²)). These are local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (width). Related to kernel methods.

  15. Linear Basis Function Models (4) Sigmoidal basis functions: φ_j(x) = σ((x - μ_j)/s), where σ(a) = 1/(1 + exp(-a)). These are also local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (slope).
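
A minimal NumPy sketch of the design matrices behind slides 12–15; the function name, the choice of centres μ_j, and the default width s = 0.2 are illustrative, not prescribed by the slides.

```python
# Design matrix Phi with Phi[n, j] = phi_j(x_n) for the three choices of basis.
import numpy as np

def design_matrix(x, basis="gaussian", M=9, s=0.2):
    """phi_0(x) = 1 acts as the bias column for the local bases."""
    x = np.asarray(x, dtype=float)
    mu = np.linspace(x.min(), x.max(), M - 1)   # centres for the local bases
    if basis == "polynomial":
        cols = [x ** j for j in range(M)]       # global: phi_j(x) = x^j
    elif basis == "gaussian":
        cols = [np.ones_like(x)] + [np.exp(-(x - m) ** 2 / (2 * s ** 2)) for m in mu]
    elif basis == "sigmoidal":
        cols = [np.ones_like(x)] + [1 / (1 + np.exp(-(x - m) / s)) for m in mu]
    else:
        raise ValueError(basis)
    return np.column_stack(cols)

x = np.linspace(0, 1, 25)
print(design_matrix(x, "gaussian").shape)       # (25, 9)
```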

  16. Curve Fitting With Noise

  17. Maximum Likelihood and Least Squares (1) Assume observations from a deterministic function with added Gaussian noise: t = y(x, w) + ε with p(ε | β) = N(ε | 0, β^{-1}), which is the same as saying p(t | x, w, β) = N(t | y(x, w), β^{-1}). Given observed inputs X = {x_1, …, x_N} and targets t = (t_1, …, t_N)^T, we obtain the likelihood function p(t | X, w, β) = Π_{n=1}^{N} N(t_n | w^T φ(x_n), β^{-1}).

  18. Maximum Likelihood and Least Squares (2) Taking the logarithm, we get ln p(t | w, β) = (N/2) ln β - (N/2) ln(2π) - β E_D(w), where E_D(w) = (1/2) Σ_{n=1}^{N} (t_n - w^T φ(x_n))² is the sum-of-squares error.

  19. Maximum Likelihood and Least Squares (3) Computing the gradient and setting it to zero yields ∇ ln p(t | w, β) = β Σ_{n=1}^{N} (t_n - w^T φ(x_n)) φ(x_n)^T = 0. Solving for w, we get w_ML = (Φ^T Φ)^{-1} Φ^T t = Φ† t, where Φ is the N × M design matrix with elements Φ_{nj} = φ_j(x_n) and Φ† = (Φ^T Φ)^{-1} Φ^T is the Moore-Penrose pseudo-inverse.
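
A minimal NumPy sketch of this closed-form solution on made-up sinusoidal data; the noise level and polynomial order are arbitrary choices.

```python
# w_ML = (Phi^T Phi)^{-1} Phi^T t via the Moore-Penrose pseudo-inverse.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)   # noisy sinusoidal targets

M = 4                                                     # cubic polynomial basis
Phi = np.column_stack([x ** j for j in range(M)])         # design matrix

w_ml = np.linalg.pinv(Phi) @ t
# Equivalent: np.linalg.solve(Phi.T @ Phi, Phi.T @ t)
print(np.round(w_ml, 3))
```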

  20. Linear Algebra/Geometry of Least Squares Consider t = (t_1, …, t_N)^T as a vector in an N-dimensional space, and let S be the M-dimensional subspace spanned by the columns φ_j of the design matrix Φ. w_ML minimizes the distance between t and its orthogonal projection onto S, i.e. y = Φ w_ML.

  21. Maximum Likelihood and Least Squares (4) Maximizing with respect to the bias, w_0, alone, we see that w_0 = t̄ - Σ_{j=1}^{M-1} w_j φ̄_j, where t̄ and φ̄_j are the training-set averages of the targets and of the j-th basis function: the bias compensates for the difference between the two. We can also maximize with respect to β, giving 1/β_ML = (1/N) Σ_{n=1}^{N} (t_n - w_ML^T φ(x_n))².
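
A small sketch of the noise-precision estimate; the design matrix and targets below are hypothetical numbers, not data from the slides.

```python
# 1/beta_ML is the mean squared residual of the maximum-likelihood fit.
import numpy as np

def beta_ml(Phi, t, w):
    return 1.0 / np.mean((t - Phi @ w) ** 2)

# Hypothetical straight-line fit to three points.
Phi = np.array([[1.0, 0.0], [1.0, 0.5], [1.0, 1.0]])
t = np.array([0.1, 0.6, 0.9])
w = np.linalg.pinv(Phi) @ t
print(beta_ml(Phi, t, w))
```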

  22. 0th Order Polynomial

  23. 3rd Order Polynomial

  24. 9th Order Polynomial

  25. Over-fitting Root-Mean-Square (RMS) Error: E_RMS = √(2 E(w*) / N), which puts training and test sets of different sizes on the same scale.
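
A one-function sketch of the RMS error; the sample targets and predictions are made up.

```python
# E_RMS = sqrt(2 * E(w*) / N); since E(w*) = 0.5 * sum of squared errors,
# this is just the square root of the mean squared error.
import numpy as np

def rms_error(t, y):
    return np.sqrt(np.mean((np.asarray(y) - np.asarray(t)) ** 2))

print(rms_error([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```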

  26. Polynomial Coefficients

  27. Data Set Size: 9th Order Polynomial

  28. 1st Order Polynomial

  29. Data Set Size: 9th Order Polynomial

  30. Quadratic Regularization Penalize large coefficient values by minimizing Ẽ(w) = (1/2) Σ_{n=1}^{N} (y(x_n, w) - t_n)² + (λ/2) ‖w‖².

  31. Regularization:

  32. Regularization:

  33. Regularization: training vs. test error as the regularization coefficient λ varies.

  34. Regularized Least Squares (1) Consider the error function E_D(w) + λ E_W(w): a data term plus a regularization term, where λ is called the regularization coefficient. With the sum-of-squares error function and a quadratic regularizer, we get (1/2) Σ_{n=1}^{N} (t_n - w^T φ(x_n))² + (λ/2) w^T w, which is minimized by w = (λI + Φ^T Φ)^{-1} Φ^T t.
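
A minimal NumPy sketch of this closed-form regularized solution; the data, polynomial order, and value of λ are arbitrary illustrations.

```python
# Quadratic-regularized solution: w = (lambda*I + Phi^T Phi)^{-1} Phi^T t.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

M, lam = 10, np.exp(-5)                              # 9th-order polynomial, lambda = e^-5
Phi = np.column_stack([x ** j for j in range(M)])

w_reg = np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)
print(np.round(w_reg, 2))   # coefficients stay moderate; with lam = 0 they blow up
```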

  35. Regularized Least Squares (2) With a more general regularizer, we have (1/2) Σ_{n=1}^{N} (t_n - w^T φ(x_n))² + (λ/2) Σ_{j=1}^{M} |w_j|^q: q = 1 gives the lasso, q = 2 the quadratic regularizer.

  36. Regularized Least Squares (3) Lasso tends to generate sparser solutions than a quadratic regularizer.
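
A sketch of that sparsity effect, here using scikit-learn's Lasso and Ridge estimators as stand-ins for the q = 1 and q = 2 regularizers; scikit-learn and the synthetic data are assumptions of this sketch, not part of the slides.

```python
# q = 1 (lasso) vs. q = 2 (quadratic) on data whose true weights are mostly zero.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 10))
w_true = np.array([1.5, 0, 0, -2.0, 0, 0, 0, 0.7, 0, 0])
y = X @ w_true + rng.normal(0, 0.1, 50)

print(Lasso(alpha=0.1).fit(X, y).coef_)   # many coefficients driven exactly to zero
print(Ridge(alpha=0.1).fit(X, y).coef_)   # shrunk, but nonzero everywhere
```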

  37. Cross-Validation for Regularization

  38. Bayesian Linear Regression (1) Define a conjugate shrinkage prior over the weight vector w: p(w|α) = N(w | 0, α^{-1} I). Combining this with the likelihood function and using results for marginal and conditional Gaussian distributions gives a Gaussian posterior distribution. Log of the posterior = sum of squared errors + quadratic regularization.
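
The resulting posterior is Gaussian, p(w | t) = N(w | m_N, S_N) with S_N^{-1} = αI + βΦ^TΦ and m_N = β S_N Φ^T t. A minimal NumPy sketch of that computation; the straight-line data and the values of α and β are made up.

```python
# Posterior over w: S_N^{-1} = alpha*I + beta*Phi^T Phi, m_N = beta * S_N Phi^T t.
import numpy as np

def posterior(Phi, t, alpha, beta):
    S_N_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    S_N = np.linalg.inv(S_N_inv)
    m_N = beta * S_N @ Phi.T @ t
    return m_N, S_N

# Hypothetical straight-line data, y(x, w) = w0 + w1*x.
x = np.array([0.0, 0.5, 1.0])
Phi = np.column_stack([np.ones_like(x), x])
t = np.array([0.1, 0.6, 1.1])
m_N, S_N = posterior(Phi, t, alpha=2.0, beta=25.0)
print(np.round(m_N, 3))
```

Note that m_N coincides with the regularized least-squares solution of slide 34 when λ = α/β.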

  39. Bayesian Linear Regression (3) 0 data points observed. [Panels: prior over w; samples drawn from it in data space.]

  40. Bayesian Linear Regression (4) 1 data point observed. [Panels: likelihood; posterior over w; samples in data space.]

  41. Bayesian Linear Regression (5) 2 data points observed. [Panels: likelihood; posterior; data space.]

  42. Bayesian Linear Regression (6) 20 data points observed. [Panels: likelihood; posterior; data space.]

  43. Predictive Distribution (1) Predict t for new values of x by integrating over w. Can be solved analytically.
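
The predictive distribution is p(t | x, t, α, β) = N(t | m_N^T φ(x), σ_N²(x)) with σ_N²(x) = 1/β + φ(x)^T S_N φ(x). A small sketch; the posterior mean and covariance below are hypothetical numbers standing in for the output of the previous sketch.

```python
# Predictive mean m_N^T phi(x) and variance 1/beta + phi(x)^T S_N phi(x).
import numpy as np

def predictive(phi_x, m_N, S_N, beta):
    mean = m_N @ phi_x
    var = 1.0 / beta + phi_x @ S_N @ phi_x
    return mean, var

# Hypothetical posterior (e.g. from the previous sketch) for a straight-line model.
m_N = np.array([0.1, 1.0])
S_N = np.array([[0.04, -0.03], [-0.03, 0.06]])
print(predictive(np.array([1.0, 0.75]), m_N, S_N, beta=25.0))
```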

  44. Predictive Distribution (2) Example: Sinusoidal data, 9 Gaussian basis functions, 1 data point

  45. Predictive Distribution (3) Example: Sinusoidal data, 9 Gaussian basis functions, 2 data points

  46. Predictive Distribution (4) Example: Sinusoidal data, 9 Gaussian basis functions, 4 data points

  47. Predictive Distribution (5) Example: Sinusoidal data, 9 Gaussian basis functions, 25 data points

  48. Limitations of Fixed Basis Functions Using M basis functions along each dimension of a D-dimensional input space requires M^D basis functions in total: the curse of dimensionality (e.g. M = 10 and D = 10 already gives 10^10 basis functions). In later chapters, we shall see how we can get away with fewer basis functions by choosing them using the training data.
