
Linear Basis Function Models: Understanding & Application

A summary of linear regression models, basis functions, maximum likelihood, and least squares methods in regression.


Presentation Transcript


  1. Ch 3. Linear Models for Regression (1/2). Pattern Recognition and Machine Learning, C. M. Bishop, 2006. Summarized by Yung-Kyun Noh, Biointelligence Laboratory, Seoul National University, http://bi.snu.ac.kr/

  2. Contents • 3.1 Linear Basis Function Models • 3.1.1 Maximum likelihood and least squares • 3.1.2 Geometry of least squares • 3.1.3 Sequential learning • 3.1.4 Regularized least squares • 3.1.5 Multiple outputs • 3.2 The Bias-Variance Decomposition • 3.3 Bayesian Linear Regression • 3.3.1 Parameter distribution • 3.3.2 Predictive distribution • 3.3.3 Equivalent kernel

  3. Linear Basis Function Models • Linear regression: $y(\mathbf{x}, \mathbf{w}) = w_0 + w_1 x_1 + \dots + w_D x_D$, a linear model that is linear in the parameters. • Using basis functions, $y(\mathbf{x}, \mathbf{w}) = \sum_{j=0}^{M-1} w_j \phi_j(\mathbf{x}) = \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x})$, the model allows a nonlinear function of the input vector x. • Linearity in the parameters simplifies the analysis of this class of models, but also leads to some significant limitations. • M: total number of parameters. • $\phi_j(\mathbf{x})$: basis functions ($\phi_0(\mathbf{x}) = 1$: dummy basis function, so that $w_0$ acts as a bias).

  4. Basis Functions • Polynomial functions: $\phi_j(x) = x^j$. These are global functions of the input variable, so a change in one region of input space affects all other regions; spline functions address this by fitting polynomials piecewise over regions of input space. • Gaussian basis functions: $\phi_j(x) = \exp\left(-\frac{(x - \mu_j)^2}{2s^2}\right)$. • Sigmoidal basis functions: $\phi_j(x) = \sigma\left(\frac{x - \mu_j}{s}\right)$, where $\sigma(a) = \frac{1}{1 + \exp(-a)}$ is the logistic sigmoid function. • Fourier basis: each basis function represents a specific frequency; wavelets are localized in both space and frequency. See the sketch below.
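
A minimal NumPy sketch of these basis functions; the function names, example inputs, centers, and scale s are illustrative choices, not values from the slides:

```python
import numpy as np

def polynomial_basis(x, degree):
    """Polynomial basis phi_j(x) = x**j, j = 0..degree (phi_0(x) = 1 is the dummy basis)."""
    return np.vander(x, degree + 1, increasing=True)

def gaussian_basis(x, centers, s):
    """Gaussian basis phi_j(x) = exp(-(x - mu_j)^2 / (2 s^2))."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * s ** 2))

def sigmoid_basis(x, centers, s):
    """Sigmoidal basis phi_j(x) = sigma((x - mu_j) / s) with the logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-(x[:, None] - centers[None, :]) / s))

# Example: 5 Gaussian basis functions evaluated at 10 inputs on [0, 1]
x = np.linspace(0.0, 1.0, 10)
Phi = gaussian_basis(x, centers=np.linspace(0.0, 1.0, 5), s=0.2)   # shape (10, 5)
```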

  5. Maximum Likelihood and Least Squares (1/2) • Assumption: Gaussian noise model, $t = y(\mathbf{x}, \mathbf{w}) + \epsilon$, where $\epsilon$ is a zero-mean Gaussian random variable with precision (inverse variance) $\beta$. • Result: $p(t \mid \mathbf{x}, \mathbf{w}, \beta) = \mathcal{N}(t \mid y(\mathbf{x}, \mathbf{w}), \beta^{-1})$. • Conditional mean: $\mathbb{E}[t \mid \mathbf{x}] = y(\mathbf{x}, \mathbf{w})$ (the conditional distribution is unimodal). • For a dataset of inputs $\{\mathbf{x}_1, \dots, \mathbf{x}_N\}$ with targets $\mathbf{t} = (t_1, \dots, t_N)^T$, the likelihood is $p(\mathbf{t} \mid \mathbf{w}, \beta) = \prod_{n=1}^{N} \mathcal{N}(t_n \mid \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_n), \beta^{-1})$ (dropping the explicit x from the conditioning).

  6. Maximum Likelihood and Least Squares (2/2) • Maximization of the likelihood function under a conditional Gaussian noise distribution for a linear model is equivalent to minimizing a sum-of-squares error function. • Taking the gradient of the log likelihood with respect to w and setting it to zero gives $\mathbf{w}_{\mathrm{ML}} = (\boldsymbol{\Phi}^T\boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^T\mathbf{t} = \boldsymbol{\Phi}^{\dagger}\mathbf{t}$ (the normal equations), where $\boldsymbol{\Phi}$ is the NxM design matrix with elements $\Phi_{nj} = \phi_j(\mathbf{x}_n)$ and $\boldsymbol{\Phi}^{\dagger}$ is its Moore-Penrose pseudo-inverse. A sketch of this computation follows.
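
As a sketch, the normal-equation solution can be computed directly with the pseudo-inverse; the helper name fit_ml is mine, not from the slides:

```python
import numpy as np

def fit_ml(Phi, t):
    """Maximum likelihood weights: w_ML = (Phi^T Phi)^{-1} Phi^T t = pinv(Phi) @ t.

    Phi is the NxM design matrix with Phi[n, j] = phi_j(x_n); t is the length-N
    target vector. np.linalg.pinv computes the Moore-Penrose pseudo-inverse,
    which is numerically safer than forming the inverse explicitly."""
    return np.linalg.pinv(Phi) @ t
```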

  7. Bias and Precision Parameter by ML • Other solutions follow from setting the corresponding derivatives of the log likelihood to zero. • Bias maximizing the log likelihood: $w_0 = \bar{t} - \sum_{j=1}^{M-1} w_j \bar{\phi}_j$, where $\bar{t} = \frac{1}{N}\sum_n t_n$ and $\bar{\phi}_j = \frac{1}{N}\sum_n \phi_j(\mathbf{x}_n)$. • The bias compensates for the difference between the average (over the training set) of the target values and the weighted sum of the averages of the basis function values. • Noise precision parameter maximizing the log likelihood: $\frac{1}{\beta_{\mathrm{ML}}} = \frac{1}{N}\sum_{n=1}^{N}\left(t_n - \mathbf{w}_{\mathrm{ML}}^T\boldsymbol{\phi}(\mathbf{x}_n)\right)^2$, the residual variance of the targets around the regression function.

  8. Geometry of Least Squares • If the number M of basis functions is smaller than the number N of data points, then the M vectors $\varphi_j$ will span a linear subspace S of dimensionality M. • $\varphi_j$: the jth column of the design matrix $\boldsymbol{\Phi}$, an N-dimensional vector with elements $\phi_j(\mathbf{x}_n)$. • $\mathbf{y} = \boldsymbol{\Phi}\mathbf{w}$: a linear combination of the $\varphi_j$. • The least-squares solution for w corresponds to the choice of y that lies in the subspace S and is closest to t, i.e. the orthogonal projection of t onto S.

  9. Sequential Learning • On-line learning: data points are considered one at a time and the model is updated after each presentation. • Technique of stochastic gradient descent (or sequential gradient descent): $\mathbf{w}^{(\tau+1)} = \mathbf{w}^{(\tau)} - \eta \nabla E_n$, where $\tau$ is the iteration number and $\eta$ is the learning rate. • For the sum-of-squares error function this becomes $\mathbf{w}^{(\tau+1)} = \mathbf{w}^{(\tau)} + \eta\left(t_n - \mathbf{w}^{(\tau)T}\boldsymbol{\phi}(\mathbf{x}_n)\right)\boldsymbol{\phi}(\mathbf{x}_n)$ (the least-mean-squares or LMS algorithm). See the sketch below.
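
A minimal sketch of the LMS updates described above; the function name, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

def lms_fit(Phi, t, eta=0.01, n_epochs=50):
    """Sequential learning with the LMS update
        w <- w + eta * (t_n - w^T phi_n) * phi_n,
    applied one data point at a time (stochastic gradient descent on the
    sum-of-squares error)."""
    N, M = Phi.shape
    w = np.zeros(M)
    for _ in range(n_epochs):
        for n in np.random.permutation(N):   # visit data points in random order
            phi_n = Phi[n]
            w += eta * (t[n] - w @ phi_n) * phi_n
    return w
```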

  10. Regularized Least Squares • Regularized least squares adds a regularization term $\lambda E_W(\mathbf{w})$ to the data error $E_D(\mathbf{w})$ to control over-fitting, with the weight-decay regularizer $E_W(\mathbf{w}) = \frac{1}{2}\mathbf{w}^T\mathbf{w}$. • Total error function: $\frac{1}{2}\sum_{n=1}^{N}\left(t_n - \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_n)\right)^2 + \frac{\lambda}{2}\mathbf{w}^T\mathbf{w}$. • Closed-form solution: $\mathbf{w} = (\lambda\mathbf{I} + \boldsymbol{\Phi}^T\boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^T\mathbf{t}$ (see the sketch below). • A more general regularizer: $\frac{\lambda}{2}\sum_{j=1}^{M}|w_j|^q$, where q = 2 recovers the quadratic regularizer above.
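
A sketch of the closed-form regularized solution; the helper name fit_regularized and the use of np.linalg.solve instead of an explicit matrix inverse are my choices:

```python
import numpy as np

def fit_regularized(Phi, t, lam):
    """Regularized (quadratic / ridge) least squares:
        w = (lambda * I + Phi^T Phi)^{-1} Phi^T t."""
    M = Phi.shape[1]
    A = lam * np.eye(M) + Phi.T @ Phi
    return np.linalg.solve(A, Phi.T @ t)
```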

  11. General Regularizer • The case q = 1 in the general regularizer is known as the 'lasso' in the statistical literature. • If $\lambda$ is sufficiently large, some of the coefficients $w_j$ are driven to zero, giving a sparse model: the corresponding basis functions play no role. • Equivalent view: minimizing the unregularized sum-of-squares error subject to the constraint $\sum_{j=1}^{M}|w_j|^q \leq \eta$. • [Figure: contours of the regularization term for different values of q; the q = 1 constraint region has corners on the axes, which is why the lasso gives a sparse solution.]
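
The slides give no algorithm for the lasso case; as one common choice, here is a hedged coordinate-descent sketch for the q = 1 error above (the function names and iteration count are illustrative, and the bias term is regularized along with the rest for brevity):

```python
import numpy as np

def soft_threshold(c, tau):
    """Soft-thresholding operator: sign(c) * max(|c| - tau, 0)."""
    return np.sign(c) * max(abs(c) - tau, 0.0)

def lasso_coordinate_descent(Phi, t, lam, n_iters=200):
    """Coordinate descent for the q = 1 (lasso) regularized error
        0.5 * sum_n (t_n - w^T phi_n)^2 + (lam / 2) * sum_j |w_j|."""
    N, M = Phi.shape
    w = np.zeros(M)
    a = (Phi ** 2).sum(axis=0)                       # a_j = sum_n Phi_nj^2
    for _ in range(n_iters):
        for j in range(M):
            r = t - Phi @ w + Phi[:, j] * w[j]       # residual excluding feature j
            c = Phi[:, j] @ r
            w[j] = soft_threshold(c, lam / 2.0) / a[j]
    return w
```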

  12. Multiple Outputs • For K > 1 target variables, two approaches: • 1. Introduce a different set of basis functions for each component of t. • 2. Use the same set of basis functions to model all of the components of the target vector: $\mathbf{y}(\mathbf{x}, \mathbf{W}) = \mathbf{W}^T\boldsymbol{\phi}(\mathbf{x})$ (W: MxK matrix of parameters). • Maximum likelihood solution: $\mathbf{W}_{\mathrm{ML}} = (\boldsymbol{\Phi}^T\boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^T\mathbf{T}$. • For each target variable $t_k$: $\mathbf{w}_k = (\boldsymbol{\Phi}^T\boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^T\mathbf{t}_k = \boldsymbol{\Phi}^{\dagger}\mathbf{t}_k$, where $\boldsymbol{\Phi}^{\dagger}$ is the pseudo-inverse of $\boldsymbol{\Phi}$; the solution decouples between the different target variables.
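
A sketch for the shared-basis multiple-output case, assuming T is an N x K matrix whose rows are the target vectors:

```python
import numpy as np

def fit_ml_multi(Phi, T):
    """Maximum likelihood solution with K > 1 targets and one shared set of basis
    functions: W_ML = pinv(Phi) @ T (an M x K parameter matrix).
    The solution decouples column by column: one independent regression per target."""
    return np.linalg.pinv(Phi) @ T
```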

  13. The Bias-Variance Decomposition (1/4) • Frequentist viewpoint of the model complexity issue: the bias-variance trade-off. • Expected squared loss: $\mathbb{E}[L] = \int \left(y(\mathbf{x}) - h(\mathbf{x})\right)^2 p(\mathbf{x})\,d\mathbf{x} + \iint \left(h(\mathbf{x}) - t\right)^2 p(\mathbf{x}, t)\,d\mathbf{x}\,dt$, where $h(\mathbf{x}) = \mathbb{E}[t \mid \mathbf{x}]$ is the optimal prediction. • The second term arises from the intrinsic noise on the data; the first term depends on the choice of y(x) and, under a point estimate, is dependent on the particular dataset D. • Bayesian: the uncertainty in our model is expressed through a posterior distribution over w. • Frequentist: make a point estimate of w based on the data set D, and interpret its uncertainty through an ensemble of data sets.

  14. The Bias-Variance Decomposition (2/4) • Bias • The extent to which the average prediction over all data sets differs from the desired regression function. • Variance • The extent to which the solutions for individual data sets vary around their average. • The extent to which the function y(x; D) is sensitive to the particular choice of data set. • Expected loss = (bias)² + variance + noise

  15. The Bias-Variance Decomposition (3/4) • [Figure: fits to multiple data sets for large and small values of the regularization parameter, illustrating the bias-variance trade-off: strong regularization gives low variance but high bias, weak regularization the reverse.] • Averaging many solutions for the complex model (M = 25) is a beneficial procedure. • A weighted averaging of multiple solutions (although with respect to the posterior distribution of parameters, not with respect to multiple data sets) lies at the heart of the Bayesian approach.

  16. The Bias-Variance Decomposition (4/4) • The average prediction over L data sets: $\bar{y}(\mathbf{x}) = \frac{1}{L}\sum_{l=1}^{L} y^{(l)}(\mathbf{x})$. • Bias and variance estimated on the data points: $(\text{bias})^2 = \frac{1}{N}\sum_{n=1}^{N}\left(\bar{y}(\mathbf{x}_n) - h(\mathbf{x}_n)\right)^2$ and $\text{variance} = \frac{1}{N}\sum_{n=1}^{N}\frac{1}{L}\sum_{l=1}^{L}\left(y^{(l)}(\mathbf{x}_n) - \bar{y}(\mathbf{x}_n)\right)^2$ (see the simulation sketch below). • The bias-variance decomposition is based on averages with respect to ensembles of data sets (a frequentist perspective); if we actually had multiple data sets, we would be better off combining them into a single large training set.
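
A minimal simulation sketch of these averages over an ensemble of synthetic data sets; the data-generating function sin(2*pi*x), the noise level, the Gaussian basis, and the regularization strength are all illustrative assumptions, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, lam = 100, 25, 1e-3                  # data sets, points per set, regularization
x_grid = np.linspace(0.0, 1.0, N)
h = np.sin(2 * np.pi * x_grid)             # assumed "true" regression function h(x)

def design(x, centers=np.linspace(0.0, 1.0, 9), s=0.15):
    """Gaussian basis design matrix with a dummy bias column."""
    G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * s ** 2))
    return np.column_stack([np.ones_like(x), G])

Phi = design(x_grid)
preds = []
for _ in range(L):                          # one regularized fit per synthetic data set
    t = h + rng.normal(scale=0.3, size=N)
    w = np.linalg.solve(lam * np.eye(Phi.shape[1]) + Phi.T @ Phi, Phi.T @ t)
    preds.append(Phi @ w)
preds = np.array(preds)                     # L x N array of predictions y^(l)(x_n)

y_bar = preds.mean(axis=0)                  # average prediction
bias2 = np.mean((y_bar - h) ** 2)
variance = np.mean((preds - y_bar) ** 2)
print(f"(bias)^2 = {bias2:.4f}, variance = {variance:.4f}")
```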

  17. Bayesian Linear Regression (1/2) • Conjugate prior of the likelihood: the likelihood is the exponential of a quadratic function of w, so the conjugate prior is Gaussian, $p(\mathbf{w}) = \mathcal{N}(\mathbf{w} \mid \mathbf{m}_0, \mathbf{S}_0)$. • Posterior: $p(\mathbf{w} \mid \mathbf{t}) = \mathcal{N}(\mathbf{w} \mid \mathbf{m}_N, \mathbf{S}_N)$, with $\mathbf{m}_N = \mathbf{S}_N\left(\mathbf{S}_0^{-1}\mathbf{m}_0 + \beta\boldsymbol{\Phi}^T\mathbf{t}\right)$ and $\mathbf{S}_N^{-1} = \mathbf{S}_0^{-1} + \beta\boldsymbol{\Phi}^T\boldsymbol{\Phi}$.

  18. Bayesian Linear Regression (2/2) • Consider a zero-mean isotropic Gaussian prior, $p(\mathbf{w} \mid \alpha) = \mathcal{N}(\mathbf{w} \mid \mathbf{0}, \alpha^{-1}\mathbf{I})$. • Corresponding posterior: $\mathbf{m}_N = \beta\mathbf{S}_N\boldsymbol{\Phi}^T\mathbf{t}$, $\mathbf{S}_N^{-1} = \alpha\mathbf{I} + \beta\boldsymbol{\Phi}^T\boldsymbol{\Phi}$. • Log of the posterior: $\ln p(\mathbf{w} \mid \mathbf{t}) = -\frac{\beta}{2}\sum_{n=1}^{N}\left(t_n - \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_n)\right)^2 - \frac{\alpha}{2}\mathbf{w}^T\mathbf{w} + \text{const}$, so maximizing the posterior is equivalent to regularized least squares with $\lambda = \alpha/\beta$. • Other forms of prior over the parameters (e.g. a generalized Gaussian with exponent q) correspond to the more general regularizers above.
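
A sketch of the posterior computation for the isotropic prior above; the helper name posterior is mine:

```python
import numpy as np

def posterior(Phi, t, alpha, beta):
    """Posterior over w under the prior N(w | 0, alpha^{-1} I):
        S_N^{-1} = alpha * I + beta * Phi^T Phi
        m_N      = beta * S_N @ Phi^T @ t"""
    M = Phi.shape[1]
    S_N = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)
    m_N = beta * S_N @ Phi.T @ t
    return m_N, S_N
```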

  19. Predictive Distribution (1/2) • Our real interest is in predicting t for new inputs x: $p(t \mid \mathbf{t}, \alpha, \beta) = \int p(t \mid \mathbf{w}, \beta)\,p(\mathbf{w} \mid \mathbf{t}, \alpha, \beta)\,d\mathbf{w} = \mathcal{N}\left(t \mid \mathbf{m}_N^T\boldsymbol{\phi}(\mathbf{x}), \sigma_N^2(\mathbf{x})\right)$. • Predictive variance: $\sigma_N^2(\mathbf{x}) = \frac{1}{\beta} + \boldsymbol{\phi}(\mathbf{x})^T\mathbf{S}_N\boldsymbol{\phi}(\mathbf{x})$; the first term is the noise on the data, the second is the uncertainty associated with the parameters w, which goes to 0 as N → ∞. • [Figure: mean of the Gaussian predictive distribution (red line), and predictive uncertainty (shaded region), as the number of data points increases.]
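
A sketch of the predictive mean and variance at a single new input, given m_N and S_N from the posterior sketch above; the helper name predictive is mine:

```python
import numpy as np

def predictive(phi_x, m_N, S_N, beta):
    """Gaussian predictive distribution at a new input with basis vector phi_x:
        mean     = m_N^T phi(x)
        variance = 1/beta + phi(x)^T S_N phi(x)   (noise + parameter uncertainty)"""
    mean = m_N @ phi_x
    var = 1.0 / beta + phi_x @ S_N @ phi_x
    return mean, var
```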

  20. Predictive Distribution (2/2) • Instead of plotting only the predictive mean and variance, we can draw samples from the posterior distribution over w and plot the corresponding functions y(x, w). • [Figure: curves y(x, w) for parameter vectors w sampled from the posterior, as the number of data points increases.]

  21. Equivalent Kernel • Mean of the predictive distribution at a point x: $y(\mathbf{x}, \mathbf{m}_N) = \mathbf{m}_N^T\boldsymbol{\phi}(\mathbf{x}) = \beta\boldsymbol{\phi}(\mathbf{x})^T\mathbf{S}_N\boldsymbol{\Phi}^T\mathbf{t} = \sum_{n=1}^{N} k(\mathbf{x}, \mathbf{x}_n)\,t_n$, a linear combination of the training targets. • $k(\mathbf{x}, \mathbf{x}') = \beta\boldsymbol{\phi}(\mathbf{x})^T\mathbf{S}_N\boldsymbol{\phi}(\mathbf{x}')$: the smoother matrix or equivalent kernel. • The kernel is an inner product of nonlinear functions: $k(\mathbf{x}, \mathbf{z}) = \boldsymbol{\psi}(\mathbf{x})^T\boldsymbol{\psi}(\mathbf{z})$ with $\boldsymbol{\psi}(\mathbf{x}) = \beta^{1/2}\mathbf{S}_N^{1/2}\boldsymbol{\phi}(\mathbf{x})$.
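
A sketch of evaluating the equivalent kernel weights k(x, x_n) against all training points; the helper name equivalent_kernel is mine:

```python
import numpy as np

def equivalent_kernel(phi_x, Phi, S_N, beta):
    """Equivalent kernel weights k(x, x_n) = beta * phi(x)^T S_N phi(x_n) for every
    training point x_n; the predictive mean is then the weighted sum (k @ t) of
    the training targets."""
    return beta * phi_x @ S_N @ Phi.T    # length-N vector of kernel weights
```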
