CIS 2033, based on Dekking et al., A Modern Introduction to Probability and Statistics, 2007. Slides by Kier Heilman. Instructor: Longin Jan Latecki.

C22: The Method of Least Squares
22.1 – Least Squares

Consider the random variables

Yᵢ = α + βxᵢ + Uᵢ for i = 1, 2, …, n,

where the random variables U₁, U₂, …, Uₙ have zero expectation and variance σ².

Method of least squares: choose values for α and β such that

S(α, β) = Σᵢ₌₁ⁿ (yᵢ − α − βxᵢ)²

is minimal.
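To make the criterion concrete, here is a minimal sketch (not from the slides; the data points are made up for illustration) that evaluates S(α, β) on a small data set:

```python
import numpy as np

# Illustrative data, roughly following y = 2x with some noise.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 4.1, 5.8, 8.2, 9.9])

def S(alpha, beta):
    """Sum of squared deviations of each y_i from the line alpha + beta * x_i."""
    return np.sum((y - alpha - beta * x) ** 2)

print(S(0.0, 2.0))  # close to the line that generated the data: small
print(S(0.0, 0.0))  # far from it: much larger
```

The least squares method picks the pair (α, β) for which this sum is smallest.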
22.1 – Regression

[Figure] The observed value yᵢ corresponding to xᵢ, and the value α + βxᵢ on the regression line y = α + βx.
22.1 – Estimation
• Setting the partial derivatives of S(α, β) with respect to α and β to zero gives the following two simultaneous equations for estimating α and β (all sums run over i = 1, …, n):

Σyᵢ = nα + β Σxᵢ
Σxᵢyᵢ = α Σxᵢ + β Σxᵢ²
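As a sketch (reusing the illustrative data above, which is an assumption, not from the slides), these two equations form a 2×2 linear system that can be solved numerically:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 4.1, 5.8, 8.2, 9.9])
n = len(x)

# The normal equations as a linear system A @ [alpha, beta] = b.
A = np.array([[n,         np.sum(x)],
              [np.sum(x), np.sum(x**2)]])
b = np.array([np.sum(y), np.sum(x * y)])

alpha_hat, beta_hat = np.linalg.solve(A, b)
print(alpha_hat, beta_hat)
```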
22.1 – Estimation
• Solving these equations for α and β yields the least squares estimates:

β̂ = (n Σxᵢyᵢ − (Σxᵢ)(Σyᵢ)) / (n Σxᵢ² − (Σxᵢ)²)  (slope)
α̂ = ȳ − β̂x̄  (intercept)

where x̄ and ȳ denote the averages of the xᵢ and the yᵢ.
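These closed-form expressions translate directly into code; the following sketch (same illustrative data as before) is equivalent to solving the linear system above:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 4.1, 5.8, 8.2, 9.9])
n = len(x)

# Slope: (n * sum(x_i y_i) - sum(x_i) * sum(y_i)) / (n * sum(x_i^2) - (sum(x_i))^2)
beta_hat = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)

# Intercept: y-bar - beta_hat * x-bar
alpha_hat = np.mean(y) - beta_hat * np.mean(x)

print(alpha_hat, beta_hat)  # close to 0 and 2 for this data
```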
22.1 – Least Squares Estimators are Unbiased
• The least squares estimators α̂ and β̂ are unbiased: E[α̂] = α and E[β̂] = β.
• For the simple linear regression model, the random variable

σ̂² = (1 / (n − 2)) Σᵢ₌₁ⁿ (Yᵢ − α̂ − β̂xᵢ)²

is an unbiased estimator for σ².
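A short sketch of this variance estimate, continuing the illustrative data above; note the divisor n − 2, which accounts for the two parameters estimated from the same data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 4.1, 5.8, 8.2, 9.9])
n = len(x)

beta_hat = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
alpha_hat = np.mean(y) - beta_hat * np.mean(x)

# Divide by n - 2, not n: two parameters (alpha, beta) were estimated.
sigma2_hat = np.sum((y - alpha_hat - beta_hat * x) ** 2) / (n - 2)
print(sigma2_hat)
```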
22.2 – Residuals
• Residual: the vertical distance between the i-th point and the estimated regression line:

rᵢ = yᵢ − α̂ − β̂xᵢ

• The sum of the residuals is zero: r₁ + r₂ + · · · + rₙ = 0.
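A quick sketch (same illustrative data) computing the residuals and checking the sum-to-zero property:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 4.1, 5.8, 8.2, 9.9])
n = len(x)

beta_hat = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
alpha_hat = np.mean(y) - beta_hat * np.mean(x)

residuals = y - alpha_hat - beta_hat * x   # r_i = y_i - alpha_hat - beta_hat * x_i
print(np.isclose(residuals.sum(), 0.0))    # True: residuals sum to zero
```

The property holds exactly (up to floating point) whenever the fitted line includes an intercept.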
22.2 – Heteroscedasticity
• Homoscedasticity: the assumption that the Uᵢ (and therefore the Yᵢ) all have the same variance σ².
• Heteroscedasticity: this assumption fails; for instance, Yᵢ with a large expected value have a larger variance than those with small expected values.
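To visualize the distinction, here is a small simulation sketch (entirely made up for illustration; the line 2 + 3x and the noise levels are assumptions) generating both kinds of data from the same regression line:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
x = np.linspace(1.0, 10.0, 200)

# Homoscedastic: every U_i has the same standard deviation.
y_homo = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=x.size)

# Heteroscedastic: the noise standard deviation grows with E[Y_i].
y_hetero = 2.0 + 3.0 * x + rng.normal(0.0, 0.5 * x)
```

Plotting y_hetero against x would show the points fanning out as x grows, the typical visual signature of heteroscedasticity.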
22.3 – Relation with Maximum Likelihood
• What are the maximum likelihood estimates for α and β?
• To apply the method of least squares, no assumption is needed about the type of distribution of the Uᵢ. If the type of distribution of the Uᵢ is known, the maximum likelihood principle can be applied instead. Consider, for instance, the classical situation where the Uᵢ are independent with an N(0, σ²) distribution. Then Yᵢ has an N(α + βxᵢ, σ²) distribution, with probability density function

f(y) = (1 / (σ√(2π))) · e^(−(y − α − βxᵢ)² / (2σ²)).
22.3 – Maximum Likelihood
• For fixed σ > 0, the loglikelihood ℓ(α, β, σ) attains its maximum when

Σᵢ₌₁ⁿ (yᵢ − α − βxᵢ)²

is minimal. Hence, when the Uᵢ are independent with an N(0, σ²) distribution, the maximum likelihood principle and the least squares method yield the same estimators for α and β.
• The maximum likelihood estimator for σ² is:

σ̂² = (1/n) Σᵢ₌₁ⁿ (yᵢ − α̂ − β̂xᵢ)²
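As a final sketch (same illustrative data as before), comparing the maximum likelihood estimate of σ², which divides by n, with the unbiased estimate from Section 22.1, which divides by n − 2:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 4.1, 5.8, 8.2, 9.9])
n = len(x)

beta_hat = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
alpha_hat = np.mean(y) - beta_hat * np.mean(x)
rss = np.sum((y - alpha_hat - beta_hat * x) ** 2)  # residual sum of squares

sigma2_mle = rss / n             # maximum likelihood estimator (biased)
sigma2_unbiased = rss / (n - 2)  # unbiased estimator from Section 22.1
print(sigma2_mle, sigma2_unbiased)
```

The two agree for large n; the MLE is slightly smaller because it does not correct for the two estimated parameters.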