Simple Linear Regression AMS 572 11/29/2010
Outline • Brief History and Motivation – Zhen Gong • Simple Linear Regression Model – Wenxiang Liu • Ordinary Least Squares Method – Ziyan Lou • Goodness of Fit of LS Line – Yixing Feng • OLS Example – Lingbin Jin • Statistical Inference on Parameters – Letan Lin • Statistical Inference Example – Emily Vo • Regression Diagnostics – Yang Liu • Correlation Analysis – Andrew Candela • Implementation in SAS – Joseph Chisari
Brief History and Introduction Legendre published the earliest form of regression, the method of least squares, in 1805; Gauss published the same method in 1809. Francis Galton extended the method in the 19th century to describe a biological phenomenon, and Karl Pearson and Udny Yule extended it to a more general statistical context around the turn of the 20th century.
Motivation for Regression Analysis • Regression analysis is a statistical methodology for estimating the relationship of a response variable to a set of predictor variables. • When there is just one predictor variable, we use simple linear regression; when there are two or more predictor variables, we use multiple linear regression. • Given a newly observed predictor value x, we predict the response Y based on X = x.
Motivation for Regression Analysis 2010 Milan: Horsepower at 6000 rpm: 175; Highway gasoline consumption: 0.0326 gallon per mile. 2010 Camry: Horsepower at 6000 rpm: 169; Highway gasoline consumption: 0.03125 gallon per mile. 2010 Fusion: Horsepower at 6000 rpm: 263; Highway gasoline consumption: ? Response variable (Y): Highway gasoline consumption. Predictor variable (X): Horsepower at 6000 rpm.
Simple Linear Regression Model • A summary of the relationship between a dependent variable (or response variable) Y and an independent variable (or covariate variable) X. • Y is assumed to be a random variable while, even if X is a random variable, we condition on it (assume it is fixed). Essentially, we are interested in knowing the behavior of Y given we know X = x.
Good Model • Regression models attempt to minimize the distance, measured vertically, between each observation point and the model line (or curve). • The length of that line segment is called the residual, modeling error, or simply error. • Requiring only that the negative and positive errors cancel out (zero overall error) is not enough: many lines satisfy this criterion.
Probabilistic Model • In simple linear regression, the population regression line is given by E(Y) = β0 + β1x. • The actual values of Y are assumed to be the sum of the mean value, E(Y), and a random error term ∊: Y = E(Y) + ∊ = β0 + β1x + ∊. • At any given value of x, the dependent variable Y ~ N(β0 + β1x, σ2).
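The probabilistic model can be illustrated with a short simulation. The parameter values below (b0 = 2.0, b1 = 0.5, sigma = 1.0) are made-up illustrative assumptions, not values from the slides: at a fixed x, repeated draws of Y should average out to E(Y) = β0 + β1x.

```python
import random

# Sketch of the model Y = beta0 + beta1*x + eps, with eps ~ N(0, sigma^2).
# Parameter values are illustrative only.
random.seed(0)
b0, b1, sigma = 2.0, 0.5, 1.0
x = 4.0

# Draw many Y values at the same x; their average should be close to
# E(Y) = b0 + b1*x = 4.0.
ys = [b0 + b1 * x + random.gauss(0, sigma) for _ in range(10_000)]
mean_y = sum(ys) / len(ys)
```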
Least Squares (LS) Fit Boiling Point of Water in the Alps
Least Squares (LS) Fit Find the line that represents the "best" linear relationship:
Least Squares (LS) Fit • Problem: the data do not fall exactly on a line. • Find the line that minimizes the sum of squared vertical deviations: we are looking for the values β0, β1 that minimize Q = Σ [yi − (β0 + β1xi)]².
Least Squares (LS) Fit • To find the parameters that minimize the sum of squared differences, take the partial derivative of Q with respect to each parameter and set it equal to zero: ∂Q/∂β0 = −2 Σ [yi − (β0 + β1xi)] = 0 and ∂Q/∂β1 = −2 Σ xi[yi − (β0 + β1xi)] = 0.
Least Squares (LS) Fit Solving the two equations gives β̂1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² and β̂0 = ȳ − β̂1x̄.
Least Squares (LS) Fit To simplify, we introduce Sxx = Σ(xi − x̄)², Syy = Σ(yi − ȳ)², and Sxy = Σ(xi − x̄)(yi − ȳ), so that β̂1 = Sxy/Sxx and β̂0 = ȳ − β̂1x̄. • The resulting equation ŷ = β̂0 + β̂1x is known as the least squares line, which is an estimate of the true regression line.
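The least squares formulas can be sketched in a few lines of plain Python; the data in the usage line are small made-up numbers, not the boiling-point data from the slides.

```python
def ls_fit(x, y):
    """Return (b0, b1), the intercept and slope of the LS line,
    computed from Sxx and Sxy as in the formulas above."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx           # slope: beta1_hat = Sxy / Sxx
    b0 = ybar - b1 * xbar    # intercept: beta0_hat = ybar - beta1_hat * xbar
    return b0, b1

# Illustrative data: points lying exactly on y = 2x recover b0 = 0, b1 = 2.
b0, b1 = ls_fit([1, 2, 3, 4], [2, 4, 6, 8])
```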
Goodness of Fit of the LS Line The fitted values are ŷi = β̂0 + β̂1xi, and the residuals are ei = yi − ŷi. The residuals are used to evaluate the goodness of fit of the LS line.
Goodness of Fit of the LS Line The error sum of squares: SSE = Σ ei² = Σ(yi − ŷi)². The total sum of squares: SST = Σ(yi − ȳ)². The regression sum of squares: SSR = Σ(ŷi − ȳ)². These satisfy the identity SST = SSR + SSE.
Goodness of Fit of the LS Line • The coefficient of determination r² = SSR/SST = 1 − SSE/SST is always between 0 and 1. • The sample correlation coefficient between X and Y is r = Sxy/√(Sxx·Syy). For simple linear regression, r² equals the coefficient of determination.
Estimation of the variance The variance σ² measures the scatter of the Yi around their means E(Yi) = β0 + β1xi. An unbiased estimate of σ² is given by s² = SSE/(n − 2). This estimate of σ² has n − 2 degrees of freedom.
Implementing the OLS method on Problem 10.4 The time between eruptions of the Old Faithful geyser in Yellowstone National Park is random but is related to the duration of the last eruption. The table below shows these times for 21 consecutive eruptions.
Implementing the OLS method on Problem 10.4 A scatter plot of NEXT vs. LAST
Implementing the OLS method on Problem 10.4 For example, when x = 3 (LAST = 3 minutes), the fitted line predicts y ≈ 60 (NEXT ≈ 60 minutes). We could say that LAST is a good predictor of NEXT.
Statistical Inference on β0 and β1 • Final result: β̂0 and β̂1 are normally distributed, with E(β̂1) = β1, Var(β̂1) = σ²/Sxx, E(β̂0) = β0, and Var(β̂0) = σ²(1/n + x̄²/Sxx).
Statistical Inference on β0 and β1 • Derivation: Treat the xi's as fixed and use the model Yi = β0 + β1xi + ∊i with ∊i i.i.d. N(0, σ²). • Writing β̂1 = Sxy/Sxx = Σ ciYi with ci = (xi − x̄)/Sxx shows that β̂1 is a linear combination of independent normal random variables, hence normal. • Since Σ ci = 0 and Σ cixi = 1, we get E(β̂1) = β1 and Var(β̂1) = σ² Σ ci² = σ²/Sxx. • Similarly, β̂0 = ȳ − β̂1x̄ is normal with E(β̂0) = β0 and Var(β̂0) = σ²(1/n + x̄²/Sxx).
Statistical Inference on β0 and β1 Since σ² is unknown, we estimate it by s² and replace σ with s in the standard errors, with SE(β̂1) = s/√Sxx and SE(β̂0) = s√(1/n + x̄²/Sxx). • Pivotal Quantities (P.Q.): (β̂0 − β0)/SE(β̂0) ~ t(n−2) and (β̂1 − β1)/SE(β̂1) ~ t(n−2). • Confidence Intervals (CI's): β̂0 ± t(n−2, α/2)·SE(β̂0) and β̂1 ± t(n−2, α/2)·SE(β̂1).
Statistical Inference on β0 and β1 • Hypothesis tests: To test H0: β1 = β1^0 vs. H1: β1 ≠ β1^0, compute t = (β̂1 − β1^0)/SE(β̂1). Reject H0 at level α if |t| > t(n−2, α/2). • A useful application is to test whether there is a linear relationship between x and y: H0: β1 = 0 vs. H1: β1 ≠ 0. Reject H0 at level α if |β̂1/SE(β̂1)| > t(n−2, α/2). • One-sided alternative hypotheses can be tested using one-sided t-tests.
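The slope t-test can be sketched as follows, using plain Python lists; the data in the usage line are made-up illustration numbers, and the resulting |t| would be compared against t(n−2, α/2) from a t-table.

```python
def t_stat_slope(x, y):
    """t statistic for H0: beta1 = 0, with df = n - 2.
    Returns (t, df); compare |t| against t(df, alpha/2) from a t-table."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    s = (sse / (n - 2)) ** 0.5      # estimate of sigma
    se_b1 = s / sxx ** 0.5          # SE(beta1_hat) = s / sqrt(Sxx)
    return b1 / se_b1, n - 2

# Illustrative data with a strong linear trend.
t, df = t_stat_slope([1, 2, 3, 4, 5], [2, 4, 6, 8, 11])
```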
Analysis of Variance (ANOVA) • Mean Square: a sum of squares divided by its degrees of freedom. Here MSR = SSR/1 and MSE = SSE/(n − 2) = s².
Analysis of Variance (ANOVA) • ANOVA Table:
Source      | SS  | d.f.  | MS                | F
Regression  | SSR | 1     | MSR = SSR/1       | F = MSR/MSE
Error       | SSE | n − 2 | MSE = SSE/(n − 2) |
Total       | SST | n − 1 |                   |
Statistical Inference Example – Testing for Linear Relationship • Problem 10.4 At α = 0.05, is there a linear trend between the time to the NEXT eruption and the duration of the LAST eruption? H0: β1 = 0 vs. H1: β1 ≠ 0. Reject H0 if |t| > t(n−2, α/2), where t = β̂1/SE(β̂1).
Statistical Inference – Hypothesis Testing Solution: With n = 21 observations, the critical value is t(19, .025) = 2.093. The observed |t| exceeds this value, so we reject H0 and therefore conclude that there is a linear relationship between NEXT and LAST.
Statistical Inference Example – Confidence and Prediction Intervals • Problem 10.11 from Tamhane & Dunlop, Statistics and Data Analysis. 10.11(a) Calculate a 95% PI for the time to the next eruption if the last eruption lasted 3 minutes.
Problem 10.11 – Prediction Interval Solution: The formula for a 100(1 − α)% PI for a future observation Y at x = x0 is ŷ ± t(n−2, α/2) · s · √(1 + 1/n + (x0 − x̄)²/Sxx), where ŷ = β̂0 + β̂1x0.
Problem 10.11 – Confidence Interval 10.11(b) Calculate a 95% CI for the mean time to the next eruption for a last eruption lasting 3 minutes. Compare this confidence interval with the PI obtained in (a).
Problem 10.11 – Confidence Interval Solution: The formula for a 100(1 − α)% CI for the mean E(Y) at x = x0 is ŷ ± t(n−2, α/2) · s · √(1/n + (x0 − x̄)²/Sxx), where ŷ = β̂0 + β̂1x0. The 95% CI is [57.510, 63.257]. The CI is shorter than the PI, because the PI must also account for the variability of a single future observation around its mean.
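The CI and PI formulas can be sketched together; the data below are made-up illustration numbers (not the textbook's Old Faithful data), and the critical value t_crit = t(n−2, α/2) is supplied from a t-table rather than computed.

```python
def intervals_at(x, y, x0, t_crit):
    """Return (yhat, ci_half, pi_half): the fitted value at x0 and the
    half-widths of the CI for the mean and the PI for a new observation.
    t_crit is t(n-2, alpha/2), taken from a t-table."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    s = (sse / (n - 2)) ** 0.5
    yhat = b0 + b1 * x0
    # CI half-width: no extra "1 +" term; PI half-width: includes it.
    ci_half = t_crit * s * (1 / n + (x0 - xbar) ** 2 / sxx) ** 0.5
    pi_half = t_crit * s * (1 + 1 / n + (x0 - xbar) ** 2 / sxx) ** 0.5
    return yhat, ci_half, pi_half

yhat, ci_half, pi_half = intervals_at([1, 2, 3, 4, 5], [2, 4, 6, 8, 11],
                                      x0=3, t_crit=2.0)
```

As the slides note, the PI is always wider than the CI at the same x0.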
Regression Diagnostics • Checking the Model Assumptions: • E(Y) is a linear function of x • Var(∊) = σ² is the same for all x • The errors ∊i are normally distributed • The errors are independent (for time series data) • Checking for Outliers and Influential Observations
Checking the Model Assumptions • Residuals: ei = yi − ŷi • The residuals can be viewed as the "estimates" of the random errors ∊i
Checking for Linearity • If the regression of y on x is linear, then the plot of ei vs. xi should exhibit random scatter around zero.
[Figures: Checking for Linearity, Tire Wear Data — scatter plot of y vs. x and plot of the residuals e vs. x]
Checking for Linearity • Data Transformation: if the residual plot shows a systematic curved pattern, transforming x and/or y (e.g., log, square root, or reciprocal) may make the relationship approximately linear.
Checking for Constant Variance • If the constant variance assumption is correct, the dispersion of the ei is approximately constant with respect to the fitted values ŷi.
[Figure: Checking for Constant Variance — plot of the residuals e vs. fitted values, example from textbook Problem 10.21]
Checking for Normality • We can use the residuals to make a normal plot. [Figure: normal plot of the residuals, example from textbook Problem 10.21]
Checking for Outliers • Definition: An outlier is an observation that does not follow the general pattern of the relationship between y and x. • A large residual indicates an outlier!!
Checking for Influential Observations • An observation can be influential because it has an extreme x-value, an extreme y-value, or both. • A large leverage hii indicates an influential observation!! A common rule flags observations with hii > 2(k + 1)/n, where k = # of predictors.
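For simple linear regression (k = 1) the leverages have a closed form, hii = 1/n + (xi − x̄)²/Sxx; a short sketch, with made-up x values:

```python
def leverages(x):
    """Leverages h_ii for simple linear regression (k = 1 predictor):
    h_ii = 1/n + (x_i - xbar)^2 / Sxx. The h_ii always sum to k + 1 = 2."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return [1 / n + (xi - xbar) ** 2 / sxx for xi in x]

x = [1, 2, 3, 4, 5]
h = leverages(x)
k = 1
# Flag observations whose leverage exceeds the 2(k + 1)/n rule of thumb.
influential = [i for i, hi in enumerate(h) if hi > 2 * (k + 1) / len(x)]
```

Extreme x-values (here the endpoints 1 and 5) receive the largest leverages, matching the slide's point that an extreme x-value makes an observation influential.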