Welcome. Seventh Lecture for MATH 3330 M, Professor G.E. Denzel
Agenda • Begin discussion of multi-predictor models
Learning Objectives • How to find the slope and intercept that minimize the error sum of squares (the ‘least squares’ estimators); a code sketch follows below. • Properties of these estimators • Linear functions of the data • Caution about what conclusions are possible without further knowledge of the population
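The slides themselves contain no code, but the first objective lends itself to a small worked example. Here is a minimal Python sketch (numpy assumed, simulated data) of finding the intercept and slope that minimize the error sum of squares:

```python
import numpy as np

# Simulated data for illustration: one predictor x and a response y.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=30)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=30)

# Least squares: choose b0, b1 to minimize SSE = sum((y - b0 - b1*x)^2).
# Build a design matrix with a column of ones for the intercept.
X = np.column_stack([np.ones_like(x), x])
coef, sse, _, _ = np.linalg.lstsq(X, y, rcond=None)
b0, b1 = coef
print(f"intercept = {b0:.3f}, slope = {b1:.3f}, SSE = {sse[0]:.3f}")
```

The same design-matrix construction extends directly to the multi-predictor models this lecture introduces: each additional predictor just adds a column to X.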
Sums of Squares • SSE and SSTOT (= SSY) play the same role as in one-predictor models • SSTOT = SSE + (SSTOT - SSE) = SSE + SSR • SSE now has N-k-1 degrees of freedom (df), where k = number of predictors in the model • SSR now has k df • F* = MSR/MSE = (SSR/k)/(SSE/(N-k-1)) will again have an F distribution under H0: all predictors have coefficient 0, with df = k for the numerator and N-k-1 for the denominator • We reject H0 for large values of F* • The alternative hypothesis is that AT LEAST ONE OF THE COEFFICIENTS IS NON-ZERO
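To make the decomposition and the F test concrete, here is a hedged Python sketch (numpy and scipy assumed, simulated data; in the lecture these quantities are read off SAS output):

```python
import numpy as np
from scipy import stats

# Simulated data: N observations, k predictors, matching the slide's notation.
rng = np.random.default_rng(1)
N, k = 51, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, k))])
y = 1.0 + 0.8 * X[:, 1] + rng.normal(size=N)

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
sse = np.sum((y - fitted) ** 2)        # error sum of squares, df = N - k - 1
sstot = np.sum((y - y.mean()) ** 2)    # total sum of squares (SSY)
ssr = sstot - sse                      # regression sum of squares, df = k

msr = ssr / k
mse = sse / (N - k - 1)
f_star = msr / mse
p_value = stats.f.sf(f_star, k, N - k - 1)  # upper tail: reject H0 for large F*
print(f"F* = {f_star:.2f}, p = {p_value:.4f}")
```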
Example using Anscombe data • We will step through the process of fitting the model after the data has been put into a SAS workspace. • First, here is part of the data, along with a description of the variables. Note that these data do not really represent a random sample of 51 observations, except perhaps in the sense of one year’s data sampled from many years. However, we can still fit models as long as we think carefully about what hypothetical population we are making inferences about.
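The slide shows a screenshot of the data preview. As an assumed equivalent of loading the data into a workspace, here is a Python sketch using pandas; the file name anscombe.csv and the column list are hypothetical stand-ins for what the slide displays:

```python
import pandas as pd

# Hypothetical file name; the lecture loads the data into a SAS workspace instead.
df = pd.read_csv("anscombe.csv")

# Peek at a few rows and the variable list, mirroring the slide's data preview.
print(df.head())
print(df.columns.tolist())  # expected to include: spend, income, prop18, propurban
```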
Here is the input menu after selecting spend as the ‘Y’ variable and income, prop18, and propurban as predictors:
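The menu screenshot cannot be reproduced here. As an assumed code equivalent of those menu selections, the same model can be fit in Python with statsmodels' formula interface (a sketch, not the lecture's actual SAS steps):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("anscombe.csv")  # hypothetical file name, as above

# Regress spend on the three predictors, matching the menu selections.
model = smf.ols("spend ~ income + prop18 + propurban", data=df).fit()
print(model.summary())  # reports F* and its p-value with k = 3 and N - k - 1 = 47 df
```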
The next slide shows what the dataset looks like with the residual and predicted variables added to the data (using default names; they can be changed).
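To mirror the columns SAS appends, one can add predicted values and residuals to the dataframe; the column names below are illustrative choices, not SAS's defaults:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("anscombe.csv")  # hypothetical file name, as above
model = smf.ols("spend ~ income + prop18 + propurban", data=df).fit()

# Append fitted values and residuals; names here are illustrative, not SAS defaults.
df["predicted"] = model.fittedvalues
df["residual"] = model.resid  # observed spend minus predicted spend
print(df.head())
```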