Stats 760: Lecture 2 Linear Models
Agenda
• R formulation
• Matrix formulation
• Least squares fit
• Numerical details – QR decomposition
• R parameterisations
  – Treatment
  – Sum
  – Helmert
R formulation
• Regression model: y ~ x1 + x2 + x3
• Anova model: y ~ A + B (A, B factors)
• Model with both factors and continuous variables: y ~ A*B*x1 + A*B*x2
What do these mean? How do we interpret the output?
Regression model
Mean of observation = b0 + b1 x1 + b2 x2 + b3 x3
Estimate the b's by least squares, i.e. choose the b's to minimize
  Σ_i (y_i - b0 - b1 x_i1 - b2 x_i2 - b3 x_i3)^2
Matrix formulation
Arrange the data into an n x p matrix X (first column all 1's, one column per variable) and a response vector y. Then minimise
  ||y - Xb||^2 = (y - Xb)^T (y - Xb)
Normal equations
The minimising b satisfies the normal equations
  X^T X b = X^T y
Proof: writing b̂ for the solution of the normal equations, for any b
  ||y - Xb||^2 = ||y - Xb̂||^2 + (b̂ - b)^T X^T X (b̂ - b)
The second term is non-negative, and zero when b = b̂, so b̂ minimises the sum of squares.
Solving the equations
• We could calculate the matrix X^T X directly and solve the normal equations, but this is not very accurate (subject to roundoff error). For example, when fitting polynomials, this method breaks down for polynomials of high degree.
• It is better to use the "QR decomposition", which avoids calculating X^T X.
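The loss of accuracy can be seen numerically. The following sketch (invented data, not from the lecture; numpy stands in for R's internals) compares the condition number of X with that of X^T X for a polynomial fit: forming X^T X squares the condition number, which is what makes the normal equations fragile.

```python
import numpy as np

# Hypothetical example: design matrix for a degree-8 polynomial fit.
x = np.linspace(0.0, 1.0, 50)
X = np.vander(x, 9)                 # columns x^8, x^7, ..., x, 1

cond_X = np.linalg.cond(X)          # condition number of X
cond_XtX = np.linalg.cond(X.T @ X)  # condition number of X'X

# In the 2-norm, cond(X'X) = cond(X)^2: roughly twice as many
# digits of accuracy are lost when we form X'X explicitly.
print(f"cond(X)   = {cond_X:.3e}")
print(f"cond(X'X) = {cond_XtX:.3e}")
```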
Solving the normal equations
Use the "QR decomposition" X = QR, where
• X is n x p and must have full rank (no column is a linear combination of the other columns)
• Q is n x p and "orthogonal" (i.e. Q^T Q = identity matrix)
• R is p x p and upper triangular (all elements below the diagonal zero), with all diagonal elements positive, so its inverse exists
Solving using QR
  X^T X = R^T Q^T Q R = R^T R
  X^T y = R^T Q^T y
The normal equations therefore reduce to
  R^T R b = R^T Q^T y
Premultiplying by the inverse of R^T gives
  R b = Q^T y
a triangular system, easy to solve by back-substitution.
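As a sketch (with invented data; numpy's QR stands in for the routines R uses internally), the whole QR route looks like this:

```python
import numpy as np

# Invented data: n observations, p columns (intercept plus 2 covariates).
rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * rng.normal(size=n)

Q, R = np.linalg.qr(X)            # "thin" QR: Q is n x p, R is p x p
b = np.linalg.solve(R, Q.T @ y)   # solve the triangular system R b = Q'y

# Reference least squares solution, for comparison
b_check = np.linalg.lstsq(X, y, rcond=None)[0]
print(b)
```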
A refinement
• We still need Q^T y
• Solution: do the QR decomposition of the augmented matrix [X, y]; its triangular factor contains R in its first p rows and columns, and the first p entries of its last column give r = Q^T y
• Thus, solve Rb = r
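A sketch of the refinement (invented data): QR-decompose the augmented matrix [X, y] once, then read off both R and r from its triangular factor.

```python
import numpy as np

# Invented data for illustration
rng = np.random.default_rng(1)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([2.0, 1.0, 3.0]) + 0.05 * rng.normal(size=n)

# One QR decomposition of the augmented matrix [X, y]
_, R_aug = np.linalg.qr(np.column_stack([X, y]))
R = R_aug[:p, :p]   # the triangular factor of X itself
r = R_aug[:p, p]    # first p entries of the last column: r = Q'y
b = np.linalg.solve(R, r)
print(b)
```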
What R has to do
When you run lm, R forms the matrix X from the model formula, then fits the model E(Y) = Xb.
Steps:
• Extract X and Y from the data and the model formula
• Do the QR decomposition
• Solve the equations Rb = r
• The solutions are the numbers reported in the summary
Forming X
When all the variables are continuous, it's a no-brainer:
• Start with a column of 1's
• Add columns corresponding to the independent variables
It's a bit harder for factors.
Factors: one-way anova
Consider the model y ~ a, where a is a factor with, say, 3 levels. In this case, we
• Start with a column of 1's
• Add a dummy variable for each level of the factor (3 in all), in the order of the factor levels
Problem: the matrix has 4 columns, but the first is the sum of the last 3, so the columns are not linearly independent.
Solution: reparametrize!
Reparametrizing
• Let Xa be the last 3 columns (the 3 dummy variables)
• Replace Xa by XaC (i.e. Xa multiplied by C), where C is a 3 x 2 "contrast matrix" with the properties:
  – The columns of XaC are linearly independent
  – The columns of XaC are linearly independent of the column of 1's
In general, if a has k levels, C will be k x (k-1).
The "treatment" parametrization
Here C is the matrix
  C = 0 0
      1 0
      0 1
(You can see the matrix in the general case by typing contr.treatment(k) in R, where k is the number of levels.)
This is the default in R.
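A small numpy sketch of this contrast matrix (the helper name contr_treatment is mine, mimicking R's contr.treatment): multiplying the dummy-variable matrix Xa by C simply drops the first dummy column.

```python
import numpy as np

def contr_treatment(k):
    """Treatment contrast matrix: k x (k-1), first row all zeros."""
    return np.vstack([np.zeros((1, k - 1)), np.eye(k - 1)])

C = contr_treatment(3)
# Xa: dummy variables for a 3-level factor, two observations per level
Xa = np.repeat(np.eye(3), 2, axis=0)
XaC = Xa @ C
# XaC equals Xa with its first column dropped
print(XaC)
```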
Treatment parametrization (2)
The model is E[Y] = Xb, where X is
  1 0 0   (observations at level 1)
  1 0 0
  1 1 0   (observations at level 2)
  1 1 0
  1 0 1   (observations at level 3)
  1 0 1
The effect of the reparametrization is to drop the first column of Xa, leaving the others unchanged.
Treatment parametrization (3)
• Mean response at level 1 is b0
• Mean response at level 2 is b0 + b1
• Mean response at level 3 is b0 + b2
• Thus, b0 is interpreted as the baseline (level 1) mean
• The parameter b1 is the offset for level 2 (the difference between the level 2 and level 1 means)
• The parameter b2 is the offset for level 3 (the difference between the level 3 and level 1 means)
The "sum" parametrization
Here C is the matrix
  C =  1  0
       0  1
      -1 -1
(You can see the matrix in the general case by typing contr.sum(k) in R, where k is the number of levels.)
To get this in R, you need to set the contrasts option:
  options(contrasts=c("contr.sum", "contr.poly"))
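The same sketch for the sum coding (the helper name contr_sum is mine, mimicking R's contr.sum): the last dummy column disappears, and the rows for the last level become -1's.

```python
import numpy as np

def contr_sum(k):
    """Sum contrast matrix: k x (k-1), last row all -1's."""
    return np.vstack([np.eye(k - 1), -np.ones((1, k - 1))])

C = contr_sum(3)
Xa = np.repeat(np.eye(3), 2, axis=0)  # two observations per level
XaC = Xa @ C
print(XaC)  # last dummy dropped; rows for the last level become -1 -1
```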
Sum parametrization (2)
The model is E[Y] = Xb, where X is
  1  1  0   (observations at level 1)
  1  1  0
  1  0  1   (observations at level 2)
  1  0  1
  1 -1 -1   (observations at level 3)
  1 -1 -1
The effect of this reparametrization is to drop the last column of Xa, and to change the rows corresponding to the last level of a.
Sum parametrization (3)
• Mean response at level 1 is b0 + b1
• Mean response at level 2 is b0 + b2
• Mean response at level 3 is b0 - b1 - b2
• Thus, b0 is interpreted as the average of the 3 means, the "overall mean"
• The parameter b1 is the offset for level 1 (the difference between the level 1 mean and the overall mean)
• The parameter b2 is the offset for level 2 (the difference between the level 2 mean and the overall mean)
• The offset for level 3 is -b1 - b2
The "Helmert" parametrization
Here C is the matrix
  C = -1 -1
       1 -1
       0  2
(You can see the matrix in the general case by typing contr.helmert(k) in R, where k is the number of levels.)
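A sketch of the general pattern (the helper name contr_helmert is mine, mimicking R's contr.helmert): column j carries -1 for levels 1..j and the value j for level j+1, so each contrast compares a level with the average of the levels before it.

```python
import numpy as np

def contr_helmert(k):
    """Helmert contrasts: column j compares level j+1 with levels 1..j."""
    C = np.zeros((k, k - 1))
    for j in range(1, k):
        C[:j, j - 1] = -1.0       # levels 1..j get -1
        C[j, j - 1] = float(j)    # level j+1 gets j
    return C

print(contr_helmert(3))
```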
Helmert parametrization (2)
The model is E[Y] = Xb, where X is
  1 -1 -1   (observations at level 1)
  1 -1 -1
  1  1 -1   (observations at level 2)
  1  1 -1
  1  0  2   (observations at level 3)
  1  0  2
The effect of this reparametrization is to change every row and column of Xa.
Helmert parametrization (3)
• Mean response at level 1 is b0 - b1 - b2
• Mean response at level 2 is b0 + b1 - b2
• Mean response at level 3 is b0 + 2b2
• Thus, b0 is interpreted as the average of the 3 means, the "overall mean"
• The parameter b1 is half the difference between the level 2 mean and the level 1 mean
• The parameter b2 is one third of the difference between the level 3 mean and the average of the level 1 and 2 means
Using R to calculate the relationship between b-parameters and means
If m is the vector of cell means and X is restricted to one row per cell, then m = Xb, so b = (X^T X)^-1 X^T m. Thus, the matrix (X^T X)^-1 X^T gives the coefficients we need to find the b's from the m's.
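This can be checked with a small numpy sketch (cell means invented): with the treatment coding and one row of X per level, X is square, so (X^T X)^-1 X^T is just X^-1.

```python
import numpy as np

# One row of X per level, treatment coding (column of 1's plus 2 dummies)
X = np.array([[1.0, 0.0, 0.0],   # level 1
              [1.0, 1.0, 0.0],   # level 2
              [1.0, 0.0, 1.0]])  # level 3
coef_mat = np.linalg.inv(X.T @ X) @ X.T

m = np.array([5.0, 7.0, 9.0])    # invented cell means m1, m2, m3
b = coef_mat @ m
print(b)  # b0 = m1, b1 = m2 - m1, b2 = m3 - m1
```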
Example: one-way model
• In an experiment to study the effect of carcinogenic substances, six different substances were applied to cell cultures.
• The response variable (ratio) is the ratio of damaged to undamaged cells, and the explanatory variable (treatment) is the substance.
Data
  ratio  treatment
  0.08   control         (+ 49 other control obs)
  0.08   chloralhydrate  (+ 49 other chloralhydrate obs)
  0.10   diazapan        (+ 49 other diazapan obs)
  0.10   hydroquinone    (+ 49 other hydroquinone obs)
  0.07   econidazole     (+ 49 other econidazole obs)
  0.17   colchicine      (+ 49 other colchicine obs)
lm output
> cancer.lm = lm(ratio ~ treatment, data=carcin.df)
> summary(cancer.lm)

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)
(Intercept)              0.23660    0.02037  11.616  < 2e-16 ***
treatmentchloralhydrate  0.03240    0.02880   1.125  0.26158
treatmentcolchicine      0.21160    0.02880   7.346 2.02e-12 ***
treatmentdiazapan        0.04420    0.02880   1.534  0.12599
treatmenteconidazole     0.02820    0.02880   0.979  0.32838
treatmenthydroquinone    0.07540    0.02880   2.618  0.00931 **
---
Residual standard error: 0.144 on 294 degrees of freedom
Multiple R-squared: 0.1903, Adjusted R-squared: 0.1766
F-statistic: 13.82 on 5 and 294 DF, p-value: 3.897e-12
Relationship between means and betas
> levels(carcin.df$treatment)
[1] "control"        "chloralhydrate" "colchicine"
[4] "diazapan"       "econidazole"    "hydroquinone"
> cancer.lm = lm(ratio ~ treatment, data=carcin.df)
> X <- model.matrix(cancer.lm)[c(1,51,101,151,201,251),]
> coef.mat <- solve(t(X)%*%X)%*%t(X)
> round(coef.mat)
                         1 51 101 151 201 251
(Intercept)              1  0   0   0   0   0
treatmentchloralhydrate -1  1   0   0   0   0
treatmentcolchicine     -1  0   0   0   0   1
treatmentdiazapan       -1  0   1   0   0   0
treatmenteconidazole    -1  0   0   0   1   0
treatmenthydroquinone   -1  0   0   1   0   0
> carcin.df[c(1,51,101,151,201,251),]
    ratio      treatment
1    0.08     colchicine
51   0.08        control
101  0.10       diazapan
151  0.10   hydroquinone
201  0.07    econidazole
251  0.17 chloralhydrate
Two factors: model y ~ a + b
To form X:
• Start with a column of 1's
• Add XaCa
• Add XbCb
Two factors: model y ~ a * b
To form X:
• Start with a column of 1's
• Add XaCa
• Add XbCb
• Add XaCa:XbCb (every column of XaCa multiplied elementwise with every column of XbCb)
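A numpy sketch of this construction (factor level codes invented; treatment contrasts assumed): the interaction block is built column by column as elementwise products.

```python
import numpy as np

a = np.array([0, 0, 1, 1, 2, 2])   # factor a: 3 levels, codes 0,1,2
b = np.array([0, 1, 0, 1, 0, 1])   # factor b: 2 levels, codes 0,1

# Treatment-contrast blocks: a dummy for every non-baseline level
XaCa = (a[:, None] == np.array([1, 2])).astype(float)
XbCb = (b[:, None] == np.array([1])).astype(float)

# Interaction: every column of XaCa times every column of XbCb, elementwise
inter = np.column_stack([XaCa[:, i] * XbCb[:, j]
                         for i in range(XaCa.shape[1])
                         for j in range(XbCb.shape[1])])

X = np.column_stack([np.ones(len(a)), XaCa, XbCb, inter])
print(X.shape)  # 1 + 2 + 1 + 2 = 6 columns
```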
Two factors: example Experiment to study weight gain in rats • Response is weight gain over a fixed time period • This is modelled as a function of diet (Beef, Cereal, Pork) and amount of feed (High, Low) • See coursebook Section 4.4
Data
> diets.df
   gain source level
1    73   Beef  High
2    98 Cereal  High
3    94   Pork  High
4    90   Beef   Low
5   107 Cereal   Low
6    49   Pork   Low
7   102   Beef  High
8    74 Cereal  High
9    79   Pork  High
10   76   Beef   Low
. . . 60 observations in all
Two factors: the model
• If the (continuous) response depends on two categorical explanatory variables, then we assume that the response is normally distributed with a mean depending on the combination of factor levels: if the factors are A and B, the mean at the i-th level of A and the j-th level of B is mij
• The other standard assumptions (equal variance, normality, independence) apply
Decomposition of the means • We usually want to split each “cell mean” up into 4 terms: • A term reflecting the overall baseline level of the response • A term reflecting the effect of factor A (row effect) • A term reflecting the effect of factor B (column effect) • A term reflecting how A and B interact.
Mathematically…
Overall baseline: m11 (the mean when both factors are at their baseline levels)
Effect of the i-th level of factor A (row effect): mi1 - m11 (the i-th level of A, at the baseline of B, expressed as a deviation from the overall baseline)
Effect of the j-th level of factor B (column effect): m1j - m11 (the j-th level of B, at the baseline of A, expressed as a deviation from the overall baseline)
Interaction: what's left over (see next slide)
Interactions
• Each cell (except those in the first row and column) has an interaction:
  interaction = cell mean - baseline - row effect - column effect
• If the interactions are all zero, then the effect of changing levels of A is the same for all levels of B
• In mathematical terms, mij – mi'j doesn't depend on j
• Equivalently, the effect of changing levels of B is the same for all levels of A
• If the interactions are zero, the relationship between the factors and the response is simple
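The decomposition can be checked on a small table of invented cell means: interaction_ij = m_ij - m_i1 - m_1j + m_11, which is zero throughout the first row and column by construction.

```python
import numpy as np

# Invented 2 x 3 table of cell means (rows = levels of A, cols = levels of B)
m = np.array([[10.0, 12.0, 15.0],
              [13.0, 15.0, 20.0]])

baseline = m[0, 0]
row_eff = m[:, [0]] - baseline    # m_i1 - m_11, one per row
col_eff = m[[0], :] - baseline    # m_1j - m_11, one per column

# Interaction = cell mean - baseline - row effect - column effect
inter = m - baseline - row_eff - col_eff
print(inter)  # only cell (2,3) has a nonzero interaction here
```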
Fit model
> rats.lm <- lm(gain ~ source + level + source:level)
> summary(rats.lm)

Coefficients:
                        Estimate Std. Error  t value Pr(>|t|)
(Intercept)            1.000e+02  4.632e+00   21.589  < 2e-16 ***
sourceCereal          -1.410e+01  6.551e+00   -2.152  0.03585 *
sourcePork            -5.000e-01  6.551e+00   -0.076  0.93944
levelLow              -2.080e+01  6.551e+00   -3.175  0.00247 **
sourceCereal:levelLow  1.880e+01  9.264e+00    2.029  0.04736 *
sourcePork:levelLow   -3.052e-14  9.264e+00 -3.29e-15 1.00000
---
Signif. codes: 0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1
Residual standard error: 14.65 on 54 degrees of freedom
Multiple R-Squared: 0.2848, Adjusted R-squared: 0.2185
F-statistic: 4.3 on 5 and 54 DF, p-value: 0.002299
Fitting as a regression model
Note that when using the treatment contrasts, this is equivalent to fitting a regression with dummy variables R2, C2, C3:
  R2 = 1 if the obs is in row 2, zero otherwise
  C2 = 1 if the obs is in column 2, zero otherwise
  C3 = 1 if the obs is in column 3, zero otherwise
The regression is Y ~ R2 + C2 + C3 + I(R2*C2) + I(R2*C3)
Using R to interpret parameters
> rats.lm <- lm(gain ~ source*level, data=diets.df)
> X <- model.matrix(rats.lm)[1:6,]
> coef.mat <- solve(t(X)%*%X)%*%t(X)
> round(coef.mat)
                       1  2  3  4  5  6
(Intercept)            1  0  0  0  0  0
sourceCereal          -1  1  0  0  0  0
sourcePork            -1  0  1  0  0  0
levelLow              -1  0  0  1  0  0
sourceCereal:levelLow  1 -1  0 -1  1  0
sourcePork:levelLow    1  0 -1 -1  0  1
> diets.df[1:6,]
  gain source level
1   73   Beef  High
2   98 Cereal  High
3   94   Pork  High
4   90   Beef   Low
5  107 Cereal   Low
6   49   Pork   Low
Each row of coef.mat expresses a beta as a combination of the six cell means.
X matrix: details (first six rows)
  (Intercept) sourceCereal sourcePork levelLow sourceCereal:levelLow sourcePork:levelLow
       1           0           0         0              0                   0
       1           1           0         0              0                   0
       1           0           1         0              0                   0
       1           0           0         1              0                   0
       1           1           0         1              1                   0
       1           0           1         1              0                   1
  (column of 1's | XaCa | XbCb | XaCa:XbCb)
Two explanatory variables: one continuous, one a factor
• Lathe example (330 Lecture 17)
• Consider an experiment to measure the rate of metal removal in a machining process on a lathe.
• The rate depends on the speed setting of the lathe (fast, medium or slow, a categorical measurement) and the hardness of the material being machined (a continuous measurement)
330 lecture 17
Data
   hardness setting rate
1       175    fast  138
2       132    fast  102
3       124    fast   93
4       141    fast  112
5       130    fast  100
6       165  medium  122
7       140  medium  104
8       120  medium   75
9       125  medium   84
10      133  medium   95
11      120    slow   68
12      140    slow   90
13      150    slow   98
14      125    slow   77
15      136    slow   88
Non-parallel lines
The model is (one regression line per setting):
  fast:   rate = a1 + b1 x hardness + error
  medium: rate = a2 + b2 x hardness + error
  slow:   rate = a3 + b3 x hardness + error
Dummy variables for both parameters
We can combine these 3 equations into one by using "dummy variables". Define
  med = 1 if setting = medium, 0 otherwise
  slow = 1 if setting = slow, 0 otherwise
  h.med = hardness x med
  h.slow = hardness x slow
Then we can write the model as
  rate = b0 + b1 x med + b2 x slow + b3 x hardness + b4 x h.med + b5 x h.slow + error
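A sketch of the resulting design matrix in numpy (using a few invented (hardness, setting) rows rather than the lecture's data): each setting gets its own intercept and its own slope.

```python
import numpy as np

hardness = np.array([175.0, 132.0, 165.0, 140.0, 120.0, 140.0])
setting = np.array(["fast", "fast", "medium", "medium", "slow", "slow"])

med = (setting == "medium").astype(float)
slow = (setting == "slow").astype(float)
h_med = hardness * med     # hardness x med
h_slow = hardness * slow   # hardness x slow

X = np.column_stack([np.ones(len(hardness)), med, slow,
                     hardness, h_med, h_slow])
# fast:   b0 + b3*hardness
# medium: (b0 + b1) + (b3 + b4)*hardness
# slow:   (b0 + b2) + (b3 + b5)*hardness
print(X)
```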
Fitting in R
The model formula for this non-parallel model is
  rate ~ setting + hardness + setting:hardness
or, even more compactly,
  rate ~ setting * hardness
> summary(lm(rate ~ setting*hardness))
                        Estimate Std. Error t value Pr(>|t|)
(Intercept)            -12.18162   10.32795  -1.179   0.2684
settingmedium          -30.15725   15.49375  -1.946   0.0834 .
settingslow            -33.60120   19.58902  -1.715   0.1204
hardness                 0.86312    0.07295  11.831 8.69e-07 ***
settingmedium:hardness   0.14961    0.11125   1.345   0.2116
settingslow:hardness     0.10546    0.14356   0.735   0.4813
X-matrix
   (Intercept) settingmedium settingslow hardness settingmedium:hardness settingslow:hardness
1       1            0            0        175            0                   0
2       1            0            0        132            0                   0
3       1            0            0        124            0                   0
4       1            0            0        141            0                   0
5       1            0            0        130            0                   0
6       1            1            0        165          165                   0
7       1            1            0        140          140                   0
8       1            1            0        120          120                   0
9       1            1            0        125          125                   0
10      1            1            0        133          133                   0
11      1            0            1        120            0                 120
12      1            0            1        140            0                 140
13      1            0            1        150            0                 150
14      1            0            1        125            0                 125
15      1            0            1        136            0                 136
(Rows 1–5: fast; rows 6–10: medium; rows 11–15: slow.)