In This Chapter We Will Cover • Deductions we can make about β even though it is not observed. These include • Confidence Intervals • Hypotheses of the form H0: βi = c • Hypotheses of the form H0: βi ≤ c • Hypotheses of the form H0: a′β = c • Hypotheses of the form Aβ = c • We also cover deductions when V(e) ≠ σ²I (Generalized Least Squares)
The Variance of the Estimator From these two raw ingredients and a theorem: β̂ = (X′X)⁻¹X′y and V(y) = V(Xβ + e) = V(e) = σ²I we conclude V(β̂) = (X′X)⁻¹X′ V(y) X(X′X)⁻¹ = σ²(X′X)⁻¹
What of the Distribution of the Estimator? β̂ is distributed as normal, by • the Central Limit Theorem • the normality of linear combinations of normal variables
So What Can We Conclude About the Estimator? • From the V(linear combo) + assumptions about e: V(β̂) = σ²(X′X)⁻¹ • From the Central Limit Theorem: β̂ is normally distributed • From Ch 5, E(linear combo): E(β̂) = β
Steps Towards Inference About β In general, β̂ = (X′X)⁻¹X′y is normal with mean β and variance σ²(X′X)⁻¹. In particular, for one coefficient, (β̂i − βi)/√V(β̂i) ~ N(0, 1). But note the hat on the V! Since σ² must be estimated by s², we use V̂(β̂) = s²(X′X)⁻¹, and the standardized ratio follows a t distribution rather than a normal.
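As a concrete sketch of the formulas above (hypothetical data, assuming NumPy is available), the estimator and its estimated variance can be computed directly:

```python
import numpy as np

# Hypothetical data: n = 6 observations, an intercept plus one regressor
X = np.column_stack([np.ones(6), np.arange(1.0, 7.0)])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1, 5.8])

n, k = X.shape
D = np.linalg.inv(X.T @ X)        # D = (X'X)^{-1}
beta_hat = D @ X.T @ y            # beta-hat = (X'X)^{-1} X'y
e = y - X @ beta_hat              # residuals
s2 = (e @ e) / (n - k)            # s^2 estimates sigma^2
V_hat = s2 * D                    # estimated V(beta-hat) = s^2 (X'X)^{-1}
print(beta_hat)
print(V_hat)
```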
Let's Think About the Denominator The estimated standard error of β̂i is √(s²dii), where the dii are the diagonal elements of D = (X′X)⁻¹ = {dij} and s² = e′e/(n − k)
Putting It All Together • Now that we have a t, we can use it for two types of inference about β: • Confidence Intervals • Hypothesis Testing
A Confidence Interval for βi A 1 − α confidence interval for βi is given by β̂i ± t(α/2; n − k)·√(s²dii), which simply means that Pr[β̂i − t(α/2; n − k)√(s²dii) ≤ βi ≤ β̂i + t(α/2; n − k)√(s²dii)] = 1 − α
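A minimal sketch of the interval, reusing the same hypothetical data (assumes NumPy and SciPy):

```python
import numpy as np
from scipy import stats

# Hypothetical data, as in the earlier sketch
X = np.column_stack([np.ones(6), np.arange(1.0, 7.0)])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1, 5.8])
n, k = X.shape

D = np.linalg.inv(X.T @ X)
beta_hat = D @ X.T @ y
e = y - X @ beta_hat
s2 = (e @ e) / (n - k)

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - k)   # t(alpha/2; n - k)
se = np.sqrt(s2 * np.diag(D))                   # sqrt(s^2 * d_ii)
lower = beta_hat - t_crit * se
upper = beta_hat + t_crit * se
print(np.column_stack([lower, beta_hat, upper]))
```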
Graphic of Confidence Interval [Figure: sampling distribution of β̂i, with the central 1 − α region marked.]
Statistical Hypothesis Testing: Step One Generate two mutually exclusive hypotheses: H0: βi = c HA: βi ≠ c
Statistical Hypothesis Testing: Step Two Summarize the evidence with respect to H0: t = (β̂i − c)/√(s²dii)
Statistical Hypothesis Testing: Step Three Reject H0 if the probability of the evidence given H0 is small, i.e. if |t| > t(α/2; n − k)
One-Tailed Hypotheses Our theories should give us a sign for Step One, in which case we might have H0: βi ≥ c HA: βi < c In that case we reject H0 if t < −t(α; n − k)
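The three steps, plus the one-tailed variant, can be sketched as follows (hypothetical data; the slope here is strongly positive, so the two-tailed test rejects and the "less than" one-tailed test does not):

```python
import numpy as np
from scipy import stats

X = np.column_stack([np.ones(6), np.arange(1.0, 7.0)])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1, 5.8])
n, k = X.shape
D = np.linalg.inv(X.T @ X)
beta_hat = D @ X.T @ y
e = y - X @ beta_hat
s2 = (e @ e) / (n - k)

# Step One/Two: test H0: beta_1 = c against HA: beta_1 != c
c, i, alpha = 0.0, 1, 0.05
t = (beta_hat[i] - c) / np.sqrt(s2 * D[i, i])

# Step Three (two-tailed): reject if |t| exceeds the critical value
reject_two_tailed = abs(t) > stats.t.ppf(1 - alpha / 2, n - k)

# One-tailed variant: H0: beta_1 >= c vs HA: beta_1 < c; reject if t < -t(alpha)
reject_one_tailed = t < -stats.t.ppf(1 - alpha, n - k)
print(t, reject_two_tailed, reject_one_tailed)
```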
A More General Formulation Consider a hypothesis of the form H0: a′β = c, so with c = 0… a′ = (0, 1, −1, 0, …) tests H0: β1 = β2, and a′ = (0, 1, 1, 0, …) tests H0: β1 + β2 = 0
A t Test for This More Complex Hypothesis We need to derive the denominator of the t using the variance of a linear combination: V(a′β̂) = a′V(β̂)a = σ²a′(X′X)⁻¹a, which leads to t = (a′β̂ − c)/√(s²a′(X′X)⁻¹a), with n − k degrees of freedom
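A sketch of the a′β test for H0: β1 = β2, using simulated data in which the two coefficients really are equal (all names and data here are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, 2.0]) + rng.normal(scale=0.5, size=n)  # beta_1 = beta_2

D = np.linalg.inv(X.T @ X)
beta_hat = D @ X.T @ y
e = y - X @ beta_hat
s2 = (e @ e) / (n - k)

# H0: beta_1 = beta_2, i.e. a'beta = 0 with a = (0, 1, -1)
a, c = np.array([0.0, 1.0, -1.0]), 0.0
t = (a @ beta_hat - c) / np.sqrt(s2 * (a @ D @ a))
p = 2 * stats.t.sf(abs(t), n - k)
print(t, p)
```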
Examples of Multiple-df Hypotheses With c = 0, A = [[0 0 1 0], [0 0 0 1]] tests H0: β2 = β3 = 0, and A = [[0 1 −1 0], [0 0 1 −1]] tests H0: β1 = β2 = β3 (here assuming a model with coefficients β0, …, β3)
Another Way to Think About SSH Assume we have an A matrix as above. We could calculate the SSH by running two versions of the model: the full model and the model restricted by H0: SSH = SSError(Restricted Model) − SSError(Full Model), so F = [SSH/q] / [SSError(Full)/(n − k)], where q is the number of restrictions (rows of A)
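The restricted-versus-full computation can be sketched as below (hypothetical data; the test verifies that the SSE difference matches the equivalent quadratic form (Aβ̂)′[A(X′X)⁻¹A′]⁻¹(Aβ̂) for the same restriction):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(scale=0.3, size=n)

def sse(Xm, yv):
    """Sum of squared errors from an OLS fit of yv on the columns of Xm."""
    b = np.linalg.lstsq(Xm, yv, rcond=None)[0]
    r = yv - Xm @ b
    return r @ r

# Full model vs the model restricted by H0: beta_2 = 0 (A = [0 0 1], q = 1):
# drop the last column of X to impose the restriction
sse_full = sse(X, y)
sse_restricted = sse(X[:, :2], y)
q = 1
SSH = sse_restricted - sse_full
F = (SSH / q) / (sse_full / (n - k))
print(SSH, F)
```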
A Hypothesis That All β's Are Zero If our hypothesis is H0: β = 0, then the F would be F = [SSPredictable/k] / [SSError/(n − k)], which suggests a summary for the model: R² = SSPredictable/SSTotal
Generalized Least Squares When we cannot make the Gauss–Markov assumption that V(e) = σ²I, suppose that V(e) = σ²V. Our objective function becomes f = e′V⁻¹e
SSError for GLS SSError = (y − Xβ̂)′V⁻¹(y − Xβ̂), with β̂ = (X′V⁻¹X)⁻¹X′V⁻¹y
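A sketch of the GLS estimator and its SSError under an assumed (hypothetical) diagonal V of known weights; when V = I the formula collapses back to OLS, which the test checks:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 15
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Hypothetical heteroskedastic errors: V diagonal with known entries
V = np.diag(np.linspace(0.5, 3.0, n))
err = rng.multivariate_normal(np.zeros(n), V)
y = X @ np.array([2.0, 1.0]) + err

Vinv = np.linalg.inv(V)
beta_gls = np.linalg.inv(X.T @ Vinv @ X) @ X.T @ Vinv @ y  # (X'V^-1 X)^-1 X'V^-1 y
resid = y - X @ beta_gls
sse_gls = resid @ Vinv @ resid                             # (y - Xb)'V^-1(y - Xb)
print(beta_gls, sse_gls)
```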
GLS Hypothesis Testing • H0: βi = 0: t = β̂i/√(s²dii), where dii is the ith diagonal element of (X′V⁻¹X)⁻¹ • H0: a′β = c: t = (a′β̂ − c)/√(s²a′(X′V⁻¹X)⁻¹a) • H0: Aβ − c = 0: the analogous F test, with (X′V⁻¹X)⁻¹ in place of (X′X)⁻¹
Accounting for the Sum of Squares of the Dependent Variable e′e = y′y - y′X(X′X)-1X′y SSError = SSTotal - SSPredictable y′y = y′X(X′X)-1X′y + e′e SSTotal = SSPredictable + SSError
SSPredicted and SSTotal Are Quadratic Forms SSPredicted is y′Py, where we have defined P = X(X′X)⁻¹X′, and SSTotal is y′y = y′Iy
The SSError Is a Quadratic Form Having defined P = X(X′X)⁻¹X′, now define M = I − P, i.e. I − X(X′X)⁻¹X′. The formula for SSError then becomes SSError = e′e = y′My
Putting These Three Quadratic Forms Together SSTotal = SSPredictable + SSError y′Iy = y′Py + y′My Here we note that I = P + M
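The three quadratic forms can be checked numerically on arbitrary (hypothetical) data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 10, 3
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection onto the column space of X
M = np.eye(n) - P                      # projection onto its orthogonal complement

ss_total = y @ y        # y'Iy
ss_pred = y @ P @ y     # y'Py
ss_error = y @ M @ y    # y'My
print(ss_total, ss_pred + ss_error)
```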
M and P Are Linear Transforms of y ŷ = Py and e = My, so looking at the linear model: Iy = Py + My, and again we see that I = P + M
The Amazing M and P Matrices ŷ = Py and SSPredicted = y′Py; e = My and SSError = y′My. What does this imply about M and P? Both are idempotent: PP = P and MM = M (and PM = 0)
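Idempotency and the fitted-value/residual split can be verified directly (hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 10, 3
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - P

y_hat = P @ y   # fitted values
e = M @ y       # residuals

# Idempotency: projecting twice changes nothing, and P and M are orthogonal
print(np.allclose(P @ P, P), np.allclose(M @ M, M), np.allclose(P @ M, 0))
```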