"Classical" Inference. Two simple inference scenarios. Question 1: Are we in world A or world B?. Possible worlds: World A World B. Jerzy Neyman and Egon Pearson. D : Decision in favor of:. H 0 : Null Hypothesis. H 1 : Alternative Hypothesis.
Two simple inference scenarios • Question 1: Are we in world A or world B?
D: Decision in favor of: H0 (Null Hypothesis) or H1 (Alternative Hypothesis)
T: The Truth of the matter: H0 (Null Hypothesis) or H1 (Alternative Hypothesis)
• Deciding H1 when the truth is H0 is a type I error; deciding H0 when the truth is H1 is a type II error (these terms are used below).
Definition. A subset C of the sample space is a best critical region of size α for testing the hypothesis H0 against the hypothesis H1 if
• $P(X \in C; H_0) = \alpha$,
• and for every subset A of the sample space, whenever $P(X \in A; H_0) = \alpha$, we also have $P(X \in C; H_1) \geq P(X \in A; H_1)$.
Neyman-Pearson Theorem: Suppose that for some k > 0:
• $\dfrac{L(\theta_0; x)}{L(\theta_1; x)} \le k$ for each point $x \in C$,
• $\dfrac{L(\theta_0; x)}{L(\theta_1; x)} \ge k$ for each point $x \notin C$, and
• $P(X \in C; H_0) = \alpha$,
where $L(\theta; x)$ denotes the likelihood of the data x under parameter value θ. Then C is a best critical region of size α for the test of H0 vs. H1.
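As an illustration (not from the original slides), here is a minimal Python sketch of a Neyman-Pearson style test for two simple Normal hypotheses, where the likelihood ratio reduces to a threshold on the sample mean; the values mu0 = 0, mu1 = 1, sigma = 1, and alpha = 0.05 are assumptions chosen for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical example (not from the slides): X_1, ..., X_n i.i.d. N(mu, 1),
# testing H0: mu = mu0 against H1: mu = mu1 with mu1 > mu0.  The likelihood
# ratio L(mu0; x) / L(mu1; x) is a decreasing function of the sample mean, so
# the best critical region of size alpha has the form {x : mean(x) > c}.
def neyman_pearson_test(x, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05):
    n = len(x)
    # c is chosen so that P(mean(X) > c; H0) = alpha.
    c = mu0 + stats.norm.ppf(1 - alpha) * sigma / np.sqrt(n)
    return np.mean(x) > c, c

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=25)   # data generated under H0
reject, c = neyman_pearson_test(x)
print(f"critical value c = {c:.3f}, reject H0: {reject}")
```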
When the null and alternative hypotheses are both Normal, the relation between the power of a statistical test (1 − β) and α is given by the formula
$$1 - \beta = \Phi\!\left(\frac{|\mu_1 - \mu_0|\sqrt{n}}{\sigma} - q_\alpha\right)$$
• Φ is the cdf of N(0, 1), and $q_\alpha$ is the quantile determined by α (i.e., $\Phi(q_\alpha) = 1 - \alpha$).
• α fixes the type I error probability, but increasing n reduces the type II error probability β.
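A short sketch of this power formula in code (the values of mu0, mu1, sigma, and alpha are illustrative assumptions, not from the slides):

```python
import numpy as np
from scipy import stats

# Sketch of the power formula above for a one-sided test with known sigma.
# The particular values of mu0, mu1, sigma, and alpha are illustrative only.
def power(n, mu0=0.0, mu1=0.5, sigma=1.0, alpha=0.05):
    q_alpha = stats.norm.ppf(1 - alpha)      # quantile determined by alpha
    return stats.norm.cdf(abs(mu1 - mu0) * np.sqrt(n) / sigma - q_alpha)

for n in (10, 30, 100):
    print(n, round(power(n), 3))             # 1 - beta grows with n; alpha stays fixed
```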
Question 2: Does the evidence suggest our world is not like World A?
Fisherian theory • Significance tests: their disjunctive logic, and p-values as evidence: • ``[This very low p-value] is amply low enough to exclude at a high level of significance any theory involving a random distribution… The force with which such a conclusion is supported is logically that of the simple disjunction: Either an exceptionally rare chance has occurred, or the theory of random distribution is not true.'' (Fisher 1959, 39)
Fisherian theory ``The meaning of `H is rejected at level α' is `Either an event of probability α has occurred, or H is false', and our disposition to disbelieve H arises from our disposition to disbelieve in events of small probability.'' (Barnard 1967, 32)
Fisherian theory: Distinctive features • Notice that the actual data x is used to define the event whose significance is evaluated. • Also based on H0 and H1 • Can only reject H0; the evidence cannot allow one to accept H0. • Many other theories besides H0 could also explain the data.
Common philosophical simplification:
• Hypothesis space given qualitatively;
• H0 vs. ¬H0 (H0 versus its negation),
• Murderer was Professor Plum, Colonel Mustard, Miss Scarlett, or Mrs. Peacock
More typical situation:
• Very strong structural assumptions
• Hypothesis space given by unknown numeric `parameters'
• Test uses:
• a transformation of the raw data,
• a probability distribution for this transformation (≠ the original distribution of interest)
Three Commonly Used Facts • Assume $\{X_1, \ldots, X_n\}$ is a collection of independent and identically distributed (i.i.d.) random variables. • Assume also that each $X_i$ has mean μ and standard deviation σ.
Three Commonly Used Facts • For the mean estimator $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$:
• $E[\bar{X}] = \mu$
• $\mathrm{Var}(\bar{X}) = \sigma^2 / n$, so $\mathrm{SD}(\bar{X}) = \sigma / \sqrt{n}$
Three Commonly Used Facts • The Central Limit Theorem. If $\{X_1, \ldots, X_n\}$ are i.i.d. random variables from a distribution with mean μ and variance σ², then:
$$\frac{\bar{X} - \mu}{\sigma / \sqrt{n}} \xrightarrow{\;d\;} N(0, 1) \quad \text{as } n \to \infty$$
• Equivalently: for large n, $\bar{X}$ is approximately distributed as $N(\mu, \sigma^2 / n)$.
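A brief simulation sketch of these three facts (the distribution, mean, standard deviation, sample size, and replication count below are illustrative assumptions, not from the slides):

```python
import numpy as np

# Simulation sketch of the facts above: for i.i.d. draws, the sample mean has
# expectation mu, standard deviation sigma / sqrt(n), and is approximately
# Normal for large n.  Distribution and constants below are assumptions.
rng = np.random.default_rng(1)
mu, sigma, n, reps = 10.0, 4.0, 50, 20_000

# reps replications of "draw n skewed (gamma) values with mean mu and sd sigma,
# then take the sample mean".
shape, scale = (mu / sigma) ** 2, sigma**2 / mu
means = rng.gamma(shape, scale, size=(reps, n)).mean(axis=1)

print("mean of X-bar:", means.mean(), "   (theory:", mu, ")")
print("SD of X-bar:  ", means.std(), "   (theory:", sigma / np.sqrt(n), ")")
```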
Examples • Data: January 2012 CPS • Sample: PhDs, working full time, age 28-34 • H0: mean income is 75k
21996.00 89999.52 119999.9 40999.92 67600.00 68640.00 96999.76 77296.96 65000.00 71999.72 100100.0 45999.72 149999.7 19968.00 10140.00 37999.52 74999.60 69992.00 31740.80 65000.00 57512.00 87984.00 35999.60 38939.68 99999.64 74999.60 149999.7 47996.00 62920.00 62920.00 54999.88 104000.0
Hyp.   Value (t-statistic)   Probability (p-value)
H0     -1.024022             0.3138
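As a sketch, the statistic and p-value in this table can be reproduced with a one-sample t-test; the code below assumes the standard scipy library and uses the 32 income values listed above.

```python
from scipy import stats

# The 32 income values listed above (January 2012 CPS subsample).
incomes = [
    21996.00, 89999.52, 119999.9, 40999.92, 67600.00, 68640.00, 96999.76, 77296.96,
    65000.00, 71999.72, 100100.0, 45999.72, 149999.7, 19968.00, 10140.00, 37999.52,
    74999.60, 69992.00, 31740.80, 65000.00, 57512.00, 87984.00, 35999.60, 38939.68,
    99999.64, 74999.60, 149999.7, 47996.00, 62920.00, 62920.00, 54999.88, 104000.0,
]

# One-sample t-test of H0: mean income = 75,000.
t_stat, p_value = stats.ttest_1samp(incomes, popmean=75000)
print(t_stat, p_value)   # roughly -1.024 and 0.314, matching the table above
```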
Comments • The background conditions (e.g., the i.i.d. condition behind the sample) are a clear example of `Quine-Duhem' conditions. • When background conditions are met, ``large samples'' don't make inferences ``more certain'' • Multiple tests • Monitoring or ``peeking'' at data, etc.
Many desiderata of an estimator: • Consistent • Maximum Likelihood • Unbiased • Sufficient • Minimum variance • Minimum MSE (mean squared error) • (most) efficient
By CLT, approximately: $\bar{X} \sim N(\mu, \sigma^2 / n)$
• Thus: $P\!\left(-z_{\alpha/2} \le \dfrac{\bar{X} - \mu}{\sigma/\sqrt{n}} \le z_{\alpha/2}\right) \approx 1 - \alpha$, where $z_{\alpha/2}$ is the value with $\Phi(z_{\alpha/2}) = 1 - \alpha/2$
• By algebra: $P\!\left(\bar{X} - z_{\alpha/2}\dfrac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + z_{\alpha/2}\dfrac{\sigma}{\sqrt{n}}\right) \approx 1 - \alpha$
• So: the random interval $\bar{X} \pm z_{\alpha/2}\,\sigma/\sqrt{n}$ covers μ with probability approximately $1 - \alpha$.
Interpreting confidence intervals • The only probabilistic component that determines what occurs is $\bar{X}$. • Everything else in the interval is a constant. • Simulations, examples (see the sketch below) • Question: Why ``center'' the interval?
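A coverage simulation sketch of this interpretation (mu, sigma, n, and the replication count are illustrative assumptions, not from the slides):

```python
import numpy as np

# Coverage simulation sketch: in repeated sampling the random interval
# X-bar +/- z * sigma / sqrt(n) covers the fixed, unknown mu about 95% of the
# time; only X-bar varies from sample to sample, everything else is constant.
rng = np.random.default_rng(2)
mu, sigma, n, reps, z = 50.0, 12.0, 40, 10_000, 1.96

covered = 0
for _ in range(reps):
    xbar = rng.normal(mu, sigma, size=n).mean()
    half_width = z * sigma / np.sqrt(n)
    covered += (xbar - half_width <= mu <= xbar + half_width)

print("coverage:", covered / reps)   # close to 0.95
```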
Confidence Intervals • $68,898.16 ± $12,152.85 • ``C.I. = mean ± m.o.e.'' (margin of error) • = ($56,745.32, $81,051.01)
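A sketch of how this interval could be computed in Python, using the summary statistics reported for the PhD sample further below (n = 32, mean 68,898.16, std. dev. 33,707.49) and a Student-t quantile for the margin of error:

```python
import numpy as np
from scipy import stats

# Sketch of the computation, using the summary statistics reported for the
# PhD sample below (n = 32, mean = 68,898.16, std. dev. = 33,707.49).
# A t quantile is used since sigma is estimated from the sample.
n, mean, sd = 32, 68898.16, 33707.49
se = sd / np.sqrt(n)                        # estimated standard error of the mean
moe = stats.t.ppf(0.975, df=n - 1) * se     # 95% margin of error

print(f"{mean:,.2f} +/- {moe:,.2f}")        # about 68,898.16 +/- 12,152.9
print((round(mean - moe, 2), round(mean + moe, 2)))
```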
Using similar logic, but different computing formulae, one can extend these methods to address further questions • e.g., for standard deviations, equality of means across groups, etc.
Equality of Means: BAs

Sex   Count   Mean       Std. Dev.
1     223     63619.54   31370.01
2     209     51395.43   25530.66
All   432     57705.56   29306.13

Value (t-statistic): 4.424943   Probability (p-value): 0.0000
Equality of Means: PhDs

Sex   Count   Mean       Std. Dev.
1     21      66452.71   36139.78
2     11      73566.76   29555.10
All   32      68898.16   33707.49

Value (t-statistic): -0.560745   Probability (p-value): 0.5791
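As a sketch, the PhD comparison can be reproduced from the summary statistics above with a pooled-variance (equal-variance) two-sample t-test, which appears to match the reported statistic; the code assumes the standard scipy library.

```python
from scipy import stats

# Sketch: reproducing the PhD equality-of-means test from the summary
# statistics in the table above, using a pooled-variance two-sample t-test.
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=66452.71, std1=36139.78, nobs1=21,
    mean2=73566.76, std2=29555.10, nobs2=11,
    equal_var=True,
)
print(t_stat, p_value)   # roughly -0.56 and 0.58, matching the table above
```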