Chapter 6 Making Sense of Statistical Significance: Decision Errors, Effect Size and Statistical Power Part 1: Sept. 24, 2013
Decision Errors • Due to using samples to estimate effects in populations • Type I error • Rejecting the null hypothesis when in fact it is true • Ex? • Probability of a Type I error equals alpha (α) • With α = .05, we run a 5% risk of making a Type I error • Type II error • Failing to reject the null hypothesis when in reality it is false (being too conservative) • Ex? • Probability of a Type II error equals beta (β)
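As a quick illustration of what α = .05 means in practice, here is a minimal Python simulation sketch (not from the slides; the one-sample t-test and all numbers are assumptions for illustration): when the null hypothesis really is true, roughly 5% of experiments still reject it, and those rejections are Type I errors.

```python
# Minimal sketch (illustrative assumption, not from the slides): simulate many
# experiments where the null is TRUE and count how often we reject at alpha = .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments, n = 10_000, 25

rejections = 0
for _ in range(n_experiments):
    # The population mean really is 0, so the null hypothesis is true
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    t, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha:
        rejections += 1  # a Type I error

print(rejections / n_experiments)  # close to .05, i.e., alpha
```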
Effect Size • We may reject the null and conclude there is a significant effect, but how large is it? • Effect size estimates that; it is the amount that the two populations (from our sample vs. the comparison population) do not overlap • Figuring effect size: d = (μ1 − μ2) / σ, where μ1 = experimental group mean (use M from the sample), μ2 = population or comparison group mean, and σ = population SD
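A one-function Python sketch of the d formula above (the numbers are hypothetical; they borrow the values from the "crashed" example used later in these slides):

```python
# Effect size as on this slide: d = (mu1 - mu2) / sigma,
# with the sample M standing in for the Population 1 mean.
def cohens_d(m1: float, mu2: float, sigma: float) -> float:
    return (m1 - mu2) / sigma

# Hypothetical illustration (values from the later "crashed" example):
print(cohens_d(m1=5.9, mu2=5.5, sigma=0.8))  # 0.5 -> a "medium" effect
```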
Effect Size • Effect size conventions – used to draw conclusions about how large/important an effect is • small: around d = .2 (or -.2) • medium: around d = .5 (or -.5) • large: around d = .8 (or -.8) • Example interpretation? • Effect size speaks to 'practical significance' – an indication of the importance of a statistically significant effect
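One way to code up these conventions (a sketch only; treating the cutoffs as hard thresholds on |d| is an assumption, since the slide says "around" each value):

```python
# Label an effect size using the conventions listed above, applied to |d|.
def effect_size_label(d: float) -> str:
    size = abs(d)
    if size >= 0.8:
        return "large"
    if size >= 0.5:
        return "medium"
    if size >= 0.2:
        return "small"
    return "below the 'small' convention"

print(effect_size_label(0.5))   # medium
print(effect_size_label(-0.9))  # large
```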
Effect Size Interpretation • What is a desired effect size? • Interpretation: • For an experiment… • For a group comparison… • For a correlational study…
Meta-Analysis • Combine results from multiple studies • How are effect sizes used here? • Example:
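One common answer (an assumption – the slide does not name a specific method) is that each study contributes its d, and the studies are averaged with larger samples weighted more heavily; a minimal sketch:

```python
# Fixed-effect-style summary (sketch): average the per-study effect sizes,
# weighting by sample size. All numbers are hypothetical.
import numpy as np

effect_sizes = np.array([0.30, 0.55, 0.42])  # d from three hypothetical studies
sample_sizes = np.array([40, 120, 60])       # their Ns

overall_d = np.average(effect_sizes, weights=sample_sizes)
print(round(overall_d, 2))  # about .47 -> overall effect-size estimate
```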
Statistical Power • The probability that the study will produce a statistically significant result when the research hypothesis is in fact true • That is, what is the power to correctly reject a false null? • Upper right quadrant in the decision table • We want to maximize the chance that our study has the power to detect a true/real effect • Power can be calculated before the study using predicted means
Statistical Power • Steps for figuring power 1. Gather the needed information (N = 16): * Mean & SD of the comparison distribution (the distribution of means from Ch. 5 – now known as Pop 2) * Predicted mean of the experimental group (now known as Pop 1) * "Crashed" example: Pop 1 "crashed group" mean = 5.9; Pop 2 "neutral group/comparison pop" μ = 5.5, σ = .8, σM = √(σ² / N) = √(.8² / 16) = .2
Statistical Power 2. Figure the raw-score cutoff point on the comparison distribution to reject the null hypothesis (using Pop 2 info) • For alpha = .05, 1-tailed test (remember we predicted the 'crashed' group would have higher fault ratings), the z-score cutoff = 1.64 • Convert z to a raw score: x = z(σM) + μ, so x = 1.64(.2) + 5.5 = 5.83 • Draw the distribution and the cutoff point at 5.83; shade the area to the right of the cutoff point – the "critical/rejection region"
Statistical Power 3. Figure the Z score for this same point, but on the distribution of means for Population 1 (see example on board) • That is, convert the raw score of 5.83 to a z score using Pop 1 info: Z = (x − μ1) / σM, where x is the cutoff from Step 2, μ1 is the experimental group mean from Step 1, and σM is from Step 1 • Z = (5.83 – 5.9) / .2 = -.35 • Draw another distribution & shade in everything to the right of -.35
Statistical Power 4. Use the normal curve table to figure the probability of getting a score higher than the Z score from Step 3 • Find the % between the mean and a z of -.35 (look up .35)… = 13.68% • Add another 50% because we're also interested in the area to the right of the mean • 13.68 + 50 = 63.68%…that's the power of the experiment.
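The four steps above can be checked in a few lines of Python; this sketch uses scipy's normal distribution in place of the printed normal curve table (the tiny difference from 63.68% comes from using the exact z cutoff rather than the rounded 1.64):

```python
# Steps 1-4 of the power calculation, with the "crashed" example's numbers.
from math import sqrt
from scipy.stats import norm

mu2, sigma, n = 5.5, 0.8, 16   # Pop 2 ("neutral"/comparison) and sample size
mu1 = 5.9                      # predicted Pop 1 ("crashed") mean
alpha = 0.05                   # one-tailed test

# Step 1: SD of the distribution of means
sigma_m = sqrt(sigma**2 / n)                 # .2

# Step 2: raw-score cutoff on the comparison distribution
x_cut = norm.ppf(1 - alpha) * sigma_m + mu2  # about 5.83

# Step 3: that cutoff as a Z score on Population 1's distribution of means
z_pop1 = (x_cut - mu1) / sigma_m             # about -.35

# Step 4: probability of falling above the cutoff = power
power = 1 - norm.cdf(z_pop1)
print(round(power, 4))  # about .64, close to the table-based 63.68%
```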
Power Interpretation • Our study (with N = 16) has around 64% power to find a difference between the 'crashed' and 'neutral' groups if it truly exists. • This is based on our estimate of the 'crashed' mean (μ1 = 5.9), so if that estimate is incorrect, power will change. • In the decision-error table, 1 − power = beta (aka the Type II error rate), so here: • Alpha = .05 (5% chance of incorrectly rejecting the null); Power = .64 (64% chance of correctly rejecting a false null); Beta = .36 (36% chance of incorrectly failing to reject the null)