
Hypothesis test flow chart






Presentation Transcript


  1. Hypothesis test flow chart (START HERE). Measurement scale:
  - Frequency data → number of variables: 1 → basic χ2 test (19.5), Table I; 2 → χ2 test for independence (19.9), Table I
  - Correlation (r) → number of correlations: 1 → Test H0: ρ = 0 (17.2), Table G; 2 → Test H0: ρ1 = ρ2 (17.4), Tables H and A
  - Means → number of factors: 2 → 2-way ANOVA (Ch 21), Table E; 1 → number of means:
    - More than 2 → 1-way ANOVA (Ch 20), Table E
    - 1 → Do you know σ? Yes → z-test (13.1), Table A; No → t-test (13.14), Table D
    - 2 → Independent samples? Yes → Test H0: μ1 = μ2 (15.6), Table D; No → Test H0: D = 0 (16.4), Table D

  2. Chapter 13: Interpreting the Results of Hypothesis Testing. 'Statistically significant' does not mean 'important'. Example: IQs of UW undergraduates. Suppose we measured the IQs of 10,000 UW undergraduates and found a mean IQ of 100.3, and we conduct a one-tailed z-test to determine whether this mean is greater than that of the US population, which has a mean of 100 and a standard deviation of 15. The standard error of the mean is 15/√10,000 = 0.15, so the observed mean gives z = 0.3/0.15 = 2, which falls in the rejection region (area = α = .05). We'd find that we could reject H0 with α = .05. But is a difference of 0.3 IQ points important?
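
As a numerical cross-check of this example (not part of the original slides), here is a minimal Python sketch of the one-tailed z-test, assuming SciPy is available; the variable names are illustrative only.

```python
from math import sqrt
from scipy.stats import norm

# One-tailed z-test for the IQ example: n = 10,000, sample mean 100.3,
# population mean 100, population standard deviation 15.
n, xbar, mu0, sigma = 10_000, 100.3, 100.0, 15.0

sem = sigma / sqrt(n)      # standard error of the mean = 0.15
z = (xbar - mu0) / sem     # z = 2.0
p = norm.sf(z)             # one-tailed p-value, about 0.023

print(f"SEM = {sem:.2f}, z = {z:.2f}, p = {p:.3f}")  # significant at alpha = .05
```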

  3. If you want to read a lot about statistically significant results that may or may not be important…

  4. Some journals require authors to report the 'effect size' along with the outcomes of statistical tests, to let the reader judge whether the effect is big enough to be important. Remember, to calculate t we divide by the standard error of the mean: t = (X̄ − μhyp) / (sX/√n). But the standard error of the mean shrinks with increasing n. We need a measure of the size of the difference between our observation and the null hypothesis that doesn't depend on experimental parameters like n.

  5. Effect size: the difference between our observation and the null hypothesis, in terms of standard deviations. Formally, effect size is "an estimate of the degree to which the treatment effect is present in the population, expressed as a number free of the original measurement unit". One example of effect size is Cohen's d: d = (X̄ − μhyp) / σ, where μhyp is the mean for the null hypothesis. This is just like converting the sample mean to a z score. A more common example is Hedges' g, which is used when we don't know the standard deviation of the population. It's our best estimate of Cohen's d: g = (X̄ − μhyp) / sX. This is just like calculating a value for the t-distribution, except we divide by sX instead of by the standard error of the mean.

  6. Back to our made-up IQ example, where we had a mean of 100.3 and a population standard deviation of 15. The effect size is d = (100.3 − 100)/15 = 0.02. The study found that UW IQs are only 0.02 standard deviations above 100. This is a small effect size, even though it is statistically significant.
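
The two effect-size formulas from slide 5 are easy to compute directly. A minimal sketch (the helper names cohens_d and hedges_g are illustrative, not from the slides):

```python
# Effect size for the IQ example. Cohen's d uses the population standard
# deviation; Hedges' g substitutes the sample standard deviation when the
# population value is unknown.
def cohens_d(xbar, mu_hyp, sigma):
    """Difference from the null-hypothesis mean, in population-SD units."""
    return (xbar - mu_hyp) / sigma

def hedges_g(xbar, mu_hyp, s_x):
    """Same idea, but dividing by the sample standard deviation."""
    return (xbar - mu_hyp) / s_x

print(cohens_d(100.3, 100.0, 15.0))   # about 0.02: a very small effect
```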

  7. Reporting effect size has the advantage that, since it doesn't depend on n, the value is more easily compared across studies. A conventional interpretation of effect size (in absolute value) is that 0.8 is large, 0.5 is medium, and 0.2 is small.

  8. There are two unavoidable types of errors in hypothesis testing: Type I and Type II errors.

  Decision \ True state      H0 is true            H0 is false
  Fail to reject H0          Correct (1 − α)       Type II error (β)
  Reject H0                  Type I error (α)      Correct rejection (1 − β = power)

  A Type I error is when we reject H0 when it is actually true: Pr(Type I error) = α. A Type II error is when we fail to reject H0 even though it is false: Pr(Type II error) = β. More commonly, we talk about the probability of correctly rejecting H0. The probability of this happening is called power: power = Pr(correct rejection of H0) = 1 − β.

  9. The decision table again:

  Decision \ True state      H0 is true            H0 is false
  Fail to reject H0          Correct (1 − α)       Type II error (β)
  Reject H0                  Type I error (α)      Correct rejection (1 − β = power)

  10. Type I errors (α). A Type I error occurs when our statistic (z or t) falls within the rejection region even though the null hypothesis is true. For example, for a one-tailed z-test using α = .05, the upper 5% of the distribution of z scores forms the rejection region. Pr(Type I error) = α. Alpha (α) is therefore the probability that a Type I error will occur. [Figure: standard normal distribution of z scores with the α = .05 rejection region shaded.]

  11. Type II errors. A Type II error happens when the null hypothesis is false but you fail to reject it anyway. To calculate the probability of a Type II error, we need to know the true distribution of the population. This is weird, because the true distribution of the population is the thing we're trying to figure out in the first place.

  12. Type II errors: beta (β) and power (1 − β). Type II errors happen only if the null hypothesis is false. For example, suppose we're conducting a one-tailed z-test with α = .05, and the true population mean has a z score of 1 (μtrue = 1). We still use the same critical value that we did under the null hypothesis (zcrit = 1.645), but now the distribution of z-values is centered around z = 1. [Figure: two normal curves, one centered at μhyp = 0 and one at μtrue = 1, with zcrit = 1.645 marked; the blue shaded area under the true distribution beyond zcrit is 1 − β = power, and the red shaded area under the null distribution beyond zcrit is α = Pr(Type I error).] The blue shaded region is the probability of correctly rejecting the null hypothesis. Type II errors happen when z falls outside the rejection region, so the probability of making a Type II error is 1 minus the blue shaded area.

  13. Type II errors: beta (β). Calculating power, the probability of correctly rejecting H0: 1) Find the rejection region under the null hypothesis: with α = .05, zcrit = 1.645 (Table A, column C), so the rejection region is z > 1.645. 2) Relative to the true distribution, the rejection boundary is shifted down by μtrue − μhyp = 1: 1.645 − 1 = 0.645, so the new rejection region is z > 0.645. 3) Find the area in the new rejection region: the area for z above 0.645 is .2611 (Table A, column C), and that area is the power. [Figure: the same two distributions, with μhyp = 0, μtrue = 1, zcrit = 1.645, α shaded in red, and 1 − β = power shaded in blue.]

  14. Power = 1 − β. Power is the probability of correctly rejecting the null hypothesis, which is the area of the true distribution inside the rejection region. Power in this example is Pr(z > 0.645) = 1 − β = .2611. More power is good: power is the probability of correctly finding an effect in your experiment. A 'desirable' level of power is .8.
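
The three-step table lookup above can be reproduced numerically. A minimal sketch assuming SciPy, with an illustrative helper name; the small difference from .2611 is just rounding in Table A (0.645 looked up as 0.64):

```python
from scipy.stats import norm

# Power of a one-tailed z-test when the true mean, in z units, is mu_true.
def z_test_power(mu_true, alpha=0.05):
    z_crit = norm.ppf(1 - alpha)        # 1.645 for alpha = .05
    # Shift the rejection boundary into the frame of the true distribution,
    # then take the area beyond it.
    return norm.sf(z_crit - mu_true)

print(z_test_power(1.0))   # about 0.2595 (the slides read .2611 from Table A)
```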

  15. Example: Between 1930 and 1980 the year-to-year average temperature in the Northern Hemisphere varied with a standard deviation of 0.2482 degrees. Consider this to be the population standard deviation for temperatures. Suppose we wanted to calculate the mean of the temperatures over a random sample of 12 years. What is the standard error of this mean? It is 0.2482/√12 = 0.0716 degrees.

  16. Example, continued: Between 1930 and 1980 the year-to-year average temperature in the Northern Hemisphere varied with a standard deviation of 0.2482 degrees. Consider this to be the population standard deviation for temperatures. (These values were taken from the NOAA website.) How far above the 1930 to 1980 mean would a 12-year sample mean have to be to exceed 99% of all such sample means? From Table A, Pr(z > 2.33) = .01, so x = (2.33)(.0716) = .1668 degrees. [Figure: distribution of 12-year mean temperatures with x = .1668 marked.]

  17. If we were to calculate the temperature averaged over the 12 years from 2000 to 2011, and the null hypothesis were true, there would be a 1% chance that we'd find an increase of .1668 degrees or more over the 1930 to 1980 average. In other words, if the null hypothesis is true, that the temperature has not increased since the period 1930 to 1980, we'd make a Type I error if we found an average temperature increase of .1668 degrees or more. The probability of this Type I error is 1%.

  18. Now suppose that you actually measured that the average temperature over the years 2000 to 2011 has increased by 0.3 degrees above the average from 1930 to 1980. If we assume an alpha value of .01, what is the probability of a Type II error? In other words, given a new normal distribution of sample means centered at 0.3 with a standard error of .0716, what is the probability that a sample mean will be less than the old critical value of .1668? Pr(x < .1668) = Pr(z < (.1668 − .3)/.0716) = Pr(z < −1.86) = .0314.

  19. That is, if the real temperature increase is 0.3 degrees, we would fail to detect it 3.14% of the time. If the probability of a Type II error is .0314, then what is the probability that we will correctly detect a temperature change of 0.3 degrees or higher? Pr(correct rejection of H0) = 1 − Pr(Type II error) = 1 − .0314 = .9686. The probability of a correct rejection of H0 is the power of the test. The power here is .9686: we will correctly reject H0 96.86% of the time.
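
Here is a minimal Python sketch (not from the slides) that reproduces the whole temperature example from slides 15 through 19, assuming SciPy's norm:

```python
from math import sqrt
from scipy.stats import norm

sigma, n = 0.2482, 12               # 1930-1980 SD of yearly means; 12-year sample
sem = sigma / sqrt(n)               # about 0.0716 degrees

alpha = 0.01
x_crit = norm.ppf(1 - alpha) * sem  # about 0.167 degrees (slides: 2.33 * .0716 = .1668)

true_increase = 0.3
beta = norm.cdf((x_crit - true_increase) / sem)  # Pr(Type II error), about .031
power = 1 - beta                                 # about .969

print(f"SEM = {sem:.4f}, critical increase = {x_crit:.4f}")
print(f"beta = {beta:.4f}, power = {power:.4f}")
```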

  20. We just showed that we have a 96.86% chance of correctly detecting a 0.3-degree average temperature increase since 2000. The actual measured temperature increase since the 1930 to 1980 period has been 1.67 degrees. The probability of this happening under the null hypothesis is on the order of Pr(z > 21): not quite impossible, but effectively so. [Figure: Northern Hemisphere relative temperature (deg) by year, 1930 to 2010.]

  21. Example: IQs are normally distributed with a mean of 100 and a standard deviation of 15. Suppose you sampled 100 students, calculated a sample mean, and are about to test for a significant increase in IQ using a one-tailed z-test with α = .05. What is the power of this test under the assumption that the true population mean for the group that we're sampling is 103? Answer: First, we'll convert everything to z-scores. This makes μhyp = 0 (always), and μtrue = (103 − 100)/(15/√100) = 3/1.5 = 2.

  22. To calculate power: 1) Find the critical value of z under the null hypothesis: with α = .05, zcrit = 1.64 (Table A, column C), so the rejection region is z > 1.64. 2) Relative to the true distribution, the rejection boundary is shifted over by μtrue − μhyp = 2 − 0 = 2: z > 1.64 − 2, which is z > −0.36. 3) Find 1 − β, the area in the new rejection region: Pr(z > −0.36) = .1406 + .5 = .6406. A power of 1 − β = .6406 means that there is a 64.06% chance of correctly rejecting the null hypothesis (that is, of not making a Type II error).
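
The same calculation in code, as a minimal sketch assuming SciPy (the slide's .6406 comes from using the table value zcrit = 1.64; the exact zcrit of 1.645 gives a nearly identical answer):

```python
from math import sqrt
from scipy.stats import norm

mu0, mu_true, sigma, n, alpha = 100, 103, 15, 100, 0.05
sem = sigma / sqrt(n)                # 1.5
mu_true_z = (mu_true - mu0) / sem    # 2.0, as on the slide

z_crit = norm.ppf(1 - alpha)         # about 1.645 (the slide rounds to 1.64)
power = norm.sf(z_crit - mu_true_z)  # Pr(z > -0.355), about .64

print(f"power = {power:.4f}")        # the table lookup on the slide gives .6406
```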

  23. Things that affect power: variability of the measure. Power increases as the standard error of the mean decreases (illustrated with μtrue = 1.0 and α = .05). Ways to decrease the standard error of the mean: increase the sample size, or make more accurate measurements.

  24. Things that affect power: level of significance (α). With μtrue = 1.0, power decreases as alpha (α) decreases: power = .2595 at α = .05, .1685 at α = .025, .0924 at α = .01, and .0183 at α = .001. This is a classic tradeoff: the less willing we are to make a Type I error, the more likely we are to make a Type II error.

  25. Things that affect power: difference between μtrue and μhyp. With α = .05, power increases with effect size, that is, as the difference between the means for the true population and the null hypothesis increases: power = .0815 for μtrue = 0.25, .1261 for μtrue = 0.5, .2595 for μtrue = 1.0, and .9907 for μtrue = 4.0. We don't have control over this: μtrue is the one thing we don't know (but want to estimate).
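
The specific power values quoted on slides 24 and 25 can be reproduced with a short loop. A minimal sketch assuming SciPy, with an illustrative helper name:

```python
from scipy.stats import norm

def one_tailed_z_power(mu_true, alpha):
    """Power of a one-tailed z-test when the true mean, in z units, is mu_true."""
    return norm.sf(norm.ppf(1 - alpha) - mu_true)

# Slide 24: power shrinks as alpha shrinks (mu_true = 1.0).
for a in (0.05, 0.025, 0.01, 0.001):
    print(a, round(one_tailed_z_power(1.0, a), 4))    # .2595, .1685, .0924, .0183

# Slide 25: power grows with the true effect (alpha = .05).
for m in (0.25, 0.5, 1.0, 4.0):
    print(m, round(one_tailed_z_power(m, 0.05), 4))   # .0815, .1261, .2595, .9907
```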

  26. Power curve: shows how power increases with effect size. [Figure: power vs. effect size (d) for a two-tailed test with α = .05 and sample size = 50.]

  27. [Power curves: α = 0.01, 1-tail, 1 mean; power vs. effect size (d) for sample sizes from n = 8 to n = 1000.]

  28. [Power curves: α = 0.05, 1-tail, 1 mean; power vs. effect size (d) for sample sizes from n = 8 to n = 1000.]

  29. [Power curves: α = 0.01, 2-tails, 1 mean; power vs. effect size (d) for sample sizes from n = 8 to n = 1000.]

  30. [Power curves: α = 0.05, 2-tails, 1 mean; power vs. effect size (d) for sample sizes from n = 8 to n = 1000.]

  31. [Power curves: α = 0.01, 1-tail, 2 means; power vs. effect size (d) for sample sizes from n = 8 to n = 1000.]

  32. [Power curves: α = 0.05, 1-tail, 2 means; power vs. effect size (d) for sample sizes from n = 8 to n = 1000.]

  33. [Power curves: α = 0.01, 2-tails, 2 means; power vs. effect size (d) for sample sizes from n = 8 to n = 1000.]

  34. [Power curves: α = 0.05, 2-tails, 2 means; power vs. effect size (d) for sample sizes from n = 8 to n = 1000.]
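
The values read off these power curves can also be computed numerically with the noncentral t distribution. This is a sketch assuming SciPy, not necessarily the method used to draw the original charts; the helper name and its `two_tails`/`two_means` flags are illustrative, and the two-sample case assumes equal group sizes n:

```python
from math import sqrt
from scipy.stats import nct, t

def t_test_power(d, n, alpha=0.05, two_tails=True, two_means=False):
    """Approximate power of a t-test for effect size d and sample size n (per group)."""
    if two_means:
        df, ncp = 2 * (n - 1), d * sqrt(n / 2)   # independent samples, equal group sizes
    else:
        df, ncp = n - 1, d * sqrt(n)             # one-sample test
    a = alpha / 2 if two_tails else alpha
    t_crit = t.ppf(1 - a, df)
    power = nct.sf(t_crit, df, ncp)              # upper rejection region
    if two_tails:
        power += nct.cdf(-t_crit, df, ncp)       # (usually negligible) lower region
    return power

# For example, the worked examples on slide 35:
print(t_test_power(0.4, 50, alpha=0.05, two_tails=True))    # about 0.8
print(t_test_power(0.6, 30, alpha=0.01, two_tails=False))   # about 0.8
```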

  35. Example: Suppose we're conducting a two-tailed t-test of one mean with α = .05 and a sample size of n = 50. How large an effect size do we need to obtain a power of 0.8? Answer: Looking at the appropriate family of power curves (α = 0.05, 2-tails, 1 mean), the curve for n = 50 passes through a power of 0.8 when the effect size is about 0.4. Example: Suppose we're conducting a one-tailed t-test of one mean with α = .01 and we have an effect size of 0.6. How large a sample size do we need to get a power of 0.8? Answer: Looking at the appropriate family of power curves (α = 0.01, 1-tail, 1 mean) at an effect size of 0.6, the curve for n = 30 passes through a power of 0.8, so we need a sample size of about 30.

  36. Example: You decide to sample the test scores of 63 dazzling cats from a population and obtain a mean test score of 25.6 and a standard deviation of 2.77. Using an alpha value of α = 0.01, is this observed mean significantly different from an expected test score of 25? What is the effect size? What is the power?

  37. Example: You decide to sample the test scores of 63 dazzling cats from a population and obtain a mean test score of 25.6 and a standard deviation of 2.77. Using an alpha value of α = 0.01, is this observed mean significantly different from an expected test score of 25? What is the effect size? What is the power? Answer (two-tailed t-test for one mean): t(62) = (25.6 − 25)/(2.77/√63) = 1.72 and tcrit = ±2.6575, so we fail to reject H0. The test scores of dazzling cats are not significantly different from 25. Effect size: g = (25.6 − 25)/2.77 = 0.2166. Power = 0.1759.
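
A minimal sketch of the dazzling-cats calculation (not from the slides), assuming SciPy; the power computation uses the noncentral t distribution, treating the observed Hedges' g as the true effect size:

```python
from math import sqrt
from scipy.stats import nct, t

n, xbar, s, mu0, alpha = 63, 25.6, 2.77, 25.0, 0.01

sem = s / sqrt(n)
t_obs = (xbar - mu0) / sem             # about 1.72
t_crit = t.ppf(1 - alpha / 2, n - 1)   # about 2.6575 for df = 62
g = (xbar - mu0) / s                   # Hedges' g, about 0.2166

# Two-tailed power, with g taken as the true effect size.
ncp = g * sqrt(n)
power = nct.sf(t_crit, n - 1, ncp) + nct.cdf(-t_crit, n - 1, ncp)

print(t_obs, t_crit, g, power)         # fail to reject H0; power about 0.18
```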

  38. Power curves for the independent-samples t-test (2 means): remember, the power of a test is the probability of correctly rejecting H0 when it is false. Power depends on α, nX, nY, and the effect size.

  39. Example (again): The heights of the 45 students in Psych 315 with fathers above 70 inches have a mean of 66.8 inches and a standard deviation of 4.14 inches. The heights of the remaining 51 students have a mean of 64.9 and a standard deviation of 3.62 inches. What is the effect size? (Use α = .05, two-tailed.) The pooled standard deviation is about 3.87 inches, so g = (66.8 − 64.9)/3.87 ≈ 0.49. This is a medium effect size. Remember, it was a 'significant' t-test. Assuming that our observed means and standard deviations reflect the true populations, what is the power of this test? How large a sample would we need to obtain a power of 0.8?

  40. [Power curves: α = 0.05, 2-tails, 2 means; power vs. effect size (d) for sample sizes from n = 8 to n = 1000.] For an effect size of 0.49 and sample sizes of about 50 per group, the power is about .69. We'd need between 50 and 75 subjects per group to get a power of 0.8.
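
For the two-sample heights example, a minimal sketch assuming SciPy; the noncentral-t power comes out around .66, close to the roughly .69 read off the chart for n of about 50 per group:

```python
from math import sqrt
from scipy.stats import nct, t

# Heights example: students with tall fathers vs. the rest of the class.
n_x, mean_x, s_x = 45, 66.8, 4.14
n_y, mean_y, s_y = 51, 64.9, 3.62

df = n_x + n_y - 2
s_pooled = sqrt(((n_x - 1) * s_x**2 + (n_y - 1) * s_y**2) / df)   # about 3.87
g = (mean_x - mean_y) / s_pooled                                  # about 0.49

# Approximate power, taking g as the true effect size (two-tailed, alpha = .05).
ncp = g / sqrt(1 / n_x + 1 / n_y)
t_crit = t.ppf(0.975, df)
power = nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)       # about 0.66

print(g, power)
```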

