Review You run a t-test and get a result of t = 0.5. What is your conclusion? • Reject the null hypothesis because t is bigger than expected by chance • Reject the null hypothesis because t is smaller than expected by chance • Keep the null hypothesis because t is bigger than expected by chance • Keep the null hypothesis because t is smaller than expected by chance
Review Your null hypothesis states µ = 50, and your sample has a mean of M = 58. If your t statistic equals 4, what is the standard error of the mean (s_M)? • Depends on the sample size • 0.5 • 4 • 2
Review Your null hypothesis predicts the population mean should be µ0 = 100. You measure a sample of 25 people and calculate statistics of M = 94 and s = 10. What is the value of your t statistic? • 5 • -0.6 • -3 • 2.4
Hypothesis Testing 10/10
Where Am I? • Wake up after a rough night in unfamiliar surroundings • Still in Boulder? • Expected if in Boulder (large likelihood) • Surprising but not impossible (moderate likelihood) • Couldn't happen IF in Boulder (likelihood near zero) → Can't be in Boulder
Steps of Hypothesis Testing • State clearly the two hypotheses • Determine which is the null hypothesis (H0) and which is the alternative hypothesis (H1) • Compute a relevant test statistic from the sample • Find the likelihood function of the test statistic according to the null hypothesis • Choose alpha level (α): how willing you are to abandon null (usually .05) • Find the critical value: cutoff with probability α of being exceeded under H0 • Compare the actual result to the critical value • Less than critical value → retain null hypothesis • Greater than critical value → reject null hypothesis; accept alternative hypothesis
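To make these steps concrete, here is a minimal sketch of a one-tailed, one-sample t-test in Python. It assumes SciPy and NumPy are available; the hypotheses and sample values are made up for illustration.

```python
# Sketch of the hypothesis-testing steps for a one-sample t-test
# (illustrative values only; assumes numpy and scipy are installed).
import numpy as np
from scipy import stats

# Steps 1-2: H0: mu = 50 (null), H1: mu > 50 (alternative)
mu0 = 50.0
sample = np.array([52, 55, 49, 58, 61, 50, 57, 54])  # hypothetical data

# Step 3: compute the test statistic t = (M - mu0) / (s / sqrt(n))
n = len(sample)
M = sample.mean()
s = sample.std(ddof=1)          # sample standard deviation
s_M = s / np.sqrt(n)            # estimated standard error of the mean
t = (M - mu0) / s_M

# Steps 4-6: the likelihood function under H0 is a t distribution with
# n - 1 df; choose alpha and find the one-tailed critical value.
alpha = 0.05
t_crit = stats.t.ppf(1 - alpha, df=n - 1)

# Step 7: compare the actual result to the critical value.
if t > t_crit:
    print(f"t = {t:.2f} > {t_crit:.2f}: reject H0")
else:
    print(f"t = {t:.2f} <= {t_crit:.2f}: retain H0")
```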
Specifying Hypotheses • Both hypotheses are statements about population parameters • Null Hypothesis (H0) • Always more specific, e.g. 50% chance, mean of 100 • Usually the less interesting, "default" explanation • Alternative Hypothesis (H1) • More interesting – researcher’s goal is usually to support the alternative hypothesis • Less precise, e.g. > 50% chance, > 100
Test Statistic • Statistic computed from sample to decide between hypotheses • Relevant to hypotheses being tested • Based on mean if hypotheses are about means • Based on number correct (frequency) if hypotheses are about probability correct • Sampling distribution according to null hypothesis must be fully determined • Can only depend on data and on values assumed by H0 • Often a complex formula with little intuitive meaning • Inferential statistic: Only used in testing reliability
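As a small illustration of how the test statistic depends only on the data and the value assumed by H0, here is a sketch of the one-sample t formula in Python; the numbers are hypothetical.

```python
# One-sample t statistic: depends only on the sample summary (M, s, n)
# and the value mu0 assumed by H0.
import math

def one_sample_t(M, s, n, mu0):
    """t = (M - mu0) / (s / sqrt(n))"""
    s_M = s / math.sqrt(n)        # estimated standard error of the mean
    return (M - mu0) / s_M

# Hypothetical numbers: M = 104, s = 12, n = 36, H0 says mu0 = 100
print(one_sample_t(104, 12, 36, 100))   # -> 2.0
```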
Likelihood Function • Probability distribution of a statistic according to a hypothesis • Gives probability of obtaining any possible result • Usually interested in distribution of test statistic according to null hypothesis • Same as sampling distribution, assuming the population is accurately described by the hypothesis • Test statistic chosen because we know its likelihood function • Binomial test: Binomial distribution • t-test: t distribution
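The two likelihood functions named above can be evaluated directly; the sketch below assumes SciPy is available and uses illustrative numbers.

```python
# Likelihood functions under H0 for the two tests above
# (sketch; assumes scipy is installed; numbers are illustrative).
from scipy import stats

# Binomial test: if H0 says p = .5, the number correct out of n = 10
# trials follows a binomial(10, .5) distribution.
print(stats.binom.pmf(8, 10, 0.5))   # probability of exactly 8 correct

# t-test: under H0 the t statistic follows a t distribution with n - 1 df.
print(stats.t.pdf(0.0, df=24))       # density at t = 0 when n = 25
```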
Critical Value • Cutoff for test statistic between retaining and rejecting null hypothesis • If test statistic is beyond critical value, null will be rejected • Otherwise, null will be retained • Before collecting data: What strength of evidence will you require to reject null? • How many correct outcomes? • How big a difference between M and µ0, relative to s_M? • Critical region • Range of values that will lead to rejecting null hypothesis • All values beyond critical value [Figures: likelihood functions under H0, plotted as probability vs. t and as probability vs. frequency, with the critical region marked]
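Critical values are looked up from the likelihood function of the test statistic under H0; a minimal sketch, assuming SciPy and an example sample size of n = 25:

```python
# One-tailed critical value for a chosen alpha level
# (sketch; assumes scipy; df = 24 corresponds to n = 25).
from scipy import stats

alpha, df = 0.05, 24
t_crit = stats.t.ppf(1 - alpha, df)   # exceeded with probability alpha under H0

t_obs = 1.2                           # hypothetical observed t
print(round(t_crit, 2))               # about 1.71
print("reject H0" if t_obs > t_crit else "retain H0")
```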
Types of Errors • Goal: Reject null hypothesis when it's false; retain it when it's true • Two ways to be wrong • Type I Error: Null is correct but you reject it • Type II Error: Null is false but you retain it • Type I Error rate • IF H0 is true, probability of mistakenly rejecting H0 • Proportion of false theories we conclude are true • E.g., proportion of useless treatments that are deemed effective • Logic of hypothesis testing is founded on controlling Type I Error rate • Set critical value to give desired Type I Error rate
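A quick simulation can show what "controlling the Type I Error rate" means: if H0 really is true, the fraction of samples that land beyond the critical value should come out near the chosen rate. This sketch assumes NumPy and SciPy; all values are illustrative.

```python
# Simulation sketch: when H0 is true, the long-run proportion of
# rejections is the Type I error rate, which the critical value pins
# at roughly alpha (assumes numpy and scipy are installed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0, sigma, n, alpha = 50, 10, 25, 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # two-tailed cutoff

rejections = 0
n_experiments = 10_000
for _ in range(n_experiments):
    sample = rng.normal(mu0, sigma, n)          # H0 really is true here
    t = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(n))
    if abs(t) > t_crit:
        rejections += 1                         # a Type I error

print(rejections / n_experiments)               # close to 0.05
```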
Alpha Level • Choice of acceptable Type I Error rate • Usually .05 in psychology • Higher → more willing to abandon null hypothesis • Lower → require stronger evidence before abandoning null hypothesis • Determines critical value • Under the sampling distribution of the test statistic according to the null hypothesis, the probability of a result beyond the critical value is α [Figure: sampling distribution from H0 over the test statistic, with the area α beyond the critical value marked]
Doping Analogy • Measure athletes' blood for signs of doping • Cheaters have high RBCs, but even honest people vary • What rule to use? • Must set some cutoff, and punish anyone above it • Will inevitably punish some innocent people • H0 likelihood function is like distribution of innocent athletes' RBCs • Cutoff determines fraction of innocent people that get unfairly punished • This fraction is alpha [Figure: distribution of innocent athletes' RBC levels, split at the cutoff into 'Don't Punish' and 'Punish' regions]
Power • Type II Error rate • IF H0 is false, probability of failing to reject it • E.g., fraction of cheaters that don't get caught • Power • IF H0 is false, probability of correctly rejecting it • Equal to one minus Type II Error rate • E.g., fraction of cheaters that get caught • Power depends on sample size • Choose sample size to give adequate power • Researchers must make a guess at effect size to compute power [Figure: overlapping H0 and H1 distributions with the Type I error rate (α), Type II error rate, and power marked]
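One common way to compute power (not necessarily the one used in this course) is through the noncentral t distribution. The sketch below assumes SciPy, a one-tailed one-sample test, and a guessed effect size d; the numbers are illustrative.

```python
# Power sketch for a one-tailed one-sample t-test, using the noncentral
# t distribution (assumes scipy; effect size d is the researcher's guess).
import math
from scipy import stats

d, n, alpha = 0.5, 25, 0.05            # guessed effect size, sample size
df = n - 1
nc = d * math.sqrt(n)                  # noncentrality parameter under H1
t_crit = stats.t.ppf(1 - alpha, df)    # one-tailed critical value

beta = stats.nct.cdf(t_crit, df, nc)   # Type II error rate
power = 1 - beta                       # probability of rejecting a false H0
print(round(power, 2))                 # about 0.78 for these inputs
```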
Two-Tailed Tests • Sometimes want to detect effects in either direction • Drugs that help or drugs that hurt • Formalized in alternative hypothesis • µ < µ0 or µ > µ0 • Two critical values, one in each tail • Type I error rate is sum from both critical regions • Need to divide errors between both tails • Each gets α/2 (2.5%) [Figure: t distribution centered on µ0 with 'Reject H0' regions of area α/2 beyond -tcrit and +tcrit]
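For a two-tailed test the cutoff comes from α/2 in each tail; a minimal sketch, assuming SciPy and an example of df = 24:

```python
# Two-tailed critical values: alpha is split evenly across the two tails
# (sketch; assumes scipy; df = 24 is just an example).
from scipy import stats

alpha, df = 0.05, 24
t_crit = stats.t.ppf(1 - alpha / 2, df)      # upper cutoff; lower is -t_crit
print(round(-t_crit, 2), round(t_crit, 2))   # about -2.06 and 2.06

t_obs = -2.4                                 # hypothetical result
print(abs(t_obs) > t_crit)                   # True -> reject H0
```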
One-Tailed vs. Two-Tailed Tests [Figures: one-tailed test with area α beyond +tcrit only; two-tailed test with area α/2 beyond both -tcrit and +tcrit]
An Alternative View: p-values • Reversed approach to hypothesis testing • After you collect sample and compute test statistic • How big would α have to be to reject H0? • p-value • Measure of how consistent data are with H0 • Probability of a value equal to or more extreme than what you actually got • Large p-value → H0 is a good explanation of the data • Small p-value → H0 is a poor explanation of the data • p > α: Retain null hypothesis • p < α: Reject null hypothesis; accept alternative hypothesis • Researchers generally report p-values, because then reader can choose own alpha level • E.g. “p = .03” • If willing to allow 5% error rate, then accept result as reliable • If more stringent, say 1% (α = .01), then remain skeptical [Figures: t distributions showing tcrit for α = .05, α = .01, and α = .03, and an observed t = 2.15 with p = .03]
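Computing a p-value amounts to evaluating the tail area of the null likelihood function beyond the observed statistic; a sketch assuming SciPy, with illustrative numbers:

```python
# p-value sketch: probability of a result at least as extreme as the one
# observed, computed under H0 (assumes scipy; numbers are illustrative).
from scipy import stats

t_obs, df = 2.15, 24
p_one = stats.t.sf(t_obs, df)            # one-tailed: P(t >= t_obs | H0)
p_two = 2 * stats.t.sf(abs(t_obs), df)   # two-tailed: both directions

alpha = 0.05
print(round(p_one, 3), p_one < alpha)    # small p -> reject H0 at this alpha
```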
Review Later this semester, we’ll learn about hypothesis tests for distributions of nominal variables. For example, we’ll poll everyone on their favorite colors and count the frequency for each color. What would be a reasonable null hypothesis? • Each color is chosen by the same number of people in the class • Each color would be chosen by the same number of people in the population • Some colors are more popular than others among people in this class • Some colors are more popular than others among the population
Review If you run a 1-tailed t-test with a sample size of n = 10 and α = .05, the critical value is tcrit = 1.81. Now imagine you ran the same test, but 2-tailed. Which of the following are the new critical values? (You should be able to rule out all wrong answers.) • 1.21, 2.41 • -2.01, 2.45 • -2.23, 2.23 • -1.67, 1.67
Review You run a t-test and get a result of t = 0.56 and p = .32. If your chosen alpha level was 5%, what do you conclude? • Retain the null hypothesis, because p > α • Reject the null hypothesis, because p > α • Retain the null hypothesis, because p < t • Reject the null hypothesis, because p < t