Chapter 22 Using Inferential Statistics to Test Hypotheses
Inferential Statistics • A means of drawing conclusions about a population (i.e., estimating population parameters), given data from a sample • Based on laws of probability
Sampling Distribution of the Mean • A theoretical distribution of means for an infinite number of samples drawn from the same population • Is always normally distributed • Has a mean that equals the population mean • Has a standard deviation (SD) called the standard error of the mean (SEM) • SEM is estimated from a sample SD and the sample size
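To make the SEM definition concrete, here is a minimal Python sketch (the sample values are hypothetical) that estimates the SEM as the sample SD divided by the square root of the sample size:

```python
# Minimal sketch: estimating the standard error of the mean (SEM) from a sample.
# The data values are hypothetical.
import math
import statistics

sample = [3.1, 2.9, 3.4, 3.0, 3.3, 2.8, 3.2, 3.1]  # hypothetical measurements

n = len(sample)
sd = statistics.stdev(sample)   # sample SD (n - 1 denominator)
sem = sd / math.sqrt(n)         # SEM = SD / sqrt(n)

print(f"mean = {statistics.mean(sample):.3f}, SD = {sd:.3f}, SEM = {sem:.4f}")
```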
Statistical Inference—Two Forms • Estimation of parameters • Hypothesis testing (more common)
Estimation of Parameters • Used to estimate a single parameter (e.g., a population mean) • Two forms of estimation: • Point estimation • Interval estimation
Point Estimation Calculating a single statistic to estimate the population parameter (e.g., the mean birth weight of infants born in the U.S.)
Interval Estimation • Calculating a range of values within which the parameter has a specified probability of lying • A confidence interval (CI) is constructed around the point estimate • The upper and lower limits are confidence limits
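A sketch of interval estimation, using SciPy's t distribution to build a 95% CI around a hypothetical sample mean:

```python
# Sketch: a 95% confidence interval around a sample mean, built from the
# t distribution. The data are hypothetical.
import math
import statistics
from scipy import stats

sample = [3.1, 2.9, 3.4, 3.0, 3.3, 2.8, 3.2, 3.1]  # hypothetical values

n = len(sample)
mean = statistics.mean(sample)                  # point estimate
sem = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean

t_crit = stats.t.ppf(0.975, df=n - 1)           # two-tailed 95% critical value
lower, upper = mean - t_crit * sem, mean + t_crit * sem  # confidence limits

print(f"point estimate = {mean:.3f}, 95% CI = ({lower:.3f}, {upper:.3f})")
```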
Hypothesis Testing • Based on rules of negative inference: research hypotheses are supported if null hypotheses can be rejected • Involves statistical decision making to either: • accept the null hypothesis, or • reject the null hypothesis
Hypothesis Testing (cont’d) • Researchers compute a test statistic from their data, then determine whether it falls in the critical region of the relevant theoretical distribution (i.e., beyond the critical value) • If the test statistic indicates that the observed result would be improbable if the null hypothesis were true, the result is statistically significant • A nonsignificant result means that any observed difference or relationship could have resulted from chance fluctuations
Statistical Decisions are Either Correct or Incorrect Two types of incorrect decisions: • Type I error: a null hypothesis is rejected when it should not be rejected • Risk of a Type I error is controlled by the level of significance (alpha), e.g., α = .05 or .01 • Type II error: failure to reject a null hypothesis when it should be rejected
One-Tailed and Two-Tailed Tests • Two-tailed tests: hypothesis testing in which both ends (tails) of the sampling distribution are used to define the region of improbable values • One-tailed tests: the critical region of improbable values is entirely in one tail of the distribution, the tail corresponding to the direction of the hypothesis
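The sketch below, using hypothetical attitude scores and SciPy's one-sample t-test, shows how a one-tailed (directional) p-value relates to the two-tailed p-value for the same test statistic:

```python
# Sketch: one- vs. two-tailed p-values for the same test statistic.
# The attitude scores and hypothesized mean are hypothetical.
from scipy import stats

scores = [52, 48, 55, 50, 47, 53, 49, 54, 51, 50]  # hypothetical attitude scores
mu0 = 48                                           # hypothesized population mean

t_stat, p_two_tailed = stats.ttest_1samp(scores, popmean=mu0)  # two-tailed by default

# For the directional hypothesis "mean > mu0", halve the two-tailed p-value
# when the statistic falls in the hypothesized direction.
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2

print(f"t = {t_stat:.2f}, two-tailed p = {p_two_tailed:.3f}, "
      f"one-tailed p = {p_one_tailed:.3f}")
```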
Critical Region in the Sampling Distribution for a One-Tailed Test: IVF Attitudes Example
Critical Regions in the Sampling Distribution for a Two-Tailed Test: IVF Attitudes Example
Parametric Statistics • Involve the estimation of a parameter • Require measurements on at least an interval scale • Involve several assumptions (e.g., that variables are normally distributed in the population)
Nonparametric Statistics (Distribution-Free Statistics) • Do not estimate parameters • Involve variables measured on a nominal or ordinal scale • Have less restrictive assumptions about the shape of the variables’ distribution than parametric tests
Overview of Hypothesis-Testing Procedures • Select an appropriate test statistic • Establish the level of significance (e.g., α = .05) • Select a one-tailed or a two-tailed test • Compute test statistic with actual data • Calculate degrees of freedom (df) for the test statistic
Overview of Hypothesis-Testing Procedures (cont’d) • Obtain a tabled value for the statistical test • Compare the test statistic to the tabled value • Make decision to accept or reject null hypothesis
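A small illustration of the "tabled value" step, assuming a hypothetical t statistic, α = .05, and 18 degrees of freedom; SciPy's t distribution stands in for the printed table:

```python
# Sketch: obtain a critical value for the chosen alpha and df, then compare
# the computed test statistic against it. The numbers are hypothetical.
from scipy import stats

alpha = 0.05        # level of significance
df = 18             # degrees of freedom (hypothetical, e.g., n1 + n2 - 2)
t_computed = 2.31   # hypothetical test statistic computed from the data

t_critical = stats.t.ppf(1 - alpha / 2, df)  # two-tailed critical ("tabled") value

if abs(t_computed) >= t_critical:
    print(f"|t| = {abs(t_computed):.2f} >= {t_critical:.2f}: reject the null hypothesis")
else:
    print(f"|t| = {abs(t_computed):.2f} < {t_critical:.2f}: accept the null hypothesis")
```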
Commonly Used Bivariate Statistical Tests • t-Test • Analysis of variance (ANOVA) • Pearson’s r • Chi-square test
t-Test Tests the difference between two means • t-Test for independent groups (between subjects) • t-Test for dependent groups (within subjects)
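A sketch of both forms of the t-test with SciPy; all scores are hypothetical:

```python
# Sketch: independent-groups and dependent-groups t-tests with SciPy.
from scipy import stats

# Between subjects: two independent groups (hypothetical scores)
group_a = [24, 27, 22, 30, 26, 25, 28]
group_b = [21, 23, 20, 25, 22, 24, 19]
t_ind, p_ind = stats.ttest_ind(group_a, group_b)
print(f"independent t = {t_ind:.2f}, p = {p_ind:.3f}")

# Within subjects: paired (dependent) measurements on the same people
pre  = [10, 12, 9, 14, 11, 13]
post = [12, 15, 10, 17, 13, 14]
t_dep, p_dep = stats.ttest_rel(pre, post)
print(f"dependent t = {t_dep:.2f}, p = {p_dep:.3f}")
```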
Analysis of Variance (ANOVA) • Tests differences among three or more means • One-way ANOVA • Multifactor (e.g., two-way) ANOVA • Repeated measures ANOVA (within subjects)
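A minimal one-way ANOVA sketch with SciPy, comparing three hypothetical groups (multifactor and repeated measures designs require other routines, e.g., in statsmodels):

```python
# Sketch: one-way ANOVA comparing three group means (hypothetical data).
from scipy import stats

group1 = [5.1, 4.8, 5.5, 5.0, 4.9]
group2 = [5.9, 6.2, 5.8, 6.0, 6.1]
group3 = [5.4, 5.6, 5.2, 5.5, 5.3]

f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```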
Correlation • Pearson’s r, a parametric test • Tests whether the correlation between two variables differs significantly from zero • Used when measures are on an interval or ratio scale
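A sketch of Pearson's r with SciPy on two hypothetical interval-level variables:

```python
# Sketch: Pearson's r for two interval-level variables (hypothetical data).
from scipy import stats

anxiety = [12, 18, 15, 22, 9, 16, 20, 11]
pain    = [30, 42, 35, 50, 25, 38, 47, 28]

r, p_value = stats.pearsonr(anxiety, pain)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```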
Chi-Square Test • Tests the difference in proportions in categories within a contingency table • A nonparametric test
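A sketch of the chi-square test of independence on a hypothetical 2 x 2 contingency table:

```python
# Sketch: chi-square test on a 2 x 2 contingency table (hypothetical counts).
from scipy import stats

#                 improved  not improved
table = [[30, 20],          # treatment group
         [18, 32]]          # control group

chi2, p_value, df, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p_value:.3f}")
```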
Power Analysis • A method of reducing the risk of Type II errors and estimating their probability • With power = .80, the risk of a Type II error (β) is 20% • Method is frequently used to estimate how large a sample is needed to reliably test hypotheses
Power Analysis (cont’d) Four components in a power analysis: • Significance criterion (α) • Sample size (N) • Population effect size—the magnitude of the relationship between research variables (γ) • Power—the probability of obtaining a significant result (1-β)
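A sketch of a power analysis with statsmodels, solving for the per-group sample size of an independent t-test given α = .05, power = .80, and an assumed effect size (Cohen's d = 0.5, an illustrative value):

```python
# Sketch: estimating the sample size needed per group for an independent
# t-test. The effect size of 0.5 is an illustrative assumption.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group: {n_per_group:.1f}")
```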