Statistical Methods • Variability and Averages • The Normal Distribution • Comparing Population Variances • Experimental Error & Treatment Effects • Evaluating the Null Hypothesis • Assumptions Underlying Analysis of Variance
Variability and Averages • Graph 1: Bipolar disorder: different variability, same averages. [Figure: frequency of patients vs. controls across mood scores ranging from depressed to manic] • Graph 2: Blood sugar levels: same variability, different averages. [Figure: frequency of patients vs. controls across blood sugar levels ranging from low to high]
The Normal Distribution • The normal distribution is used in statistical analysis to make standardized comparisons across different populations (treatments). • The parametric statistical techniques we use assume that a population is normally distributed. • This allows us to compare two populations directly.
The Normal Distribution • The normal distribution is a mathematical function that defines the distribution of scores in a population with respect to two population parameters. • The first parameter is the Greek letter μ (mu), which represents the population mean. • The second parameter is the Greek letter σ (sigma), which represents the population standard deviation. • A different normal distribution is generated whenever the population mean or the population standard deviation differs.
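For reference, the density that these two parameters define is the standard normal-distribution formula (not shown on the original slide):

```latex
% Normal probability density with population mean \mu and standard deviation \sigma
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
```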
The Normal Distribution • Normal distributions with different population variances and the same population mean
The Normal Distribution • Normal distributions with different population means and the same population variance [Figure: two curves of f(x), one with μ = 1 and one with μ = 2]
The Normal Distribution • Normal distributions with different population variances and different population means [Figure: two curves with different parameter values, e.g. μ = 2, σ = 1 and μ = 3, σ = 3]
The Normal Distribution • Most samples of data are approximately normally distributed (but not all).
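To see how the two parameters shape the curve, here is a minimal Python sketch (parameter values are arbitrary, chosen only to mirror the figures above) that evaluates several normal densities on a common grid using scipy.stats.norm:

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-5, 10, 500)            # common grid of x values

# Same mean, different standard deviations (same centre, different spread)
same_mean = [norm.pdf(x, loc=2, scale=s) for s in (1, 2, 3)]

# Different means, same standard deviation (different centres, same spread)
same_sd = [norm.pdf(x, loc=m, scale=1) for m in (1, 2, 3)]

# Each array in same_mean / same_sd is one curve; plotting them with
# matplotlib (e.g. plt.plot(x, curve)) reproduces figures like those above.
```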
Comparing Populations in Terms of Shared Variances • When the null hypothesis (Ho) is approximately true, we have the following: • There is almost complete overlap between the two distributions of scores.
Comparing Populations in Terms of Shared Variances • When the alternative hypothesis (H1) is true, we have the following: • There is very little overlap between the two distributions.
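One way to make "overlap" concrete is to compute the shared area under two density curves. The sketch below (illustrative parameter values, not from the slides) approximates that area numerically for a nearly-true Ho and for a clearly-true H1:

```python
import numpy as np
from scipy.stats import norm

def overlap(mu1, mu2, sigma=1.0):
    """Approximate the shared area under two normal densities with equal sigma."""
    x = np.linspace(min(mu1, mu2) - 5 * sigma, max(mu1, mu2) + 5 * sigma, 2000)
    shared = np.minimum(norm.pdf(x, mu1, sigma), norm.pdf(x, mu2, sigma))
    return shared.sum() * (x[1] - x[0])   # simple Riemann-sum approximation

print(overlap(0.0, 0.2))   # means nearly equal (Ho ~ true): overlap close to 1
print(overlap(0.0, 4.0))   # means far apart (H1 true): overlap close to 0
```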
Shared Variance and the Null Hypothesis • The crux of the problem of rejecting the null hypothesis is that we can always attribute some portion of the difference we observe among treatment parameters to chance factors. • These chance factors are known as experimental error.
Experimental Error • All uncontrolled sources of variability in an experiment are considered potential contributors to experimental error. • There are two basic kinds of experimental error: • individual differences error • measurement error.
Estimates of Experimental Error • In a real experiment, both sources of experimental error contribute to the score of each subject. • The variability of subjects treated alike, i.e. within the same treatment condition or level, provides a measure of experimental error. • At the same time, the variability of subjects within each of the other treatment levels also provides an estimate of experimental error.
Estimates of Treatment Effects • The means of the different groups in the experiment should reflect the differences in the population means, if there are any. • The treatments are viewed as a systematic source of variability, in contrast to the unsystematic source of variability, experimental error. • This systematic source of variability is known as the treatment effect.
An Example • Two lecturers teach the same course. • Ho: lecturer does not influence exam score. • Experimental design • 10 students: 5 assigned to each lecturer. • IV: Lecturer (A1, A2) • DV: Exam score • Results: • A1: 16, 18, 10, 12, 19 • A1: Mean = 15 • A2: 4, 6, 8, 10, 2 • A2: Mean = 6
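This worked example can be checked in software; a one-way ANOVA on the two groups runs directly with scipy.stats.f_oneway (a sketch, assuming scipy is available in your environment):

```python
from scipy import stats

a1 = [16, 18, 10, 12, 19]   # Lecturer A1, mean = 15
a2 = [4, 6, 8, 10, 2]       # Lecturer A2, mean = 6

f_stat, p_value = stats.f_oneway(a1, a2)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # a small p would lead us to reject Ho
```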
Partitioning the Deviations • Each of the deviations from the grand mean has a specific name (writing $Y_{ij}$ for the score of subject $i$ in group $j$, $\bar{Y}_j$ for the mean of group $j$, and $\bar{Y}_T$ for the grand mean): • $Y_{ij} - \bar{Y}_T$ is called the total deviation. • $\bar{Y}_j - \bar{Y}_T$ is called the between groups deviation. • $Y_{ij} - \bar{Y}_j$ is called the within subjects deviation. • Dividing the deviation from the grand mean into these components is known as partitioning.
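The partition can be verified numerically on the lecturer data above; this sketch checks that the total sum of squared deviations equals the between-groups plus within-groups sums of squares:

```python
import numpy as np

groups = [np.array([16, 18, 10, 12, 19]),   # A1
          np.array([4, 6, 8, 10, 2])]       # A2

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()                                    # 10.5

ss_total   = ((all_scores - grand_mean) ** 2).sum()               # total deviations
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(ss_total, ss_between + ss_within)   # both 302.5: SS_total = SS_between + SS_within
```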
Evaluating the Null Hypothesis • The between groups deviation, $\bar{Y}_j - \bar{Y}_T$, represents the effects of both experimental error and the treatment. • The within subjects deviation, $Y_{ij} - \bar{Y}_j$, represents the effect of experimental error alone.
Evaluating the Null Hypothesis • If we consider the ratio of the between groups variability to the within groups variability, then we have $F = \dfrac{\text{experimental error} + \text{treatment effects}}{\text{experimental error}}$
Evaluating the Null Hypothesis • If the null hypothesis is true, the treatment effect is zero and the ratio is approximately one: $F = \dfrac{\text{error} + 0}{\text{error}} \approx 1$ • If the null hypothesis is false, the treatment effect is greater than zero and the ratio exceeds one: $F = \dfrac{\text{error} + \text{treatment effects}}{\text{error}} > 1$
Evaluating the Null Hypothesis • The ratio $F = \dfrac{MS_{\text{between}}}{MS_{\text{within}}}$ is compared to the F-distribution with the corresponding degrees of freedom.
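Carrying the lecturer example through, the mean squares and the F-ratio can be computed directly and referred to the F-distribution (a sketch using scipy.stats.f; the sums of squares come from the partitioning sketch above):

```python
from scipy.stats import f

ss_between, ss_within = 202.5, 100.0    # sums of squares from the partition above
df_between, df_within = 2 - 1, 10 - 2   # k - 1 groups, N - k subjects

ms_between = ss_between / df_between    # estimates error + treatment effects
ms_within = ss_within / df_within       # estimates error alone

F = ms_between / ms_within              # 202.5 / 12.5 = 16.2
p = f.sf(F, df_between, df_within)      # upper-tail probability under F(1, 8)
print(F, p)                             # F well above 1, p < .05, so reject Ho
```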
ANOVA • Analysis of variance uses the ratio of two sources of variability to test the null hypothesis. • Between groups variability estimates both experimental error and treatment effects. • Within subjects variability estimates experimental error alone. • The assumptions that underlie this technique follow directly from the F-ratio.
Assumptions Underlying Analysis of Variance • The measurements taken are on an interval or ratio scale. • The populations are normally distributed. • The variances of the compared populations are the same. • The estimates of the population variance are independent.
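These assumptions can be given a rough check before running the analysis; the sketch below applies scipy's Shapiro-Wilk and Levene tests to the lecturer data purely as an illustration:

```python
from scipy import stats

a1 = [16, 18, 10, 12, 19]
a2 = [4, 6, 8, 10, 2]

# Normality of each group's scores (second assumption)
print(stats.shapiro(a1))
print(stats.shapiro(a2))

# Homogeneity of variance across groups (third assumption)
print(stats.levene(a1, a2))

# Non-significant p-values suggest the normality and equal-variance assumptions
# are not obviously violated; with samples this small the tests have little
# power, so this is only a rough screening, not a guarantee.
```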