Student’s t test and Nonparametric Statistics OT 667
Hypothesis testing defined A method for deciding whether an observed effect or result occurs by chance alone OR whether we can argue the result actually happened because of an intervention.
The Null Hypothesis In order to decide if the results of an experiment occur by chance or if the effects seen are the result of a treatment, researchers declare a null hypothesis (Ho) and an alternative or research hypothesis (Ha).
To test a hypothesis, researchers talk about “rejecting the null” in order to demonstrate the treatment has an effect OR “accepting the null” if the treatment does not have an effect.
When you reject the null, you say that there IS a significant difference between the groups, indicating the likelihood the treatment was effective.
When you accept the null, you conclude that the null hypothesis is correct: there is no difference between the groups.
Decisions to reject or accept the null…. • Based on comparing the calculated value of the statistic to the critical value at the chosen alpha level (the probability of rejecting a true null hypothesis); the null is rejected when the probability of the result is equal to or smaller than alpha • By tradition, .05 is the most common alpha level used to make this decision
The research question asked by the t test “Is there a difference on X between the two groups?”
What is the t test? • A parametric statistical test that analyzes the difference between the means of two groups.
Which levels of measurement allow you to calculate a mean? Interval and ratio
Assumptions • There are assumptions about the data that need to be considered when using the t test (checks for these are sketched below). These are • the data are normally distributed • the variances are homogeneous (similar) • the groups are of equal size
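As an illustration, the normality and equal-variance assumptions can be checked before running a t test. The sketch below uses SciPy's Shapiro-Wilk and Levene tests; the scores and the .05 cutoff are assumptions made up for the example, not part of the original slides.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two groups (made-up data for illustration only)
group_a = np.array([12, 15, 14, 10, 13, 16, 11, 14])
group_b = np.array([18, 17, 20, 15, 19, 16, 21, 17])

# Shapiro-Wilk test of normality for each group (p > .05 suggests normality is plausible)
for name, scores in [("A", group_a), ("B", group_b)]:
    stat, p = stats.shapiro(scores)
    print(f"Group {name}: Shapiro-Wilk W = {stat:.3f}, p = {p:.3f}")

# Levene's test for homogeneity of variances (p > .05 suggests similar variances)
stat, p = stats.levene(group_a, group_b)
print(f"Levene's test: W = {stat:.3f}, p = {p:.3f}")
```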
Two kinds of t tests • t test for paired samples - the subjects are measured on a variable, receive the treatment, then are measured again; the pre- and post-test means are compared. Also used with matched pairs and in twin studies. • t test for independent samples - comparison of the means of 2 different (unrelated) groups, for example treatment and control groups measured after the intervention
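A minimal sketch of both variants using SciPy; all scores below are hypothetical numbers invented for the example.

```python
import numpy as np
from scipy import stats

# Paired samples: the same subjects measured before and after a treatment
pre  = np.array([22, 25, 30, 28, 24, 26])
post = np.array([27, 29, 33, 30, 28, 31])
t_paired, p_paired = stats.ttest_rel(pre, post)
print(f"Paired t test: t = {t_paired:.2f}, p = {p_paired:.4f}")

# Independent samples: two different groups, each measured once
treatment = np.array([27, 29, 33, 30, 28, 31])
control   = np.array([23, 26, 28, 25, 24, 27])
t_ind, p_ind = stats.ttest_ind(treatment, control)
print(f"Independent t test: t = {t_ind:.2f}, p = {p_ind:.4f}")
```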
Calculating an independent samples t test The difference between the group means divided by the variability within the groups (the standard error of the difference)
The difference between the group means gives you the effect size (the magnitude of the difference between the two groups) The variance gives you the degree of variability within each group
Between group differences and within group differences are important factors to remember - they are used to calculate ANOVA as well as t tests.
Calculating a paired t test t = mean of the difference scores / standard error of the difference scores
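The two formulas can be written out directly. The sketch below (made-up data) computes the paired t as the mean of the difference scores over their standard error, and the independent t as the difference between group means over the pooled standard error, then checks both against SciPy.

```python
import numpy as np
from scipy import stats

# Paired t: mean of the difference scores / standard error of the difference scores
pre  = np.array([22, 25, 30, 28, 24, 26])
post = np.array([27, 29, 33, 30, 28, 31])
d = post - pre
t_paired = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Independent t: difference between group means / pooled standard error
treatment = np.array([27, 29, 33, 30, 28, 31])
control   = np.array([23, 26, 28, 25, 24, 27])
n1, n2 = len(treatment), len(control)
pooled_var = ((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
t_ind = (treatment.mean() - control.mean()) / np.sqrt(pooled_var * (1 / n1 + 1 / n2))

print(f"paired t by hand = {t_paired:.3f}, SciPy = {stats.ttest_rel(pre, post).statistic:.3f}")
print(f"independent t by hand = {t_ind:.3f}, SciPy = {stats.ttest_ind(treatment, control).statistic:.3f}")
```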
The number that results from a t test is called the “calculated value” of the test. This number is then compared in a table to the “critical value” using the alpha level set for the study.
Both point and interval estimates (confidence intervals) can be calculated for t tests.
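To illustrate the comparison of a calculated value against a critical value, and an interval estimate, the sketch below looks up the two-tailed critical t for alpha = .05 and builds a 95% confidence interval for the difference between two group means. The data and alpha level are assumptions for the example.

```python
import numpy as np
from scipy import stats

treatment = np.array([27, 29, 33, 30, 28, 31])
control   = np.array([23, 26, 28, 25, 24, 27])
n1, n2 = len(treatment), len(control)
df = n1 + n2 - 2

# Calculated value from the independent samples t test
t_calc = stats.ttest_ind(treatment, control).statistic

# Critical value for a two-tailed test at alpha = .05
alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df)
print(f"calculated t = {t_calc:.2f}, critical t = {t_crit:.2f}, reject H0: {abs(t_calc) > t_crit}")

# 95% confidence interval for the difference between the means
pooled_var = ((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / df
se_diff = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
diff = treatment.mean() - control.mean()
print(f"mean difference = {diff:.2f}, 95% CI = ({diff - t_crit * se_diff:.2f}, {diff + t_crit * se_diff:.2f})")
```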
There are different formulas to calculate the t statistic when variances between groups are equal and when they are unequal
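In SciPy the choice between the equal-variance (pooled) formula and the unequal-variance (Welch) formula is the equal_var flag; the groups below are made up, with the second group given a wider spread.

```python
from scipy import stats

treatment = [27, 29, 33, 30, 28, 31]
control   = [23, 26, 28, 25, 24, 27, 35, 14]  # wider spread: variances may differ

# Student's t test assumes equal variances (pooled formula)
print(stats.ttest_ind(treatment, control, equal_var=True))
# Welch's t test does not assume equal variances
print(stats.ttest_ind(treatment, control, equal_var=False))
```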
Multiple t tests • When you read a study where several t tests are used to test the same data, BEWARE… • For example, when repeated measures are taken (3 phases) and t tests are used to assess the differences between the first and second phase, then between the second and third, the risk of committing a Type I error (rejecting a true null, or finding a difference when there isn’t one) is increased.
Solutions for the problem • Perform an ANOVA • Adjust the alpha level using a Bonferroni correction - divide the alpha level by the number of comparisons (for two comparisons, .05 becomes .025) or simply set a lower level such as .01
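A minimal sketch of a Bonferroni correction, assuming three pairwise t tests across three hypothetical phases: the .05 alpha is divided by the number of comparisons before judging significance.

```python
from itertools import combinations
from scipy import stats

# Hypothetical repeated measures across three phases (made-up scores, same 6 subjects)
phases = {
    "phase1": [10, 12, 11, 13, 12, 14],
    "phase2": [13, 15, 14, 16, 15, 17],
    "phase3": [14, 15, 16, 17, 15, 18],
}

comparisons = list(combinations(phases, 2))
alpha = 0.05
alpha_adjusted = alpha / len(comparisons)  # Bonferroni: divide alpha by the number of tests

for a, b in comparisons:
    t, p = stats.ttest_rel(phases[a], phases[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, "
          f"significant at {alpha_adjusted:.4f}: {p < alpha_adjusted}")
```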
Parametric Tests vs. Nonparametric Tests • Parametric tests are based on assumptions made using the normal curve – normal distribution of data and homogenous or similar variances • Nonparametric tests are used when the data is not normally distributed or variances are dissimilar.
Criterion for Using Nonparametric Tests • Assumptions of normal distribution and homogeneity of variances cannot be made • Data is ordinal or nominal • Sample size is small (10 or fewer per group)
Comparable Parametric and Nonparametric Tests (parametric test → nonparametric counterpart) • Independent samples t test → Mann-Whitney U test • Paired t test → Wilcoxon Signed-Ranks Test (or the Sign Test) • One way ANOVA → Kruskal-Wallis one way analysis of variance by ranks • Factorial ANOVA → Friedman two way analysis of variance by ranks
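As a sketch of these pairings, the same two-group, paired, and multi-group comparisons can be run with their nonparametric counterparts in SciPy; all scores are hypothetical.

```python
from scipy import stats

# Two independent groups: Mann-Whitney U instead of the independent samples t test
treatment = [27, 29, 33, 30, 28, 31]
control   = [23, 26, 28, 25, 24, 27]
print(stats.mannwhitneyu(treatment, control, alternative="two-sided"))

# Paired measures: Wilcoxon signed-ranks instead of the paired t test
pre  = [22, 25, 30, 28, 24, 26]
post = [27, 29, 33, 30, 28, 31]
print(stats.wilcoxon(pre, post))

# Three or more groups: Kruskal-Wallis instead of one way ANOVA
print(stats.kruskal([10, 12, 11], [13, 15, 14], [16, 18, 17]))
```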
Hypothesis testing with nonparametric tests is the same procedure as with parametric tests.
Test Power • Parametric tests are seen as more powerful • Are often used with inappropriate data because of this • Need to assess the nature of the data carefully to decide if the appropriate test is being used
Statistical Power • Statistical power is the probability that a test will lead to rejecting the null (saying there IS a difference). • The more powerful a test, the less likely you are to make a Type II error.
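As an illustration of power for an independent samples t test, statsmodels can solve for the sample size needed to reach a target power; the effect size, alpha, and power targets below are arbitrary numbers chosen for the example.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many subjects per group are needed to detect a medium effect (d = 0.5)
# with alpha = .05 and power = .80?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group: {n_per_group:.1f}")

# Power achieved with 20 subjects per group under the same assumptions
power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"power with n = 20 per group: {power:.2f}")
```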
Chi Square • Is a nonparametric test • Is used to indicate whether the counts of observed events match theoretical expectations • Used with nominal or ordinal (categorical) data • Data are arranged in “cells” made up of rows and columns – each cell should have an expected count of at least 5 • The observations must be independent (not correlated – each subject counted in only one cell)
What if proportions are different? • The differences between observed and expected counts are tested to see whether they are large enough to be significant • The differences themselves can be standardized and then cited as standard deviation units
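A sketch of a chi-square test on a hypothetical 2 x 3 table of counts: SciPy returns the expected counts, and the standardized differences can then be computed as (observed - expected) / sqrt(expected).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are groups, columns are outcome categories
observed = np.array([[20, 15, 10],
                     [10, 18, 22]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
print("expected counts:\n", np.round(expected, 1))

# Standardized residuals: how far each cell is from expectation, in SD-like units
residuals = (observed - expected) / np.sqrt(expected)
print("standardized residuals:\n", np.round(residuals, 2))
```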
Fisher’s Exact Test • Chi square is typically used when the rows and columns have 3 or more categories • If the table is only 2 x 2 (two categories for each variable), a test called Fisher’s Exact Test is done instead • The interpretation is the same as for a chi-square procedure
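A minimal sketch of Fisher’s Exact Test on a hypothetical 2 x 2 table, using SciPy:

```python
from scipy.stats import fisher_exact

# Hypothetical 2 x 2 table: rows = group, columns = improved / not improved
table = [[8, 2],
         [3, 7]]

odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.4f}")
```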
McNemar’s Test • When the nominal measures are related (paired, such as the same subjects classified before and after a treatment), a test like chi-square can be carried out on the paired counts. • This is called McNemar’s Test.
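A sketch of McNemar’s Test on hypothetical paired yes/no responses, computed by hand from the discordant cells so the logic is visible (statsmodels also provides a mcnemar function); the counts are invented for the example.

```python
from scipy.stats import chi2

# Hypothetical paired data: counts of (before, after) yes/no classifications
#                 after yes   after no
# before yes          30          5     <- b = 5 changed from yes to no
# before no           15         50     <- c = 15 changed from no to yes
b, c = 5, 15

# McNemar's chi-square uses only the discordant pairs
statistic = (b - c) ** 2 / (b + c)
p = chi2.sf(statistic, df=1)
print(f"McNemar chi-square = {statistic:.2f}, p = {p:.4f}")
```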