Ordinal Tests • Non-parametric tests • No assumptions about the shape of the distribution • Useful when: • Scores are ranks • Parametric assumptions are violated • There are outliers
Frequently Used Ordinal Tests • Spearman’s Rank Correlation Coefficient (Chapter 16) • Mann-Whitney U-Test • Wilcoxon Signed-Ranks Test • Kruskal-Wallis H-Test • Friedman Test
Spearman’s Rank Correlation Coefficient (rs) • Designed to measure the relationship between variables measured on an ordinal scale of measurement • Alternative to the Pearson correlation • Intended for ordinal data, but can also be applied to interval or ratio data after converting scores to ranks • Because it operates on ranks, Spearman can be used for monotonic relationships that are nonlinear
Alternative Formula rs = 1 − (6Σd²) / (n(n² − 1)) where: n = number of items being ranked d = difference between the X rank and Y rank for each individual
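The difference-of-ranks formula above can be sketched in a few lines of Python. This is a minimal illustration with made-up data (the x and y values are assumptions, not taken from the slides), checked against scipy's built-in Spearman routine:

```python
import numpy as np
from scipy import stats

# Assumed example data (not from the slides)
x = np.array([3.0, 1.0, 4.0, 2.0, 5.0])
y = np.array([2.0, 1.0, 5.0, 3.0, 4.0])

rx = stats.rankdata(x)   # ranks of the X scores
ry = stats.rankdata(y)   # ranks of the Y scores
d = rx - ry              # difference between X rank and Y rank
n = len(x)

# rs = 1 - 6*sum(d^2) / (n*(n^2 - 1))
rs = 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))

rho, p = stats.spearmanr(x, y)
print(rs, rho)  # identical when there are no ties
```

With ties in the data, the difference formula is only an approximation; scipy's `spearmanr` applies the Pearson formula to the (tied) ranks, which is the more general approach.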
Example: Original Data
Mann-Whitney U-test • When to use: • Two independent samples in your experiment • Data have only ordinal properties (e.g. rating scale data) OR there is some other problem with the data • Non-normality • Non-homogeneity of variance
The Test Procedure • We compute two “U” values (UA and UB) based on the sum of the ranks for each sample: UA = nAnB + nA(nA + 1)/2 − RA UB = nAnB + nB(nB + 1)/2 − RB where: nA = number in sample A nB = number in sample B RA = sum of ranks, group A RB = sum of ranks, group B
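The two-U procedure can be sketched directly from pooled ranks. A minimal Python sketch with assumed data (the sample values below are illustrative, not the food-consumption data from the worked example):

```python
import numpy as np
from scipy import stats

# Assumed example samples (not the slides' data)
a = np.array([12.0, 15.0, 9.0, 20.0, 17.0])
b = np.array([8.0, 11.0, 13.0, 7.0, 10.0])

# Rank all scores in one pooled ranking
ranks = stats.rankdata(np.concatenate([a, b]))
ra = ranks[:len(a)].sum()   # sum of ranks, group A
rb = ranks[len(a):].sum()   # sum of ranks, group B
na, nb = len(a), len(b)

# UA = nA*nB + nA(nA+1)/2 - RA ; UB = nA*nB + nB(nB+1)/2 - RB
ua = na * nb + na * (na + 1) / 2 - ra
ub = na * nb + nb * (nb + 1) / 2 - rb
u = min(ua, ub)             # the test statistic is the smaller U
print(ua, ub, u)
```

A handy check on the arithmetic: the two U values always sum to nA·nB, so computing both catches ranking mistakes.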
Worked-Out Example • DV: amount of food consumed • nA = 10 • nB = 10
Wilcoxon Signed-Ranks Test • Each participant is observed twice • Compute difference scores • Analogous to the related-samples t-test
Preliminary Steps of the Test • Rank the difference scores by absolute value • Compute the sum of ranks for the “+” and “−” difference scores separately • If there are tied differences, use tied (averaged) ranks
Preliminary Steps of the Test • If a difference score is 0, there are two options: • ignore it and reduce n, or • do not discard it • Compromise: if there is only one difference score of 0, discard it. • If there is more than one, divide them evenly between the positive and negative ranks. It does not matter which is which, because they are all 0. • If there is an odd number of 0 differences, discard one and divide the rest evenly between positive and negative ranks.
The smaller sum is denoted T • T = smaller of T+ and T− • If H0 is true, the sums of the “+” and “−” ranks should be approximately equal
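The preliminary steps above can be sketched as follows. This is a minimal Python illustration with made-up before/after scores (assumptions, not the headache data), using the simple discard-zeros policy for zero differences:

```python
import numpy as np
from scipy import stats

# Assumed example scores (not from the slides)
before = np.array([10.0, 8.0, 12.0, 9.0, 11.0, 7.0])
after  = np.array([7.0, 8.0, 9.0, 5.0, 12.0, 6.0])

d = after - before                 # difference scores
d = d[d != 0]                      # discard zero differences, reducing n

ranks = stats.rankdata(np.abs(d))  # rank |d|; ties get averaged ranks
t_plus = ranks[d > 0].sum()        # sum of ranks of "+" differences
t_minus = ranks[d < 0].sum()       # sum of ranks of "-" differences
T = min(t_plus, t_minus)           # test statistic: the smaller sum
print(t_plus, t_minus, T)
```

A useful sanity check: T+ and T− must together sum to n(n + 1)/2, the total of all the ranks, so under H0 each should be close to half of that.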
Example Is there enough evidence to conclude that there is a difference in headache hours before and after the new drug? α = 0.01
Kruskal-Wallis Test • Used to test for differences between three or more treatment conditions from an independent-measures design • Analogous to the one-way independent-measures ANOVA EXCEPT data consist of ranks • Does not require the assumption of normally distributed populations H = [12 / (N(N + 1))] Σ (Ri² / ni) − 3(N + 1) where: Ri = sum of ranks for each group N = total sample size ni = sample size of the particular group
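The H formula can be computed by hand from pooled ranks and compared with scipy's built-in test. A minimal sketch with three assumed groups (illustrative values, not data from the slides):

```python
import numpy as np
from scipy import stats

# Assumed example data: three independent groups (not from the slides)
groups = [np.array([27.0, 2.0, 4.0, 18.0, 7.0]),
          np.array([20.0, 8.0, 14.0, 36.0, 21.0]),
          np.array([34.0, 31.0, 3.0, 23.0, 30.0])]

pooled = np.concatenate(groups)
ranks = stats.rankdata(pooled)   # one pooled ranking over all N scores
N = len(pooled)

# H = [12 / (N(N+1))] * sum(Ri^2 / ni) - 3(N+1)
start, sum_term = 0, 0.0
for g in groups:
    ni = len(g)
    Ri = ranks[start:start + ni].sum()  # sum of ranks for this group
    sum_term += Ri**2 / ni
    start += ni

H = 12 / (N * (N + 1)) * sum_term - 3 * (N + 1)
print(H, stats.kruskal(*groups).statistic)  # identical when no ties
```

With ties, `scipy.stats.kruskal` additionally applies a ties correction, so the hand computation and the library value diverge slightly in that case.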
Friedman Test A nonparametric test invented by Milton Friedman (the Nobel-prize-winning economist, July 31, 1912 – November 16, 2006) Used to test for differences between three or more treatment conditions from a dependent-measures design Analogous to the one-way repeated-measures ANOVA EXCEPT data consist of ranks Follows the χ² distribution when we have at least 10 scores in each of 3 columns or at least 5 scores in each of 4 columns
Friedman Test There are specific tables for Friedman’s test statistic for up to k = 5 variables Otherwise use chi-square tables, because Fr is distributed approximately as chi-square with df = k − 1 If χ²F ≥ the tabled value for df = k − 1, the result is significant, and we can say the difference in total ranks between the k conditions is not due to chance variation
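The Friedman statistic ranks each subject's scores across the k conditions, sums the ranks by column, and plugs the column totals into Fr = [12 / (nk(k + 1))] ΣRj² − 3n(k + 1). A minimal Python sketch with assumed scores (illustrative, not from the slides), checked against scipy:

```python
import numpy as np
from scipy import stats

# Assumed data: rows = subjects, columns = k treatment conditions
scores = np.array([[5.0, 7.0, 9.0],
                   [3.0, 4.0, 8.0],
                   [6.0, 5.0, 9.0],
                   [4.0, 6.0, 7.0],
                   [2.0, 5.0, 6.0]])

n, k = scores.shape
# Rank each subject's scores across the k conditions (row-wise)
ranks = np.apply_along_axis(stats.rankdata, 1, scores)
Rj = ranks.sum(axis=0)          # total rank for each condition

# Fr = [12 / (n*k*(k+1))] * sum(Rj^2) - 3*n*(k+1)
Fr = 12 / (n * k * (k + 1)) * np.sum(Rj**2) - 3 * n * (k + 1)

print(Fr, stats.friedmanchisquare(*scores.T).statistic)
```

Here Fr would be compared against the chi-square critical value for df = k − 1 = 2 (or against Friedman's exact tables for small n).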
Summary Table: Parametric Tests & Their Non-Parametric Counterparts • Pearson correlation → Spearman’s rank correlation • Independent-samples t-test → Mann-Whitney U-test • Related-samples t-test → Wilcoxon signed-ranks test • One-way independent-measures ANOVA → Kruskal-Wallis H-test • One-way repeated-measures ANOVA → Friedman test