Measurement Theory Emily H. Wughalter, Ed.D. Measurement & Evaluation Spring 2010
Reliability • Reliability means consistency in measurement from one trial, or testing occasion, to the next.
Validity • Validity means the degree to which a measure or score measures what it purports (is supposed) to measure.
For a measure to be valid it must be reliable; however, a measure can be reliable without being valid.
Reliability • X_observed = X_true + X_error
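A minimal sketch of this decomposition, assuming simulated scores in Python with NumPy; the variable names and the simulated variances are illustrative, not taken from the slides:

```python
import numpy as np

# Classical test theory sketch: X_observed = X_true + X_error,
# using simulated scores (all values here are illustrative assumptions).
rng = np.random.default_rng(0)

true_scores = rng.normal(loc=50, scale=10, size=1000)  # hypothetical true ability
error = rng.normal(loc=0, scale=5, size=1000)          # random measurement error
observed = true_scores + error                         # X_observed = X_true + X_error

# Reliability can be viewed as the proportion of observed-score variance
# that is true-score variance: var(true) / var(observed).
reliability = true_scores.var() / observed.var()
print(f"Estimated reliability: {reliability:.2f}")     # expected ≈ 10**2 / (10**2 + 5**2) = 0.80
```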
Factors affecting Reliability • Testing environment should be favorable • Testing environment should be well-organized • Administrator should be competent to run a favorable testing environment • Range of talent • Motivation of the performers
Factors affecting Reliability (cont.) • Good day vs bad day • Learning, forgetting, and fatigue • Length of the test • Test difficulty • Ability of the test to discriminate • Number of performers
Factors affecting Reliability (cont.) • Nature of measurement device • Selection of scoring unit • Precision • Errors in measurement • Number of trials • Recording errors
Factors affecting Reliability (cont.) • Classroom management • Warm-up opportunity
Two Ways to Measure Reliability • Test-Retest (Stability Reliability) • Internal Consistency
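A sketch of both approaches, assuming NumPy and made-up scores for five performers; using Cronbach's alpha for internal consistency is an assumption here, since the slide does not name a specific coefficient:

```python
import numpy as np

# Test-retest (stability): correlate scores from two administrations of the same test.
test1 = np.array([12, 15, 11, 18, 14])   # hypothetical first administration
test2 = np.array([13, 16, 10, 19, 15])   # hypothetical second administration
stability_r = np.corrcoef(test1, test2)[0, 1]
print(f"Test-retest (stability) r = {stability_r:.2f}")

# Internal consistency: Cronbach's alpha over the items (or trials) of a
# single administration; the item scores below are illustrative.
items = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 3, 3],
    [4, 4, 5],
])                                        # rows = performers, columns = items/trials
k = items.shape[1]
sum_item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```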
Difference Scores (change scores) • Difference or change scores should be avoided because of ceiling and floor effects. • Difference scores are highly unreliable.
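One way to see the unreliability problem, assuming the standard formula for the reliability of a difference score between two equally variable measures; the correlations plugged in are illustrative, not from the slides:

```python
# Reliability of a difference score D = X - Y (equal variances assumed):
#   r_dd = (r_xx + r_yy - 2*r_xy) / (2 - 2*r_xy)
def difference_score_reliability(r_xx, r_yy, r_xy):
    return (r_xx + r_yy - 2 * r_xy) / (2 - 2 * r_xy)

# Even with two fairly reliable tests (0.80 each), a moderate correlation
# between them (0.60) drives the reliability of the difference down to 0.50.
print(difference_score_reliability(0.80, 0.80, 0.60))   # 0.5
```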
Objectivity • Objectivity means interrater reliability or consistency from one person (rater) to another.
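A sketch of objectivity as agreement between two hypothetical raters scoring the same performances, summarized here with a simple correlation (other indices, such as an intraclass correlation, are also common; the choice and the scores are assumptions):

```python
import numpy as np

# Two hypothetical raters score the same five performances.
rater_a = np.array([8.5, 7.0, 9.0, 6.5, 8.0])
rater_b = np.array([8.0, 7.5, 9.0, 6.0, 8.5])

# Interrater reliability (objectivity) as the correlation between raters.
objectivity_r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"Interrater (objectivity) r = {objectivity_r:.2f}")
```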
Selecting a Criterion Score • A criterion score represents the score that will be used to represent an individual’s performance • The criterion score may be the mean of a series of trials. • The criterion score may be the best score for a series of trials.
Selecting a Criterion Score • Whether the criterion score is the best score, the mean of all trials, or the mean of the best 2 or 3 trials, the researcher must determine which of these choices yields the most reliable and most valid score.
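A sketch of the three options, assuming NumPy and one performer's illustrative trial scores where higher is better:

```python
import numpy as np

# One performer's five trials (illustrative values).
trials = np.array([14.2, 15.1, 13.8, 15.6, 14.9])

mean_of_all = trials.mean()                   # mean of all trials
best_score = trials.max()                     # single best trial
mean_of_best_3 = np.sort(trials)[-3:].mean()  # mean of the best 3 trials

print(f"Mean of all trials: {mean_of_all:.2f}")
print(f"Best single trial : {best_score:.2f}")
print(f"Mean of best 3    : {mean_of_best_3:.2f}")
```

As the slide notes, the choice among these candidate criterion scores depends on which one proves most reliable and most valid for the test in question.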
Causes of measurement error in Kinesiology • Inconsistent scorers • Inconsistent performance • Inconsistent measures • Failure to develop a good set of instructions • Failure to follow testing procedures