
Measurement Theory



Presentation Transcript


  1. Measurement Theory Emily H. Wughalter, Ed.D. Measurement & Evaluation Spring 2010

  2. Reliability • Reliability means the consistency of a measurement from one testing trial to the next.

  3. Validity • Validity means the degree to which a measure or score measures what it purports (is supposed) to measure.

  4. For a measure to be valid it must be reliable; however, a measure can be reliable without it being valid.

  5. Reliability • Observed score (Xobserved) • True score (XTrue) • Error score (XError)

  6. Xobserved = XTrue + XError
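The equation above can be sketched numerically. In this minimal simulation (the distributions and numbers are illustrative assumptions, not from the slides), each observed score is a stable true score plus random error, and reliability works out to the share of observed-score variance that comes from true scores:

```python
import random

random.seed(1)

# Classical test theory: observed = true + error.
true_scores = [random.gauss(50, 10) for _ in range(1000)]  # stable ability, SD = 10
errors      = [random.gauss(0, 5)  for _ in range(1000)]   # measurement noise, SD = 5
observed    = [t + e for t, e in zip(true_scores, errors)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Reliability = proportion of observed-score variance due to true scores,
# roughly 10^2 / (10^2 + 5^2) = 0.80 for these parameters.
reliability = variance(true_scores) / variance(observed)
print(round(reliability, 2))
```

Shrinking the error SD toward zero pushes the ratio toward 1.0, which matches the idea that a reliable test is one whose scores are dominated by true differences rather than noise.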

  7. Factors contributing to Reliability • Testing environment should be favorable • Testing environment should be well-organized • Administrator should be competent to run a favorable testing environment • Range of talent • Motivation of the performers

  8. Factors affecting reliability (Cont) • Good day vs bad day • Learning, forgetting, and fatigue • Length of the test • Test difficulty • Ability of the test to discriminate • Number of performers

  9. Factors affecting reliability (Cont) • Nature of measurement device • Selection of scoring unit • Precision • Errors in measurement • Number of trials • Recording errors

  10. Factors affecting reliability (Cont) • Classroom management • Warm-up opportunity

  11. Two Ways to Measure Reliability • Test-Retest (Stability Reliability) • Internal Consistency
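As a sketch of the test-retest (stability) approach, the same performers are measured on two occasions and the two sets of scores are correlated; a high correlation indicates stable measurement. The scores below are made-up illustrative data:

```python
# Scores for eight performers on two administrations of the same test.
test1 = [12, 15, 11, 18, 14, 16, 13, 17]
test2 = [13, 14, 12, 19, 13, 17, 12, 18]

def pearson_r(x, y):
    # Pearson product-moment correlation between two score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(test1, test2)
print(round(r, 2))  # close to 1.0 indicates high stability reliability
```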

  12. Difference Scores (change scores) • Difference or change scores should be avoided because of ceiling and floor effects. • Difference scores are highly unreliable.
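The unreliability of difference scores can be shown with a small simulation (the parameters are assumed for illustration): the pre-test and post-test each carry their own measurement error, and both errors pile up in the difference while the true gain is small, so the gain score's reliability collapses even when each single test is quite reliable:

```python
import random

random.seed(2)

n = 2000
true_pre  = [random.gauss(50, 10) for _ in range(n)]  # true pre-test ability
true_gain = [random.gauss(5, 2)   for _ in range(n)]  # small real improvement

def noisy(xs, sd=5):
    # Add independent measurement error to each true score.
    return [x + random.gauss(0, sd) for x in xs]

pre  = noisy(true_pre)
post = noisy([p + g for p, g in zip(true_pre, true_gain)])
diff = [b - a for a, b in zip(pre, post)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Reliability = true variance / observed variance for each score type.
rel_single = variance(true_pre) / variance(pre)    # roughly 100 / 125 = 0.80
rel_diff   = variance(true_gain) / variance(diff)  # roughly 4 / (4 + 25 + 25) = 0.07
print(round(rel_single, 2), round(rel_diff, 2))
```

The single test is dominated by true-score variance, but in the difference the two error variances add while most of the true-score variance cancels out, which is why change scores are so unreliable.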

  13. Objectivity • Objectivity means interrater reliability, or consistency from one person (rater) to another.

  14. Selecting a Criterion Score • A criterion score is the score that will be used to represent an individual’s performance. • The criterion score may be the mean of a series of trials. • The criterion score may be the best score from a series of trials.

  15. Selecting a Criterion Score • When selecting a criterion measure, whether it is the best score, the mean of all trials, or the mean of the best 2 or 3 trials, a researcher must determine which of the candidate measures yields the most reliable and most valid score.
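The three criterion-score options named above can be computed directly from one performer's trial data; the trial values here are made up for illustration:

```python
# Five trials for one performer (e.g., throw distances; higher is better).
trials = [14.2, 13.8, 15.1, 14.9, 13.5]

mean_all = sum(trials) / len(trials)              # mean of all trials
best = max(trials)                                # single best trial
best_two = sorted(trials, reverse=True)[:2]       # top 2 trials
mean_best2 = sum(best_two) / len(best_two)        # mean of best 2 trials

print(mean_all, best, mean_best2)
```

Which of these to report is exactly the judgment the slide describes: the mean of all trials smooths out random error, while the best score rewards a possibly lucky peak, so the researcher must check which option gives the most reliable and valid criterion for the test at hand.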

  16. Causes of measurement error in Kinesiology • Inconsistent scorers • Inconsistent performance • Inconsistent measures • Failure to develop a good set of instructions • Failure to follow testing procedures
