
Implications and Extensions of Rasch Measurement



  1. Implications and Extensions of Rasch Measurement

  2. New Rules of Measurement • The Rasch model has introduced several new “rules” of measurement, which are in stark contrast to the old rules.

  3. Rule 1: Standard Errors • Old Rule: • The standard error of measurement applies to all scores in a population. • "if the score distribution approaches normality, and if obtained scores do not extend over the entire possible range, the standard error of measurement is probably uniform at all score levels" (Guilford, 1965, p. 445). • New Rule: • The standard error of measurement varies across persons with different abilities/trait levels.
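Rule 1 can be demonstrated numerically. Under the Rasch model, the conditional standard error is the reciprocal square root of the test information, which is the sum of p(1 - p) across items. A minimal sketch, assuming a hypothetical 20-item test with difficulties spread evenly from -2 to +2 logits:

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def standard_error(theta, difficulties):
    """Conditional SE = 1 / sqrt(test information);
    test information is the sum of p*(1-p) over items."""
    info = sum(rasch_prob(theta, b) * (1 - rasch_prob(theta, b))
               for b in difficulties)
    return 1.0 / math.sqrt(info)

# Hypothetical 20-item test, difficulties evenly spaced from -2 to +2 logits.
items = [-2 + 4 * i / 19 for i in range(20)]
for theta in (-3.0, 0.0, 3.0):
    print(f"theta={theta:+.1f}  SE={standard_error(theta, items):.2f}")
```

The SE is smallest near the center of the item range and grows toward the extremes, which is exactly the pattern the uniform-SEM assumption hides.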

  4. Standard Error Across the Measurement Range

  5. Implications of Rule 1 • In classical test theory, standard errors of raw scores can lead one to believe that zero and perfect scores are perfectly estimated! • The opposite is the case in Rasch measurement. • In Rasch, each examinee measure has its own standard error, irrespective of who else, if anyone, takes the same test.

  6. Rule 2: Test Length and Reliability • Old Rule: • Longer tests are more reliable • New Rule: • Shorter tests can be more reliable than longer tests. • While a longer test with the same sort of items is more reliable, this does not preclude the possibility that a shorter test with different items could be equally or more reliable.
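Rule 2 can be illustrated with the same information arithmetic: a short, well-targeted test can yield a smaller standard error than a longer, poorly targeted one. The item counts and difficulties below are invented for illustration:

```python
import math

def se_at(theta, difficulties):
    """Conditional SE = 1/sqrt(test information) under the Rasch model."""
    info = 0.0
    for b in difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        info += p * (1 - p)
    return 1.0 / math.sqrt(info)

short_targeted = [0.0] * 10        # 10 items matched to the respondent
long_offtarget = [3.0] * 20        # 20 items far too difficult
print(se_at(0.0, short_targeted))  # ≈ 0.63
print(se_at(0.0, long_offtarget))  # ≈ 1.05
```

For a respondent at 0 logits, ten well-targeted items measure more precisely than twenty badly targeted ones.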

  7. Rule 3: Interchangeable Test Forms • Old Rule: • Comparing scores from different forms of an instrument requires test parallelism. • Test forms must be comparable in item difficulty. • New Rule: • Equating test forms that vary in item difficulty is not only possible, but it results in better estimation of trait levels.

  8. Rule 4: Item Properties • Old Rule: • Unbiased assessment of item properties (e.g., difficulty) requires representative samples from the target population. • New Rule: • Unbiased estimates of item properties may be obtained from unrepresentative samples.

  9. Rule 4 • Bias: incorrect decisions due to poor test-to-sample targeting. • Representative: The sample trait distribution matches the distribution of the population. • In Rasch measurement, unbiased estimates of item difficulty parameters can be obtained regardless of the way in which person measures are distributed.

  10. Rule 5: Meaningful Measures • Old Rule: • Meaningful interpretations of scores are obtained by comparing scores relative to a distribution (standardization sample). • Conversion of scores into t scores, percentiles. • New Rule: • Meaningful interpretations of measures are obtained by comparing the distance of measures to various items. • Item and person maps.

  11. Rule 6: Interval Measurement • Old Rule: • Interval measurement is achieved to the extent that items produce normally distributed scale scores. • New Rule: • Interval measurement is achieved to the extent that the data fit the Rasch model.

  12. Summary • The Rasch model, with its new rules of measurement, makes it possible to: • Achieve measurement that is free of the distributional properties of samples of persons and items. • More easily equate different instrument forms. • Analyze an item’s characteristics irrespective of other items or sample characteristics. • Create better and shorter instruments, including: • Computerized adaptive testing.

  13. Computerized Adaptive Testing

  14. Short vs. Long Instruments • [Slide graphic: drawbacks arrayed along an instrument-length axis.] • Short instruments: floor and ceiling effects; limited content validity; lack of precision; may lack specificity. • Long instruments: burden on respondent; redundant information. • Instruments are difficult to crosswalk without common items.

  15. Computer Adaptive Testing • A CAT works much like a trained clinical interviewer: • Selects questions based on the client’s previous responses. • Can cover a broad range of potential problems/diagnoses quickly. • Continues to ask questions until sufficient information for a diagnosis has been obtained.

  16. Benefits of CAT & Item Banking • [Slide diagram: CAT reduces respondent burden and improves tailoring/specificity; the item bank broadens coverage of content domains and reduces floor and ceiling effects.]

  17. Item Banking • [Slide diagram: items from Instruments A, B, and C form an item pool; the items are calibrated (Rasch/IRT) using data collected from a representative sample and combined into a single item bank.]

  18. CAT and the Rasch Model • The Rasch model is ideal as the underlying measurement model for CAT: • Standard errors can be estimated for each respondent independent of other respondents (Rule 1). • Shorter tests can be as reliable as longer tests (Rule 2). • CAT-based measures can be equated regardless of the items administered in each CAT session (Rule 3).

  19. Benefits of CAT • CAT provides a way to obtain precise measures while minimizing respondent burden. • Measures obtained with CAT can be directly compared even though respondents receive different sets of items. • Instruments measuring the same construct can be combined to form a larger item bank.

  20. Benefits of CAT • CAT of course shares the benefits of computer-based testing: • Standardized scoring procedures • Automated data entry • Immediate feedback • Automatic report generation • Greater privacy

  21. How Does CAT Work?

  22. CAT Process • [Slide diagram: typical pattern of responses. Administration begins with an item of middle difficulty; a correct response leads to an item of increased difficulty, an incorrect response to one of decreased difficulty. After each response the score is calculated and the next best item is selected based on item difficulty, until the measure is bracketed within ±1 standard error.]

  23. Logical Components of CAT • Start Rule • Item Selection • Measure Estimation • Stop Rule(s)

  24. The Start Rule • Used to select the first item. • What measure is assigned to the respondent prior to selecting the first item? • Can be an arbitrary value (e.g., 0 on the logit scale) or can be based on previously gathered information.

  25. Item Selection • Several methods available. • Common approach is to select item providing maximum information relative to the current measure. • Can be modified to include other criteria: • Content domains • Items needed for diagnosis

  26. Item Information • [Slide graph: item information curve for an item with difficulty 0.5. Information is maximal at trait level 0.5 and falls off where the item is too easy or too difficult for the respondent.]
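Under the Rasch model, an item's information at a given trait level is p(1 - p), which peaks at 0.25 exactly when the trait level equals the item difficulty. A short sketch using the difficulty of 0.5 from this slide:

```python
import math

def item_information(theta, b):
    """Rasch item information: p*(1-p); peaks when theta equals b."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1 - p)

b = 0.5  # item difficulty from the slide
for theta in (-1.5, 0.5, 2.5):
    print(f"theta={theta:+.1f}  info={item_information(theta, b):.3f}")
```

Information drops off symmetrically as the item becomes too easy or too difficult for the respondent, which is why maximum-information selection picks the item whose difficulty is closest to the current measure.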

  27. Item Selection • [Slide diagram: three candidate items; the item providing maximum information at the current measure is selected.]

  28. Estimating the Measure • Once an item is selected and a response to the item is obtained, the CAT system re-estimates the respondent’s measure and the standard error of measurement. • As with all Rasch measures, the measure estimated by CAT is on a logit scale ranging from negative to positive infinity.

  29. Estimation Methods • Maximum Likelihood • No distributional assumptions. • Cannot estimate measures for zero or perfect scores. • Bayesian • Assumes the latent trait has a given distribution (e.g., normal). • Easier to program. • Provides estimates for persons with extreme (zero or perfect) scores. • Measures at the extremes are biased.
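The maximum-likelihood option can be sketched as a Newton-Raphson iteration: at the estimate, the expected raw score equals the observed raw score, and the SE comes from the test information. This is a minimal illustration, not production estimation code; note how it refuses extreme scores, as the slide states:

```python
import math

def ml_estimate(difficulties, responses, theta=0.0, tol=1e-6):
    """Newton-Raphson ML estimate of a person measure from dichotomous
    Rasch responses. Undefined for zero or perfect raw scores."""
    score = sum(responses)
    if score == 0 or score == len(responses):
        raise ValueError("ML estimate undefined for extreme scores")
    info = 0.0
    for _ in range(100):
        ps = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        info = sum(p * (1 - p) for p in ps)   # test information
        step = (score - sum(ps)) / info       # observed minus expected score
        theta += step
        if abs(step) < tol:
            break
    return theta, 1.0 / math.sqrt(info)

# Two correct, one incorrect on items at -1, 0, +1 logits:
theta, se = ml_estimate([-1.0, 0.0, 1.0], [1, 1, 0])
print(f"theta={theta:.2f}  SE={se:.2f}")
```

At convergence the model-expected score matches the observed score of 2, which is the defining ML condition for the Rasch model.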

  30. Stop Rules • Determines when sufficient information has been collected • Types of Stop Rules • Measurement precision • Number of items administered • Test-taking time • Some combination of the above
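The four logical components just covered (start rule, item selection, measure estimation, stop rules) can be tied together in one minimal loop. Everything here is an illustrative sketch: the function names, the 0.7-logit nudge for extreme scores, the SE target of 0.45, and the simulated respondent are all assumptions, not part of the original slides:

```python
import math
import random

def prob(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def ml_update(difficulties, responses, theta):
    """Newton-Raphson ML re-estimation (valid for non-extreme scores)."""
    info = 0.0
    for _ in range(50):
        ps = [prob(theta, b) for b in difficulties]
        info = sum(p * (1 - p) for p in ps)
        step = (sum(responses) - sum(ps)) / info
        theta += step
        if abs(step) < 1e-6:
            break
    return theta, 1.0 / math.sqrt(info)

def run_cat(bank, answer, start=0.0, se_target=0.45, max_items=30):
    theta, se = start, float("inf")   # start rule: fixed origin, unknown SE
    used, resp = [], []
    avail = list(range(len(bank)))
    while avail:
        # item selection: max information = difficulty closest to theta
        i = min(avail, key=lambda j: abs(bank[j] - theta))
        avail.remove(i)
        used.append(bank[i])
        resp.append(answer(bank[i]))
        s = sum(resp)
        if s == 0 or s == len(resp):
            theta += 0.7 if s else -0.7   # ML undefined: nudge provisional measure
        else:
            theta, se = ml_update(used, resp, theta)  # measure estimation
        # stop rules: precision target reached, or item budget exhausted
        if se <= se_target or len(used) >= max_items:
            break
    return theta, se, len(used)

# Simulated respondent with true ability +1.0 logits; hypothetical 60-item bank.
random.seed(1)
bank = [-3 + 6 * i / 59 for i in range(60)]
simulate = lambda b: int(random.random() < prob(1.0, b))
theta, se, n = run_cat(bank, simulate)
print(f"theta={theta:.2f}  SE={se:.2f}  items={n}")
```

A real CAT system would add content-balancing constraints, exposure control, and more careful handling of extreme scores, but the control flow is essentially this loop.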

  31. Are CAT and Paper and Pencil Tests Equivalent? • Numerous studies have documented the equivalence of paper-and-pencil and CAT administration, including: • Equal ability estimates (Bergstrom, 1992) • Equal variances • High correlations (> .90) • CATs provide comparable and in some cases improved construct and predictive validity

  32. How Many Items? • Short Answer: The more, the better. • Not uncommon to have hundreds of items in an item bank. • Number of items will depend on • Stop rule used • Number of constructs or domains being assessed • Measurement range • Purpose of the CAT: to estimate a measure or classify persons into groups

  33. Comments • Even large item banks may fail to provide adequate precision over the entire measurement range (though they can come close). • Bank size matters, but so do item quality and the targeting of items to the intended population. • An important question is: • Where along the measurement continuum is precision most critical?

  34. Potential of CAT in Clinical Practice • Reduce respondent burden • Reduce staff resources • Reduce data fragmentation • Streamline complex assessment procedures

  35. Limitations of CAT • Expensive to develop and maintain • Reviewing/changing answers to previous items is usually not allowed, and when allowed can complicate CAT procedures.

  36. Recommended Readings • Wainer, H., Dorans, N.J., Flaugher, R., Green, B.F., Mislevy, R.J., Steinberg, L., & Thissen, D. (2000). Computerized Adaptive Testing: A Primer (2nd ed.). Mahwah, NJ: Lawrence Erlbaum. • van der Linden, W.J., & Glas, C.A.W. (Eds.) (2000). Computerized Adaptive Testing: Theory and Practice. Dordrecht: Kluwer. • Parshall, C.G., Spray, J.A., Kalohn, J.C., & Davey, T. (2002). Practical Considerations in Computer-Based Testing. New York: Springer-Verlag.
