
Presentation Transcript


  1. Choose the best items: A basic psychometric toolkit for testmakers. Warren Lambert, Vanderbilt Kennedy Center, February 2007

  2. Examples of Recent Test Development by KC Investigators • Peabody: two different tests of school-based reading ability; a test of school-based math skills; a battery of 10 new tests designed to track ongoing mental health treatment of children; very early signs of autism spectrum in infants • VUMC: psychological rigidity in children with behavior/emotional problems; somatizing in children with recurrent abdominal pain; a survey of attending MD satisfaction with a hospital department

  3. What Is a “Test”? • Could be a questionnaire, a set of items in a structured interview, or the signs & symptoms of something • Often a “fuzzy” construct with numerous imperfect indicators, e.g. the Beck Depression Inventory or the CBCL • Tests gain reliability by combining imperfect items into a total score • For today: a test is a set of items that produces a total score

  4. How to Identify the Best Items: A toolkit, not an analytic plan • Actually, flag weaker items to drop or revise; identify the weaker items using relative, not absolute, criteria • Classical test theory: floors or ceilings restrict variance; to increase Cronbach’s alpha, avoid low item-total correlations; guesstimate test length with the Spearman-Brown formula • Factor analysis (exploratory and confirmatory): are the items reasonably unidimensional? Avoid items that do not load on the main factor; see how well a confirmatory model fits • Rasch modeling: pick items that fit a carefully considered measurement model; consider item difficulties more deeply; pick items that suit the intended task

  5. Psychometrics vs. Statistics • Statistics: look for a statistical model that fits your data • Psychometric test construction: look for data that fit your statistical model • Choose sound measurement models and pick items that fit the model

  6. Unresolved Issues for Discussion • Role of confirmatory factor analysis? • Other approaches?

  7. Classical Test Theory (CTT) • Basic description of items • Can be done with SAS, SPSS, etc. • Do this routinely with scales, old or new

  8. Note Floors or Ceilings: The “Too Short” IQ Test (TS-IQ) • Mean, SD, and variance all indicate floors or ceilings, but kurtosis is very easy to spot • The “Too Short” IQ Test data set, with SAS and SPSS code, is available for download: http://kc.vanderbilt.edu/quant/Seminar/schedule.htm
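
The deck distributes SAS and SPSS code for this step; as a rough stand-in, a minimal pandas/scipy sketch of the same per-item descriptives might look like this (the file name and column layout are assumptions):

```python
# Minimal sketch: per-item descriptives to spot floors and ceilings.
# "ts_iq_items.csv" is a hypothetical file with one column per item.
import pandas as pd
from scipy.stats import kurtosis

items = pd.read_csv("ts_iq_items.csv")

summary = pd.DataFrame({
    "mean": items.mean(),      # near the minimum or maximum suggests a floor or ceiling
    "sd": items.std(),
    "variance": items.var(),
    "kurtosis": items.apply(lambda col: kurtosis(col, fisher=True)),  # large values are easy to spot
})
print(summary.round(2))
```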

  9. Hard, Medium, Easy Items (#1, #6, #10) • Measuring the entire population requires a range of item difficulties • Kurtosis for items #1, #6, #10: 11, −2, 3

  10. Use Excel Conditional Formatting to Flag Notable Values

  11. Retain Flagged Estimates of Quality: “Too Short IQ Test” (TS-IQ)

  12. Item-Total Correlations • If an item is uncorrelated with the other items, it doesn’t contribute to internal-consistency reliability • Software packages like SAS, SPSS, etc. will compute these easily

  13. Biological Age Index: Negative Item-Total Correlations Are Bad • The researcher forgot to “flip” the items on the left • Make sure all items run in the same direction, either high-is-good or high-is-bad
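
A minimal sketch of the “flip” step, assuming a 1-5 Likert response range and hypothetical item names:

```python
# Reverse-score items keyed in the opposite direction so that every item runs
# high-is-bad (or high-is-good) before computing item-total correlations.
import pandas as pd

items = pd.read_csv("bio_age_items.csv")         # hypothetical item-level data
reverse_keyed = ["item_03", "item_07"]           # hypothetical reverse-keyed items
items[reverse_keyed] = 6 - items[reverse_keyed]  # on a 1-5 scale: 1<->5, 2<->4, 3 stays 3
```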

  14. “TS-IQ”: Low Item-Total r’s Are Bad • SPSS RELIABILITY or SAS PROC CORR

  15. “Too Short” Item-Total Correlations (see SAS and SPSS code in handout) • Items with nothing in common would not have a reliable total score • Cronbach’s alpha measures internal-consistency reliability • Reliability increases with high item-total correlations
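
The handout’s SAS/SPSS code is not reproduced here; a minimal pandas sketch of corrected item-total correlations and Cronbach’s alpha (file name assumed) would be:

```python
# Corrected item-total correlations and Cronbach's alpha for a set of items.
import pandas as pd

items = pd.read_csv("ts_iq_items.csv")   # hypothetical: one column per item

total = items.sum(axis=1)
# "Corrected" item-total r: correlate each item with the total of the *other* items.
item_total_r = items.apply(lambda col: col.corr(total - col))
print(item_total_r.round(2))             # low or negative values flag weak items

k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var().sum() / total.var())
print(f"Cronbach's alpha = {alpha:.2f}")
```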

  16. How Many Items? Spearman-Brown’s predicted reliability = f(number of items) • Classical test theory: reliability increases with the number of items • Put the S-B formula into Excel to see approximately how many items you need for the desired reliability under CTT • Brown, W. (1910). Some experimental results in the correlation of mental abilities. British Journal of Psychology, 3, 296-322. • Spearman, C. (1910). Correlation calculated from faulty data. British Journal of Psychology, 3, 171-195.
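
The slide suggests putting the formula into Excel; the same arithmetic as a small Python sketch:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Spearman-Brown prophecy: predicted reliability when a test is lengthened
    by `length_factor` (e.g. 2.0 = twice as many parallel items)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Example: a test with alpha = .70 doubled in length is predicted to reach about .82.
print(round(spearman_brown(0.70, 2.0), 2))
```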

  17. Factor Analysis (FA) • We want to produce one or more single-factor tests • Use EFA (exploratory factor analysis) and CFA (confirmatory factor analysis)

  18. Scree Plot of the “Too Short IQ Test” • Run a principal components analysis with SAS, SPSS, etc. • The “scree” plot shows the eigenvalues; Cattell’s metaphor is a mountain with useless rubble at the bottom • Is there more than one big component? It is hard to get multiple factors (subtests) from the “Too Short IQ Test” • The Kaiser criterion (minimum eigenvalue > 1) is extremely liberal and makes unstable factors
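
The deck’s own examples use SAS/SPSS; a minimal Python sketch of the same scree plot, built from the eigenvalues of the item correlation matrix (file name assumed):

```python
# Scree plot: eigenvalues of the item correlation matrix, largest first.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

items = pd.read_csv("ts_iq_items.csv")                 # hypothetical item-level data
eigenvalues = np.linalg.eigvalsh(items.corr())[::-1]   # sorted descending

plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.axhline(1.0, linestyle="--")   # Kaiser line (eigenvalue > 1); shown, but too liberal to rely on
plt.xlabel("Component number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot of the TS-IQ items")
plt.show()
```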

  19. Exploratory Factor Analysis: VUMC researcher dropped items with low loadings on Factor I • Started with 50 items, picked the best 17 • Learning sample ≈ 330 • Validation sample ≈ 350

  20. VUMC Researcher’s Three Samples • N = 181 children rating understandability of items • N = 680 Psychometric sample • 2a. Random 50% exploratory sample (CTT, EFA) • 2b. Random 50% confirmatory sample (CTT, CFA)

  21. “Too Short IQ”: SAS CFA of a single-factor measurement model • Warning: so far, most VU tests early in their development haven’t met the high standards for measurement-model fit • RMSEA < .05 and CFI > 0.95 or 0.96 are very high standards of unidimensionality
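
For reference, the single-factor measurement model being tested can be written in standard CFA notation (not taken from the deck) as:

```latex
x_{j} = \lambda_{j}\,\eta + \varepsilon_{j}, \qquad j = 1, \dots, k
```

Each item x_j loads on one common factor η; fit indices such as RMSEA < .05 and CFI > .95 judge whether that single factor is enough to reproduce the item covariances.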

  22. Rasch-IRT Model • Measures the score for person and item in the same units • If you’re better than the item, p(right) > 50% • As (person − item) increases, p(right) increases in a logistic model
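
In symbols, the standard Rasch (one-parameter logistic) form of this statement is:

```latex
P(X_{ni} = 1) \;=\; \frac{e^{\,\theta_{n} - b_{i}}}{1 + e^{\,\theta_{n} - b_{i}}}
```

where θ_n is the person’s ability and b_i the item’s difficulty, both in logits on the same scale; when θ_n = b_i the probability of a correct response is exactly 50%.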

  23. WINSTEPS: One-parameter Rasch program (see http://www.winsteps.com), $200 ($99 on summer sale)

  24. Persons & Items on One Scale • The Rasch model measures each item and each person on the same scale • Concentrate your items where they are needed: measure everyone, or measure high clinical cases most efficiently • The TS-IQ measures across a wide range

  25. “TS-IQ” Item Information Spread Across the Whole Range • Easy items, like #10, are most informative about low-scoring individuals • Hard items, like #1, are most informative about high-scoring individuals • This test’s items spread out to describe the whole range of IQs
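
This “spread of information” claim follows from the Rasch item information function (a standard result, not spelled out in the deck):

```latex
I_{i}(\theta) \;=\; P_{i}(\theta)\,\bigl[1 - P_{i}(\theta)\bigr]
```

which peaks at θ = b_i, so each item is most informative about people whose ability is near its own difficulty; spreading item difficulties spreads information across the whole range.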

  26. VUMC Clinical Test Focuses on the Cutpoint, Unlike the TS-IQ • High is bad (sicker); clinical screens focus on sick people • The task is to classify (treat: yes/no), so the job is to be maximally informative at the cutpoint • This test invests its items in the severe range

  27. Rating Scale Model for Likert Scales: Separate estimates for never, sometimes, . . . • The TS-IQ is scored right/wrong, but Rasch and IRT also handle rating scales, such as Likert scales • The construct measured can be anything, e.g. depression, not just ability
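
For completeness, the Andrich rating scale model generalizes the dichotomous Rasch model to ordered categories 0, …, m with shared thresholds τ_k (standard form, not shown in the deck):

```latex
P(X_{ni} = x) \;=\;
\frac{\exp\!\Bigl(\sum_{k=1}^{x}\bigl[\theta_{n} - (b_{i} + \tau_{k})\bigr]\Bigr)}
     {\sum_{j=0}^{m}\exp\!\Bigl(\sum_{k=1}^{j}\bigl[\theta_{n} - (b_{i} + \tau_{k})\bigr]\Bigr)}
```

with the empty sum for x = 0 taken as zero; each threshold τ_k marks the step from category k−1 to k (e.g. from “never” to “sometimes”).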

  28. Putting It All Together: Items in the TS-IQ

  29. Putting It All Together • Many items are near the floor, and the lowest few have excessive kurtosis • However, item-total r’s and Rasch fit statistics are generally OK • The test maker can shorten this test with considerable latitude, e.g. guided by content analysis

  30. Putting It All Together • The last item fits the Rasch model poorly; consider dropping or revising it

  31. Putting It All Together • The test has one odd item that measures something else; drop or revise that item

  32. How to Identify the Best Items: A toolkit, not an analytic plan • Actually, flag weaker items to drop or revise; identify the weaker items using relative, not absolute, criteria • Classical test theory: floors or ceilings restrict variance; to increase Cronbach’s alpha, avoid low item-total correlations; use Spearman-Brown to gauge test length • Factor analysis: are the items reasonably unidimensional? Avoid items that do not load on the main factor; see how well a confirmatory model fits • Rasch modeling: pick items that fit a carefully considered measurement model; consider item difficulties more deeply; pick items that suit the intended task
