Trimming screening tests and modern psychometrics Paul K. Crane, MD MPH General Internal Medicine University of Washington
Outline • Background on screening tests • 2x2 tables • ROC curves • Consideration of strategies for shortening tests • A word or two on testlets
Purposes of measurement • Discriminative (e.g. screening) • Evaluative (e.g. longitudinal analyses) • Predictive (e.g. prognostication) • After Kirshner and Guyatt (1985)
Diagnostic medicine as a Bayesian task [figure: disease-probability axis with a don't-test region, a test region, and a treat region separated by two thresholds]
Diagnostic medicine as a Bayesian task [same figure, with the don't-test / test / treat regions numbered 1-3]
Screening tests are the same – only different • Screening implies applying the test to asymptomatic individuals in whom there is no specific reason to suspect the disease • In the previous slides, this means the test/don't test threshold is near 0 • Often the result of a screening test is a need for further testing rather than a specific diagnosis • Need for biopsy rather than need for chemotherapy
Screening tests [figure: disease-probability axis with regions labeled further screen (don't test), test, and treat]
Rationale for screening • Screening tests should be applied when they may make a difference • Effect on management • Some difference in outcome between disease detected in asymptomatic people as opposed to disease detected in symptomatic patients • (note people vs. patients) • If no difference in outcome, no benefit from expenditures on screening • Implies a disease model of worsening disease in which an intervention early prevents subsequent badness
Screening isn’t always a good idea • Lung cancer in smokers with CT scans (disease grows too fast) • Liver cancer with AFP in Hep C patients (yield too low, false negatives too common – the test isn’t accurate enough) • Breast cancer in young women (disease is less prevalent but more aggressive, and breast biology leads to higher false-positive rates in young women, which in combination lead to unacceptable morbidity for a negligible mortality benefit)
What about dementia? • No DMARD equivalents (so far); marginal benefit to early detection • Planning, QOL decisions, etc. • Potential harm in early detection? • There are those who advocate population-based screening now (Borson 2004) • USPSTF says evidence insufficient to recommend for or against screening • Spiegel letter to editor (2007); Brodaty paper (2006) • Primary purpose is research
What about CIND / MCI? • Even less rationale for population-based screening • In several studies, while patients with MCI as a group have an increased risk of progressing to AD, any given individual with MCI is more likely to revert to normal than to progress to AD • No intervention known to reduce rates of conversion • Again, the rationale is research
Research rationale • Parameter of interest is the rate of disease in the general population (or other denominator) • Most valid way: gold standard test applied to entire population • Chicago study, ROS, some others • Problems: expensive • General idea: apply a screening test / strategy to determine who should receive gold standard eval • Most of the epidemiological studies of cognitive functioning use this strategy
2-stage sampling • 1st stage: everyone in enumerated sample receives a screening test/strategy • 2nd stage: some decision rule is applied to the 1st stage results to identify people who receive further evaluations to definitively rule-in or rule-out disease • Analysis: disease status from 2nd stage extrapolated back to the underlying sample from the 1st stage
Variations on a theme • Simplest: single cut-point, no sampling over the cutpoint • EURODEM, ACT • Slight elaboration: single cut-point, 100% below, small % above • Can address possibility of false negatives • CSHA • Fancier still: age/education adjusted cutpoints, sampling (Cache County) (also case-cohort design, which is even more fancy)
Validity of the screening protocol • Imagine an epidemiological risk factor study • Risk factor is correlated with educational quality (e.g. smoking, obesity, untreated hypertension…) • Educational quality is associated with DIF on the screening test • People with lower education have lower scores for a given degree of actual cognitive deficit • Borderline people with higher education more likely to escape detection by the screening test; ignored by the study • Rates extrapolated back: biased study
DIF in screening tests • DIF thus becomes a key feature for validity of epidemiological investigations of studies that employ 2-stage sampling designs • Crane et al. Int Psychogeriatr 2006; 18: 505-15. • Overwhelmingly ignored in the literature • Entire session on epidemiological studies of HTN at VasCOG 2007 in which education and SES were not mentioned at all • Not really the focus this year, but could be an important feature of validation
Test accuracy in 2-stage sampling • Begg and Greenes. Assessment of diagnostic tests when disease verification is subject to selection bias. Biometrics. 1983;39:207-215 (web site) • Straight-forward way to extend back to the original population
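The Begg and Greenes correction can be sketched in a few lines. This is a minimal sketch, not their full method: it assumes verification depends only on the screen result, and all counts below are hypothetical.

```python
# Sketch of the Begg & Greenes (1983) idea: extrapolate disease status
# from the verified (2nd-stage) subsample back to the full 1st-stage
# sample, assuming verification depends only on the screen result.
# All counts below are hypothetical.

def begg_greenes(n_pos, n_neg, v_pos_d, v_pos_nd, v_neg_d, v_neg_nd):
    """n_pos/n_neg: screen-positives/negatives in the full 1st-stage sample.
    v_*: verified 2nd-stage counts, by screen result and true disease status."""
    # P(D | T+) and P(D | T-) estimated within the verified subsample
    p_d_pos = v_pos_d / (v_pos_d + v_pos_nd)
    p_d_neg = v_neg_d / (v_neg_d + v_neg_nd)
    n = n_pos + n_neg
    prevalence = (n_pos * p_d_pos + n_neg * p_d_neg) / n
    # corrected sensitivity = P(T+ | D), by Bayes' rule over the full sample
    sens = n_pos * p_d_pos / (n_pos * p_d_pos + n_neg * p_d_neg)
    spec = n_neg * (1 - p_d_neg) / (n_pos * (1 - p_d_pos) + n_neg * (1 - p_d_neg))
    return prevalence, sens, spec

# 1000 screened: 200 positive (all verified), 800 negative (80 verified)
prev, sens, spec = begg_greenes(200, 800, v_pos_d=120, v_pos_nd=80,
                                v_neg_d=8, v_neg_nd=72)
```

Note that the naive estimates from the verified subsample alone (sensitivity 120/128, prevalence 128/280) would be badly biased; weighting back to the full first-stage sample fixes this.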
Quality of papers on diagnostic tests • STARD initiative. Ann Intern Med 2003 (web site) • Provides a guideline for high-quality articles about diagnostic or screening tests • We should play by these rules • There is a checklist (p. 42) and a flow chart (p. 43; next slide) • Nothing too surprising • Reviews of the quality of papers about diagnostic and screening tests conclude that quality is terrible
Summaries of 2x2 tables: SN, SP • Sensitivity • TP/diseased • Proportion of those with disease caught by the test • Specificity • TN/non-diseased • Proportion of those who truly don’t have the disease correctly identified by the test
Summaries of 2x2 tables: PPV, NPV • Positive predictive value • TP/test positives • Proportion of those with a positive test who actually have the disease • Negative predictive value • TN/test negatives • Proportion of those with a negative test who actually don’t have the disease • (These are predictive values, not likelihood ratios: LR+ = sensitivity/(1 - specificity); LR- = (1 - sensitivity)/specificity)
SPIN, SNOUT • Need a (positive result on a) SPecific test to rule something IN • Need a (negative result on a) SENsitive test to rule something OUT • Decent rule of thumb, but it doesn’t incorporate pre-test probabilities
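The 2x2 summaries above can be computed in a few lines. A minimal sketch with made-up counts; the likelihood-ratio formulas are the standard ones:

```python
# Hypothetical 2x2 table: rows = test result, columns = true disease.
tp, fn = 45, 5     # diseased: caught / missed by the test
fp, tn = 90, 860   # non-diseased: false alarms / correct negatives

sensitivity = tp / (tp + fn)               # TP / diseased
specificity = tn / (tn + fp)               # TN / non-diseased
ppv = tp / (tp + fp)                       # P(disease | positive test)
npv = tn / (tn + fn)                       # P(no disease | negative test)
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
```

Notice the SPIN/SNOUT logic in the numbers: a sensitive test (here 0.90) makes a negative result reassuring (NPV near 1), but the PPV stays low (1/3) because disease is rare in the sample.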
ROC curves • "Signal Detection Theory" • World War II -- analysis of radar images • Radar operators had to decide whether a blip on the screen represented an enemy target, a friendly ship, or just noise • Signal detection theory measured the ability of radar receiver operators to do this, hence the name Receiver Operating Characteristic • In the 1970s, signal detection theory was recognized as useful for interpreting medical test results http://gim.unmc.edu/dxtests/roc3.htm
ROC basics • ROC curves plot • sensitivity vs. (1-specificity) • (the true positive rate vs. the false positive rate) • at each possible cutpoint • Useful for visualizing the impact of various potential cutoff points on a continuous measure (dichotomizing a continuous score) • Cutpoint choice is an economic decision; no single right answer
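The ROC construction above amounts to a short loop over cutpoints. A minimal sketch with made-up scores and disease labels, assuming lower scores mean more impairment (screen positive if score at or below the cut):

```python
# Minimal ROC sketch: sensitivity and 1-specificity at every cutpoint
# of a continuous score. Scores and labels are hypothetical.
scores  = [22, 25, 26, 28, 29, 30, 31, 33, 35, 36]
disease = [1,  1,  1,  0,  1,  0,  0,  0,  0,  0]

def roc_points(scores, disease):
    n_d = sum(disease)                  # diseased count
    n_nd = len(disease) - n_d           # non-diseased count
    points = []
    for cut in sorted(set(scores)):
        pos = [d for s, d in zip(scores, disease) if s <= cut]
        sens = sum(pos) / n_d                     # true positive rate
        fpr = (len(pos) - sum(pos)) / n_nd        # 1 - specificity
        points.append((fpr, sens, cut))
    return points

for fpr, sens, cut in roc_points(scores, disease):
    print(f"cut<={cut}: sens={sens:.2f}, 1-spec={fpr:.2f}")
```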
Limitations of ROC curves • Not intended to help with choosing particular items or for improving tests • Doesn’t tell us which parts of the test (items) are helpful in the region of interest • Doesn’t help us in combining the best items from several tests
ROC curve for dementia from ACT [figure: ROC curve with cutpoints from roughly 30 to 62 labeled along the curve; area under ROC curve = 0.9105]
Optimality from an ROC curve • Always tradeoffs between sensitivity and specificity • Also the number of people who need to be evaluated with the gold standard test (number of individuals who will screen positive) • Optimal point depends on consequences of missing cases (false negatives) and costs of working up false positives • Breast cancer: roughly a 10:1 ratio of false-positive workups to cancers detected is accepted to achieve sufficient sensitivity
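The cost tradeoff above can be made explicit: pick the cutpoint minimizing total cost under an assumed cost ratio. A sketch with a hypothetical 10:1 penalty for missing a case versus working up a false positive, on the same made-up scores as before:

```python
# Sketch: choose the cutpoint minimizing total cost, with a
# (hypothetical) 10:1 cost ratio of false negatives to false positives.
COST_FN, COST_FP = 10.0, 1.0

scores  = [22, 25, 26, 28, 29, 30, 31, 33, 35, 36]
disease = [1,  1,  1,  0,  1,  0,  0,  0,  0,  0]

def total_cost(cut):
    # screen positive if score <= cut
    fn = sum(1 for s, d in zip(scores, disease) if d == 1 and s > cut)
    fp = sum(1 for s, d in zip(scores, disease) if d == 0 and s <= cut)
    return COST_FN * fn + COST_FP * fp

best = min(sorted(set(scores)), key=total_cost)
```

With a different cost ratio (say 1:1) the chosen cutpoint shifts downward; there is no single right answer, only a right answer for a stated cost structure.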
What about normal/CIND/dementia? • Chengjie Xiong: ROC surface (Stat Med 2006; 25:1251-1273) • May have the same issues in terms of tradeoffs • Does missing a case of CIND have the same impact as missing a case of dementia? • Should we try to use the same tool to do both tasks? Dementia/normal is an easier target than CIND/normal. Dementia/CIND is hard and primarily depends on whether deficits have a functional impact, which in turn is very hard to tease out
III. Shortening of psychometric tests: strategies used in the literature
Search strategy • “short*” • “psychometric test*” • #1 AND #2 • Convenience sample of resulting articles • One or two examples of each technique
CTT strategies • Bengtssen et al (2007): item:total correlations >0.80; missing>5% • Standard CTT approaches to limiting an item pool (also commonly see low item:total correlations excluded) • Doesn’t use disease status
Brute force strategies • Christensen et al. (2007) looked at all pairs of 2 tests for each subdomain and compared based on alpha and correlation with the subdomain (Psychological Assessment; WAIS-3 SF)
Regression strategies • Regress on total score (for evaluative tests) or use logistic regression approaches • Sapin et al (2004) used linear regression to predict a longer measure; nice series of validation steps including an independent validation sample • Eberhard-Gran et al (2007) used stepwise linear regression; no external validation • Problems: overfitting, ignores collinearity of items • Need a 2nd confirmatory sample and/or some bootstrapping approach for model optimism • Also need a modeling strategy: forwards/backwards stepwise? Best subsets?
EFA strategies • Rosen et al. (2007) looked at loadings from EFA and chose items with the highest loadings • No use of external (disease status) information • Highest loadings (analogous to highest item discriminations) have nothing to do with item difficulty; may well end up selecting highly discriminatory items with no relevance to the disease/no disease distinction
CFA strategies • Bunn et al. (2007) used MPLUS: CFA on a new sample, modified paths and eliminated items to improve fit statistics • No independent sample confirmation • No disease status reference
IRT strategies • Gill et al. (2007) – Bayesian IRT but I can’t figure out how they reduced their scale • Beevers et al. (2007) used nonparametric IRT – single sample. If the items looked bad they threw them out. Psych Assessment • Both of these papers relied only on IRT parameters to reduce the scale, not anything external (e.g. disease status)
Combining IRT with external information • Combine item characteristic / information curves with some indicator of disease status • Paul’s old idea: ROC curves, identify region of interest, determine items with maximal information in that region • Rich’s new and simpler (and thus likely better) idea: box plots for diseased and non-diseased individuals superimposed on the ICCs/IICs • Takes advantage of the fact that item difficulty and person ability are on the same scale
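The "maximal information in the region of interest" idea can be sketched with the standard 2PL information function, I(theta) = a²P(theta)(1 − P(theta)). Item names, parameters, and the theta region of interest below are all hypothetical:

```python
# Sketch of "IRT + external information": compute 2PL item information
# and rank items at a theta of interest (e.g. near the screen's
# decision region identified from the ROC curve). All item parameters
# and names are hypothetical.
import math

items = {                    # name: (discrimination a, difficulty b)
    "orientation":    (1.8, -1.5),
    "delayed recall": (2.2, -0.5),
    "abstraction":    (1.2,  1.0),
}

def info(a, b, theta):
    """2PL item information: a^2 * P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

theta_roi = -0.75   # hypothetical region of interest for screening
ranked = sorted(items, key=lambda k: info(*items[k], theta_roi), reverse=True)
```

Information peaks at theta = b, so an item like "abstraction" with difficulty far above the decision region contributes little there no matter how discriminating it is; that is the point of targeting item selection to the region rather than to overall discrimination.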
Extensions of IRT / external information approaches • Targeted creation / addition of new items in particular regions of the theta scale seems like a reasonable strategy • We have only talked about fixed forms – CAT is a reasonable extension • CAT likely more relevant for evaluative tests • Could terminate early if results became clear – reduced burden for those not close to the threshold
Other strategies • Decision trees • A bit like PCA • Based entirely on relationships with disease • Random forests • Machine learning technique; extension of decision trees • Microarray and GWA applications • Jonathan Gruhl – expertise obtained since he first heard of this topic on Monday
General comments • Literature is pretty wide open • Seems like IRT provides some useful tools • IRT wedded to the distribution of scores of diseased / non-diseased individuals seems like a good strategy • Machine learning tools are interesting, but they ignore covariation between items and the theory-item link • Hope to compare/contrast strategies with CSHA data
Rationale • IRT posits unidimensionality: a single underlying latent trait (domain) explains observed covariation between items • Various tools to address this assumption • Literature essentially always concludes that scales are sufficiently unidimensional to do what the investigator wanted to do in the first place • See JS Lai, D Cella, P Crane, “Factor analysis techniques for assessing sufficient unidimensionality of cancer related fatigue.” Qual Life Res 2006; 15: 1179-90.
Dimensionality of cognitive screening tests • Initial MMSE and 3MS papers do not mention different cognitive domains • Initial CASI paper (also by Evelyn Teng) describes 9 domains: • long-term memory, • orientation, • attention, • concentration, • short-term memory, • language, • visual construction, • fluency, • abstraction and judgment