This study compares traditional meta-analysis to individual patient data (IPD) meta-analysis to evaluate the impact of selective reporting in studies of depression screening tools. It assesses whether selective reporting of data-driven cutoffs exaggerates accuracy, identifies predictable patterns of selective reporting, and explores the impact on sensitivity and specificity. The findings suggest that selective cutoff reporting can lead to exaggerated estimates of accuracy. The study also examines whether selective reporting transfers heterogeneity from sensitivity estimates to the cutoff scores that studies choose to report.
Selective cutoff reporting in studies of diagnostic test accuracy of depression screening tools: Comparing traditional meta-analysis to individual patient data meta-analysis
Brooke Levis, MSc, PhD Candidate
Jewish General Hospital and McGill University, Montreal, Quebec, Canada
Does Selective Reporting of Data-driven Cutoffs Exaggerate Accuracy? The Hockey Analogy
What is Screening? • Purpose: to identify otherwise unrecognised disease • By sorting out apparently well persons who probably have a condition from those who probably do not • Not diagnostic • Positive tests require referral for diagnosis and, as appropriate, treatment • A program, of which a test is one component Illustration: This information was originally developed by the UK National Screening Committee/NHS Screening Programmes (www.screening.nhs.uk) and is used under the Open Government Licence v1.0
The Patient Health Questionnaire (PHQ-9) • Depression screening tool • Scores range from 0 to 27 • Higher scores = more severe symptoms (see the scoring sketch below)
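The 0-to-27 range follows from the instrument's structure: nine items, each scored 0 ("not at all") to 3 ("nearly every day"). A minimal scoring sketch, with hypothetical item responses:

```python
# PHQ-9 total score: sum of nine items, each scored 0-3,
# giving a possible range of 0 (9 x 0) to 27 (9 x 3).
item_responses = [2, 1, 3, 0, 2, 1, 0, 2, 1]  # hypothetical patient responses

assert len(item_responses) == 9
assert all(0 <= r <= 3 for r in item_responses)

total = sum(item_responses)  # here: 12
print(f"PHQ-9 total score: {total}")
```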
Selective Reporting of Results Using Data-Driven Cutoffs • Extreme scenarios: • Cutoff of ≥ 0 • All subjects score at or above the cutoff, so all screen positive • sensitivity = 100% (but specificity = 0%) • Cutoff of ≥ 27 • Virtually all subjects score below the cutoff, so nearly all screen negative • specificity ≈ 100% (but sensitivity ≈ 0%) • Reporting whichever cutoff looks best in-sample exploits this trade-off (see the sketch below)
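A small simulation makes the trade-off concrete. The patient-level data below are entirely hypothetical (simulated scores, not data from the included studies); sweeping the cutoff shows sensitivity pinned at 100% at the bottom of the scale and specificity approaching 100% at the top:

```python
import numpy as np

# Hypothetical patient-level data: PHQ-9 totals and MDD status (1 = case).
rng = np.random.default_rng(42)
mdd = rng.integers(0, 2, size=200)
scores = np.clip(rng.poisson(np.where(mdd == 1, 14, 6)), 0, 27)

for cutoff in (0, 10, 27):
    positive = scores >= cutoff
    sens = positive[mdd == 1].mean()     # proportion of cases screening positive
    spec = (~positive)[mdd == 0].mean()  # proportion of non-cases screening negative
    print(f"cutoff >= {cutoff:2d}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```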
Does Selective Reporting of Data-driven Cutoffs Exaggerate Accuracy? • Sensitivity increases from cutoff of 8 to cutoff of 11 • For standard cutoff of 10, missing 897 cases (13%) • For cutoffs of 7-9 and 11, missing 52-58% of data Manea et al., CMAJ, 2012
Questions • Does selective cutoff reporting lead to exaggerated estimates of accuracy? • Can we identify predictable patterns of selective cutoff reporting? • Why does selective cutoff reporting appear to impact sensitivity, but not specificity? • Does selective cutoff reporting transfer the high heterogeneity in sensitivity (due to small numbers of cases) into heterogeneity in reported cutoff scores, leaving apparently homogeneous accuracy estimates?
Methods • Data source: • Studies included in a published traditional meta-analysis on the diagnostic accuracy of the PHQ-9 (Manea et al., CMAJ, 2012) • Inclusion criteria: • Unique patient sample • Published diagnostic accuracy results for MDD for at least one PHQ-9 cutoff • Data transfer: • Invited authors of the eligible studies to contribute their original patient data (de-identified) • Received data from 13 of 16 eligible datasets (80% of patients, 94% of MDD cases)
Methods • Data preparation • For each dataset, extracted PHQ-9 scores and MDD diagnostic status for each patient, and information pertaining to weighting • Statistical analyses (2 sets performed) • Traditional meta-analysis • For each cutoff between 7 and 15, included data only from the studies that reported accuracy results for that cutoff in the original publication • IPD meta-analysis • For each cutoff between 7 and 15, included data from all studies (see the selection sketch below)
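The two analysis sets differ only in which studies contribute at each cutoff. A schematic sketch of that selection rule, with hypothetical study names and published cutoffs:

```python
# Hypothetical per-study records: each study publishes accuracy results
# for only some cutoffs.
studies = [
    {"name": "Study A", "published_cutoffs": {9, 10, 11}},
    {"name": "Study B", "published_cutoffs": {10}},
    {"name": "Study C", "published_cutoffs": {8, 12, 15}},
]

def contributing_studies(studies, cutoff, ipd):
    """IPD meta-analysis uses every dataset at every cutoff; traditional
    meta-analysis uses only studies that published that cutoff."""
    if ipd:
        return [s["name"] for s in studies]
    return [s["name"] for s in studies if cutoff in s["published_cutoffs"]]

for cutoff in range(7, 16):
    trad = contributing_studies(studies, cutoff, ipd=False)
    print(f"cutoff {cutoff}: traditional = {trad or 'none'}, "
          f"IPD = all {len(studies)} studies")
```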
Methods • Model: Bivariate random-effects* meta-analysis models • Models sensitivity and specificity at the same time • Accounts for clustering by study • Provides an overall pooled sensitivity and specificity for each cutoff, for the 2 sets of analyses • Within each set of analyses, each cutoff requires its own model • Estimates between-study heterogeneity • Note: the model accounts for correlation between sensitivity and specificity at each threshold, but not for correlation of parameters across thresholds *Random-effects model: sensitivity and specificity assumed to vary across primary studies
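The bivariate model itself is typically fit with specialized software (e.g., a generalized linear mixed model). As a deliberately simplified, univariate stand-in, the sketch below pools logit sensitivity across studies with a DerSimonian-Laird random-effects estimator, which also yields a between-study heterogeneity estimate; the 2x2 counts are hypothetical:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling of study-level effect estimates."""
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)        # fixed-effect estimate
    q = np.sum(w * (effects - fixed) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = 1.0 / (variances + tau2)
    return np.sum(w_star * effects) / np.sum(w_star), tau2

# Hypothetical counts at one cutoff: true positives and false negatives per study.
tp = np.array([40.0, 25.0, 60.0])
fn = np.array([10.0, 5.0, 20.0])

logit_sens = np.log((tp + 0.5) / (fn + 0.5))     # 0.5 continuity correction
var_logit = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)  # approximate within-study variance

pooled_logit, tau2 = dersimonian_laird(logit_sens, var_logit)
pooled_sens = 1.0 / (1.0 + np.exp(-pooled_logit))
print(f"pooled sensitivity = {pooled_sens:.3f}, tau^2 = {tau2:.3f}")
```

Unlike this univariate sketch, the bivariate model used in the study estimates the pooled logit sensitivity and logit specificity jointly, with correlated study-level random effects.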
Why Sensitivity Changes with Moving Cutoffs, but Not Specificity • Sensitivity is estimated from the small number of MDD cases, so it fluctuates substantially as the cutoff moves and from study to study • Specificity is estimated from the much larger number of non-cases, so it remains stable across nearby cutoffs (see the simulation sketch below)
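A back-of-the-envelope simulation illustrates the asymmetry; the accuracy value and group sizes below are hypothetical, chosen only to reflect the typical imbalance between cases and non-cases in DTA studies:

```python
import numpy as np

rng = np.random.default_rng(0)
true_acc = 0.85
n_cases, n_noncases = 30, 270  # few cases, many non-cases

# Sampling variability of a proportion estimated from each group size.
sens_hat = rng.binomial(n_cases, true_acc, size=5000) / n_cases
spec_hat = rng.binomial(n_noncases, true_acc, size=5000) / n_noncases

print(f"SD of sensitivity estimates (n={n_cases}):    {sens_hat.std():.3f}")
print(f"SD of specificity estimates (n={n_noncases}): {spec_hat.std():.3f}")
# With few cases, sensitivity estimates are noisy, so reporting only the
# best-looking cutoff can exaggerate sensitivity; specificity barely moves.
```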
Summary • Selective cutoff reporting in depression screening tool DTA studies may distort accuracy estimates across cutoffs • In traditional meta-analyses, it leads to exaggerated estimates of accuracy • These distortions were relatively minor for the PHQ-9, but would likely be much larger for other measures where standard cutoffs are less consistently reported and more data-driven reporting appears to occur (e.g., HADS) • IPD meta-analysis can address this problem and also allows accuracy to be evaluated within subgroups
Summary • STARD (Standards for Reporting of Diagnostic Accuracy Studies) is undergoing revision: • It should require precision-based sample size calculations to avoid very small samples (particularly small numbers of cases) and unstable estimates • It should require reporting of the full spectrum of cutoffs, which is easily done with online appendices
Acknowledgements DEPRESSD Investigators • Brett Thombs • Andrea Benedetti • Roy Ziegelstein • Pim Cuijpers • Simon Gilbody • John Ioannidis • Alex Levis • Danielle Rice • Scott Patten • Dean McMillan • Ian Shrier • Russell Steele • Lorie Kloda Other Contributors