Chapter 7 – Prognostic Tests Chapter 8 – Combining Tests and Multivariable Decision Rules Michael A. Kohn, MD, MPP 10/29/2009
Outline of Topics • Prognostic Tests • Differences from diagnostic tests • Quantifying prediction: calibration and discrimination • Value of prognostic information • Comparing predictions • Combining Tests/Diagnostic Models • Importance of test non-independence • Recursive Partitioning • Logistic Regression • Variable (Test) Selection • Importance of validation separate from derivation
Prognostic Tests (Ch 7)* • Differences from diagnostic tests • Validation/quantifying accuracy (calibration and discrimination) • Assessing the value of prognostic information • Comparing predictions by different people or different models *Will not discuss time-to-event analysis or predicting continuous outcomes. (Covered in Chapter 7.)
Chance determines whether you get the disease Spin the needle
Diagnostic Test • Spin needle to see if you develop disease. • Perform test for disease. • Gold standard determines true disease state. (Can calculate sensitivity, specificity, LRs.)
Prognostic Test • Perform test to predict the risk of disease. • Spin needle to see if you develop disease. • How do you assess the validity of the predictions?
Example: Mastate Cancer Once developed, always fatal. Can be prevented by mastatectomy. Two oncologists separately assign each of N individuals a risk for developing mastate cancer in the next 5 years.
How many like this? Oncologist 1 assigns risk of 50% Spin the needles. How many get mastate cancer?
How many like this? Oncologist 1 assigns risk of 35% Spin the needles. How many get mastate cancer?
How many like this? Oncologist 1 assigns risk of 20% Spin the needles. How many get mastate cancer?
Calibration* How accurate are the predicted probabilities? • Break the population into groups • Compare actual and predicted probabilities for each group *Related to goodness-of-fit and diagnostic model validation, which will be discussed shortly.
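The grouping-and-comparison step can be sketched in a few lines of Python. This is my own illustration, not code from the chapter; the function name, cutpoints, and data below are invented.

```python
# Illustrative calibration check: bin subjects by predicted risk, then compare
# each bin's mean predicted risk with its observed event rate.
# (Hypothetical helper -- names and data are not from the chapter.)

def calibration_table(predicted, observed, cutpoints):
    """Return (lo, hi, n, mean predicted risk, observed rate) per risk group."""
    rows = []
    for lo, hi in cutpoints:
        idx = [i for i, p in enumerate(predicted) if lo <= p < hi]
        if not idx:
            continue  # skip empty risk groups
        mean_pred = sum(predicted[i] for i in idx) / len(idx)
        obs_rate = sum(observed[i] for i in idx) / len(idx)
        rows.append((lo, hi, len(idx), mean_pred, obs_rate))
    return rows

# Well-calibrated predictions have mean_pred close to obs_rate in every group.
groups = calibration_table(
    predicted=[0.2, 0.2, 0.2, 0.2, 0.5, 0.5, 0.5, 0.5],
    observed=[0, 0, 0, 1, 1, 1, 0, 0],
    cutpoints=[(0.0, 0.35), (0.35, 1.01)],
)
```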
Discrimination How well can the test separate subjects in the population from the mean probability toward values closer to 0 or 1? • May be more generalizable • Often measured with the C-statistic (AUROC)
Discrimination AUROC = 0.63
True Risk • Oncologist 1: 20%, Oncologist 2: 20% → True risk: 11.1% • Oncologist 1: 35%, Oncologist 2: 20% → True risk: 16.7% • Oncologist 1: 50%, Oncologist 2: 20% → True risk: 33.3%
True Risk -- Discrimination AUROC = 0.63
The random event occurs AFTER the prognostic test: 1) Perform test to predict the risk of disease. 2) Spin needle to see if you develop disease. Only a crystal ball allows perfect prediction.
Maximum AUROC True Risk: 11.1% True Risk: 16.7% True Risk: 33.3% Maximum AUROC = 0.65
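The 0.65 ceiling can be reproduced from the three true risks. Below is my own sketch (not from the chapter) of the AUROC that a perfect risk predictor achieves when outcomes are random within risk groups; ties within each group are what keep the value below 1.

```python
# Maximum achievable AUROC when each subject's best possible "score" is their
# true risk: 100 subjects per group at true risks 1/9, 1/6, and 1/3
# (the 11.1%, 16.7%, and 33.3% on the slide). Sketch for illustration only.

def max_auroc(groups):
    """groups: list of (n_subjects, true_risk). AUROC of the true risks,
    using expected counts of D+ and D- within each group."""
    pos = [(n * r, r) for n, r in groups]        # expected D+ per group
    neg = [(n * (1 - r), r) for n, r in groups]  # expected D- per group
    concordant = ties = 0.0
    for n_pos, r_pos in pos:
        for n_neg, r_neg in neg:
            if r_pos > r_neg:
                concordant += n_pos * n_neg
            elif r_pos == r_neg:
                ties += n_pos * n_neg
    total_pairs = sum(n for n, _ in pos) * sum(n for n, _ in neg)
    return (concordant + 0.5 * ties) / total_pairs

auc = max_auroc([(100, 1/9), (100, 1/6), (100, 1/3)])  # ~0.65
```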
Diagnostic versus Prognostic Tests

                          Diagnostic test             Prognostic test
Purpose                   Identify prevalent disease  Predict incident disease/outcome
Disease/outcome occurs    Prior to test               After test
Typical study design      Cross-sectional             Cohort
Result                    +/-, ordinal, continuous    Risk (probability)
Maximum AUROC             1                           <1 (not clairvoyant)
Value of Prognostic Information Why do you want to know risk of mastate cancer? To decide whether to do a mastatectomy.
Value of Prognostic Information • It is 4 times worse to die of mastate gland cancer than to have a mastatectomy. • B + C = 4C • Ptt = C/(B+C) = C/4C = 0.25 = 25%
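The slide's threshold arithmetic is small enough to express directly. A minimal sketch (variable names are mine): B is the net benefit of treating a patient who would develop mastate cancer, C the net cost of an unnecessary mastatectomy.

```python
# Treatment threshold from the slide: "dying of mastate cancer is 4 times worse
# than a mastatectomy" means B + C = 4C, i.e. B = 3C.  Sketch for illustration.

def treatment_threshold(B, C):
    """Treat when the predicted risk exceeds C / (B + C)."""
    return C / (B + C)

C = 1.0
B = 3.0 * C                       # from B + C = 4C
print(treatment_threshold(B, C))  # prints 0.25
```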
Value of Prognostic Information: 300 patients (100 per risk group) • Oncologist 1: 31% > 25% → Mastatectomy; 89 out of 100 unnecessary, no mastate cancer deaths • Oncologist 1: 37% > 25% → Mastatectomy; 83 out of 100 unnecessary, no mastate cancer deaths • Oncologist 1: 53% > 25% → Mastatectomy; 67 out of 100 unnecessary, no mastate cancer deaths
Value of Prognostic Information: 300 patients (100 per risk group) • Oncologist 2: 20% < 25% → No mastatectomy; 11 out of 100 die of mastate cancer, no mastatectomies • Oncologist 2: 20% < 25% → No mastatectomy; 17 out of 100 die, no mastatectomies • Oncologist 2: 20% < 25% → No mastatectomy; 33 out of 100 die, no mastatectomies
Value of Prognostic Information: 300 patients (100 per risk group) • True risk: 11% < 25% → No mastatectomy; 11 out of 100 die of mastate cancer, no mastatectomies • True risk: 17% < 25% → No mastatectomy; 17 out of 100 die, no mastatectomies • True risk: 33% > 25% → Mastatectomy; 67 out of 100 unnecessary, no mastate cancer deaths
Value of Prognostic Information • Doctors and patients like prognostic information, but its value is hard to assess • The most objective approach is decision-analytic • Consider: What decision is to be made? What are the costs of errors? What is the cost of the test?
Comparing Predictions • Compare ROC curves and AUROCs • Reclassification tables*, Net Reclassification Improvement (NRI), Integrated Discrimination Improvement (IDI) • See the Jan. 30, 2008 issue of Statistics in Medicine* *Pencina et al. Stat Med. 2008 Jan 30;27(2):157-72.
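As a rough illustration of the NRI idea, a sketch in Python (the function and all counts below are invented for illustration, not taken from Pencina et al. or the chapter): among subjects with the event, reclassification into a higher risk category by the new model is credited; among subjects without the event, reclassification downward is credited.

```python
# Net Reclassification Improvement, event/non-event form:
# NRI = [P(up|event) - P(down|event)] + [P(down|non-event) - P(up|non-event)]
# Hypothetical counts for illustration only.

def nri(up_events, down_events, n_events,
        up_nonevents, down_nonevents, n_nonevents):
    """NRI from counts of subjects moved up/down a risk category."""
    return ((up_events - down_events) / n_events
            + (down_nonevents - up_nonevents) / n_nonevents)

# 100 events: 15 moved up, 5 down; 200 non-events: 10 up, 20 down.
improvement = nri(15, 5, 100, 10, 20, 200)  # 0.10 + 0.05
```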
Common Problems with Studies of Prognostic Tests See Chapter 7
Combining Tests/Diagnostic Models • Importance of test non-independence • Recursive Partitioning • Logistic Regression • Variable (Test) Selection • Importance of validation separate from derivation (calibration and discrimination revisited)
Combining Tests: Example Prenatal sonographic Nuchal Translucency (NT) and Nasal Bone Exam as dichotomous tests for Trisomy 21* *Cicero, S., G. Rembouskos, et al. (2004). "Likelihood ratio for trisomy 21 in fetuses with absent nasal bone at the 11-14-week scan." Ultrasound Obstet Gynecol 23(3): 218-23.
If NT ≥ 3.5 mm, call it positive for Trisomy 21* *What’s wrong with this definition?
In general, don’t make multi-level tests like NT into dichotomous tests by choosing a fixed cutoff • I did it here to make the discussion of multiple tests easier • I arbitrarily chose to call ≥ 3.5 mm positive
One Dichotomous Test

Nuchal Translucency   Trisomy 21 D+   D-     LR
≥ 3.5 mm              212             478    7.0
< 3.5 mm              121             4745   0.4
Total                 333             5223

Do you see that the LR of 7.0 is (212/333)/(478/5223)? Review of Chapter 3: What are the sensitivity, specificity, PPV, and NPV of this test? (Be careful.)
Nuchal Translucency • Sensitivity = 212/333 = 64% • Specificity = 4745/5223 = 91% • Prevalence = 333/(333+5223) = 6% (Study population: pregnant women about to undergo CVS, so high prevalence of Trisomy 21) PPV = 212/(212 + 478) = 31% NPV = 4745/(121 + 4745) = 97.5%* * Not that great; prior to test P(D-) = 94%
Clinical Scenario – One Test. Pre-test probability of Down’s = 6%; NT positive. • Pre-test prob: 0.06 • Pre-test odds: 0.06/0.94 = 0.064 • LR(+) = 7.0 • Post-test odds = pre-test odds × LR(+) = 0.064 × 7.0 = 0.44 • Post-test prob = 0.44/(0.44 + 1) = 0.31
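The odds-LR update above generalizes to a tiny helper (my own sketch, not from the chapter):

```python
# Probability -> odds, multiply by the likelihood ratio, odds -> probability.

def post_test_prob(pre_prob, lr):
    """Post-test probability after one result with likelihood ratio lr."""
    pre_odds = pre_prob / (1 - pre_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

p = post_test_prob(0.06, 7.0)  # the slide's scenario: ~0.31
```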
NT Positive • Pre-test Prob = 0.06 • P(Result|Trisomy 21) = 0.64 • P(Result|No Trisomy 21) = 0.09 • Post-Test Prob = ? http://www.quesgen.com/Calculators/PostProdOfDisease/PostProdOfDisease.html Slide Rule
Nasal bone seen (NBA = “No”): negative for Trisomy 21 • Nasal bone absent (NBA = “Yes”): positive for Trisomy 21
Second Dichotomous Test

Nasal Bone Absent   Tri21+   Tri21-   LR
Yes                 229      129      27.8
No                  104      5094     0.32
Total               333      5223

Do you see that the LR of 27.8 is (229/333)/(129/5223)?
Clinical Scenario – Two Tests, Using Probabilities • Pre-test probability of Trisomy 21 = 6% • NT positive for Trisomy 21 (≥ 3.5 mm) • Post-NT probability of Trisomy 21 = 31% • NBA positive (no bone seen) • Post-NBA probability of Trisomy 21 = ?
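A sketch of the naive serial-LR calculation for this scenario, written by me for illustration. It assumes NT and NBA are conditionally independent given Trisomy 21 status; the outline's point about test non-independence is exactly that this assumption may not hold, in which case chaining the LRs like this overstates the post-test probability.

```python
# Chain likelihood ratios through the odds scale, one result at a time.
# Valid only if the tests are conditionally independent given disease status.

def chain_lrs(pre_prob, lrs):
    """Post-test probability after applying each LR in sequence."""
    odds = pre_prob / (1 - pre_prob)
    for lr in lrs:
        odds *= lr
    return odds / (1 + odds)

# NT positive (LR 7.0), then nasal bone absent (LR 27.8), pre-test prob 6%.
p = chain_lrs(0.06, [7.0, 27.8])  # ~0.93 IF the independence assumption held
```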