Chapter 7 – Prognostic Tests Chapter 8 – Combining Tests and Multivariable Decision Rules Michael A. Kohn, MD, MPP 10/25/2012
Outline of Topics • Prognostic Tests • Differences from diagnostic tests • Quantifying prediction: calibration and discrimination • Value of prognostic information • Comparing predictions • Example: ABCD2 Score • Combining Tests/Diagnostic Models • Importance of test non-independence • Recursive Partitioning • Logistic Regression • Variable (Test) Selection • Importance of validation separate from derivation
Prognostic Tests (Ch 7)*
• Differences from diagnostic tests
• Validation/quantifying accuracy (calibration and discrimination)
• Assessing the value of prognostic information
• Comparing predictions by different people or different models
*Will not discuss time-to-event analysis or predicting continuous outcomes. (Covered in Chapter 7.)
Diagnostic, Prognostic, and Screening Tests • Diagnostic tests – for prevalent disease • Patient is sick (symptomatic). • Prognostic tests – for incident outcomes • Patient may or may not be sick • Random events occurring after the test determine whether the outcome ensues • Screening tests • Patient is not known to be sick • Diagnostic – for unrecognized symptomatic disease • Prognostic – for risk factors • Diagnostic/Prognostic – for pre-symptomatic disease
Prognostic vs. Diagnostic Tests
• Prognostication ≠ etiology
• Risk factor: causes the disease; reducing it may prevent disease; confounding is a crucial issue in observational studies.
• Risk marker (i.e., prognostic factor): predicts the disease; need not be concerned about unmeasured confounders.
• Not all risk markers are risk factors… (e.g., CRP)
Prognostic vs. Diagnostic Tests • How is a prognostic test different from a diagnostic test?
Spin the needle: chance determines whether you get the disease/outcome.
Diagnostic Test • Spin needle to see if you develop disease. • Perform test for disease. • Gold standard determines true disease state. (Can calculate sensitivity, specificity, LRs.) • Time frame is cross-sectional
Prognostic Test • Perform test to predict the risk of disease/outcome. • Spin needle to see if you develop disease/outcome. • Time frame is longitudinal (cohort) • How do you assess the validity of the predictions?
Example: Mastate Cancer
Once developed, always fatal. Can be prevented by mastatectomy. Two oncologists separately assign each of N individuals a risk of developing mastate cancer in the next 5 years.
N = 100. Oncologist 1 assigns risk of 50%; Oncologist 2 assigns risk of 20%. Spin the needles: 33 get mastate cancer.
Another 100 like this. Oncologist 1 assigns risk of 35%; Oncologist 2 assigns risk of 20%. Spin the needles: 17 get mastate cancer.
Another 100 like this. Oncologist 1 assigns risk of 20%; Oncologist 2 assigns risk of 20%. Spin the needles: 11 get mastate cancer.
Calibration*
How accurate are the predicted probabilities?
• Break the population into groups.
• Compare actual and predicted probabilities for each group.
*Related to goodness-of-fit and diagnostic model validation.
Calibration – Bland-Altman Plot (Oncologist 1)
Mean bias = -14.7% (predicted risk systematically high); SD of errors = 4% (0.04).
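The bias and spread statistics above can be reproduced directly from the example's grouped data. Below is a minimal sketch (Python, with illustrative variable names), assuming the plot summarizes Oncologist 1's three risk groups:

```python
import numpy as np

# Oncologist 1's predicted risks and the observed event proportions
# for the three 100-patient groups in the mastate cancer example.
predicted = np.array([0.50, 0.35, 0.20])
observed = np.array([33 / 100, 17 / 100, 11 / 100])

errors = observed - predicted  # negative => predicted risk too high
mean_bias = errors.mean()      # systematic over- or under-prediction
sd_errors = errors.std()       # spread of the errors around the bias

print(f"Mean bias: {mean_bias:+.1%}")    # about -14.7%
print(f"SD of errors: {sd_errors:.1%}")  # about 4%
```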
Assessing/Quantifying Calibration • Eyeball / Gestalt • Bland-Altman Calibration Plots with Mean Bias and SD of Errors • Goodness of Fit Tests (e.g. Hosmer and Lemeshow)
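For the goodness-of-fit option, here is a hedged sketch of the Hosmer-Lemeshow statistic applied to the same three groups (the usual formulation groups by risk deciles; the conventional G − 2 degrees of freedom is assumed):

```python
import numpy as np
from scipy.stats import chi2

# Observed events, group sizes, and mean predicted risk per group
# (Oncologist 1's three groups from the mastate cancer example).
observed = np.array([11, 17, 33])
n = np.array([100, 100, 100])
predicted = np.array([0.20, 0.35, 0.50])

expected = n * predicted
hl_stat = np.sum((observed - expected) ** 2 / (expected * (1 - predicted)))
df = len(n) - 2  # conventional degrees of freedom for G groups
p_value = chi2.sf(hl_stat, df)

print(f"H-L statistic = {hl_stat:.1f}, p = {p_value:.4g}")
```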
Discrimination
• How well can the test separate subjects in the population from the mean probability toward values closer to 0 or 1?
• May be more generalizable than calibration.
• Often measured with the C-statistic (AUROC).
Discrimination (Oncologist 1) AUROC = 0.65
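The C-statistic is just the probability that a randomly chosen subject who develops the outcome was assigned a higher predicted risk than a randomly chosen subject who does not (ties counting one half). A minimal sketch for Oncologist 1's 300 patients (names illustrative):

```python
import numpy as np

def auroc(y_true, y_score):
    """C-statistic: P(random event subject outranks random non-event
    subject), with ties counted as one half."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(y_score, dtype=float)
    events, nonevents = scores[y_true], scores[~y_true]
    wins = (events[:, None] > nonevents[None, :]).sum()
    ties = (events[:, None] == nonevents[None, :]).sum()
    return (wins + 0.5 * ties) / (len(events) * len(nonevents))

# Oncologist 1's predictions and the observed outcomes
# (11, 17, and 33 events in the 20%, 35%, and 50% groups).
scores = [0.20] * 100 + [0.35] * 100 + [0.50] * 100
outcomes = [1] * 11 + [0] * 89 + [1] * 17 + [0] * 83 + [1] * 33 + [0] * 67
print(f"AUROC = {auroc(outcomes, scores):.2f}")  # 0.65
```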
True Risk
• Group 1 – Oncologist 1: 20%; Oncologist 2: 20%; true risk: 11.1%
• Group 2 – Oncologist 1: 35%; Oncologist 2: 20%; true risk: 16.7%
• Group 3 – Oncologist 1: 50%; Oncologist 2: 20%; true risk: 33.3%
Discrimination (True Risk): AUROC = 0.65
Random event occurs AFTER prognostic test.
1) Perform test to predict the risk of disease/outcome. (Future disease status remains to be determined by a stochastic process*.)
2) Spin needle to see if you develop the disease/outcome.
Only a crystal ball allows perfect prediction. (Maximum AUROC depends on the underlying risk distribution.)
*Cook NR. Use and misuse of the receiver operating characteristic curve in risk prediction. Circulation. 2007 Feb 20;115(7):928-35.
Maximum AUROC
• True risk 11.1%, frequency 0.33
• True risk 16.7%, frequency 0.33
• True risk 33.3%, frequency 0.33
Maximum AUROC = 0.65
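To see why 0.65 is a ceiling here, note that even the true risks cannot distinguish events from non-events within a risk group. A minimal sketch computing the maximum AUROC analytically from the risk distribution above (names illustrative):

```python
import numpy as np

# Three equally sized groups; a perfect predictor assigns the true risk.
risks = np.array([1/9, 1/6, 1/3])  # true risks (11.1%, 16.7%, 33.3%)
freq = np.array([1/3, 1/3, 1/3])   # share of the population in each group

# Probability that a random event / non-event falls in each group.
p_event = freq * risks / (freq * risks).sum()
p_nonevent = freq * (1 - risks) / (freq * (1 - risks)).sum()

# C-statistic: P(event scores higher) + 0.5 * P(tie), over group pairs.
wins = sum(p_event[i] * p_nonevent[j]
           for i in range(3) for j in range(3) if risks[i] > risks[j])
ties = (p_event * p_nonevent).sum()
print(f"Maximum AUROC = {wins + 0.5 * ties:.2f}")  # about 0.65
```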
Maximum AUROC Depends on Underlying Risk Distribution Maximum AUROC = 0.65
Maximum AUROC Depends on Underlying Risk Distribution Cook NR. Use and misuse of the receiver operating characteristic curve in risk prediction. Circulation. 2007 Feb 20;115(7):928-35.
Diagnostic versus Prognostic Tests
• Purpose: identify prevalent disease vs. predict incident disease/outcome
• Disease/outcome status determined: prior to test vs. after test
• Time frame: cross-sectional vs. cohort
• Test result: +/-, ordinal, or continuous (both)
• Result translates to: risk of having disease vs. risk of developing disease/outcome
• Maximum AUROC: 1 (gold standard) vs. <1 (not clairvoyant)
Value of Prognostic Information Why do you want to know risk of mastate cancer? To decide whether to do a mastatectomy.
Value of Prognostic Information
• It is 3 times worse to die of mastate cancer than to have an unnecessary mastatectomy: B = 3C.
• Should do mastatectomy when P > C/(C+B) = C/(C+3C) = 1/4 = 25%.
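A quick check of the threshold arithmetic, treating C as the cost of an unnecessary mastatectomy and B as the benefit of preventing a mastate cancer death:

```python
# Treatment threshold: treat when predicted risk P exceeds C / (C + B).
C = 1.0      # cost of an unnecessary mastatectomy (false positive)
B = 3.0 * C  # benefit of preventing a mastate cancer death (B = 3C)

threshold = C / (C + B)
print(f"Do mastatectomy when P > {threshold:.0%}")  # 25%
```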
Value of Prognostic Information: 300 patients (100 per risk group)
• Oncologist 1: 20% < 25%, no mastatectomy. 11 out of 100 die of mastate cancer; no mastatectomies.
• Oncologist 1: 35% > 25%, mastatectomy. 83 out of 100 unnecessary; no mastate cancer deaths.
• Oncologist 1: 50% > 25%, mastatectomy. 67 out of 100 unnecessary; no mastate cancer deaths.
Value of Prognostic Information: 300 patients (100 per risk group)
• Oncologist 2: 20% < 25%, no mastatectomy. 11 out of 100 die of mastate cancer; no mastatectomies.
• Oncologist 2: 20% < 25%, no mastatectomy. 17 out of 100 die; no mastatectomies.
• Oncologist 2: 20% < 25%, no mastatectomy. 33 out of 100 die; no mastatectomies.
Value of Prognostic Information: 300 patients (100 per risk group)
• True risk: 11% < 25%, no mastatectomy. 11 out of 100 die of mastate cancer; no mastatectomies.
• True risk: 17% < 25%, no mastatectomy. 17 out of 100 die; no mastatectomies.
• True risk: 33% > 25%, mastatectomy. 67 out of 100 unnecessary; no mastate cancer deaths.
Value of Prognostic Information: 300 patients (100 per risk group)
[Summary slide comparing the three strategies above: Oncologist 1, Oncologist 2, and true risk.]
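To make the comparison concrete, a minimal sketch that totals the relative costs of each strategy from the three slides above, scoring each death as 3 and each unnecessary mastatectomy as 1 (from B = 3C); the tallies come straight from the example:

```python
# Outcomes per strategy across the three 100-patient groups
# (deaths, unnecessary mastatectomies), taken from the slides above.
strategies = {
    "Oncologist 1": (11, 150),  # 11 deaths; 83 + 67 unnecessary surgeries
    "Oncologist 2": (61, 0),    # 11 + 17 + 33 deaths; no surgeries
    "True risk":    (28, 67),   # 11 + 17 deaths; 67 unnecessary surgeries
}

COST_DEATH, COST_SURGERY = 3, 1  # relative costs, since B = 3C

for name, (deaths, surgeries) in strategies.items():
    cost = COST_DEATH * deaths + COST_SURGERY * surgeries
    print(f"{name}: net cost = {cost}")
# Oncologists 1 and 2 tie at 183; acting on the true risk costs 151.
```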
Comparing Predictions • Decision to be made? (Mastatectomy) • Relative Costs of Errors? (C:B = 1:3) • Cost of Test? (Zero) • True Risk? (1/3; 1/6; 1/9)
Comparing Predictions
• Compare ROC curves and AUROCs.
• Reclassification tables*, Net Reclassification Improvement (NRI), Integrated Discrimination Improvement (IDI).
*Pencina et al. Stat Med. 2008 Jan 30;27(2):157-72.
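As a rough illustration of the reclassification idea (a simplified, single-cutoff version of the NRI in Pencina et al., not a full implementation), the sketch below compares Oncologist 2's flat 20% prediction with Oncologist 1's three-level prediction at the 25% treatment threshold:

```python
import numpy as np

def nri(y, risk_old, risk_new, cutoff=0.25):
    """Net Reclassification Improvement for one risk cutoff:
    events should move up across the cutoff, non-events down."""
    y = np.asarray(y, dtype=bool)
    risk_old, risk_new = np.asarray(risk_old), np.asarray(risk_new)
    up = (risk_new > cutoff) & (risk_old <= cutoff)
    down = (risk_new <= cutoff) & (risk_old > cutoff)
    nri_events = up[y].mean() - down[y].mean()
    nri_nonevents = down[~y].mean() - up[~y].mean()
    return nri_events + nri_nonevents

# Oncologist 2 (20% for everyone) vs. Oncologist 1, in the 300-patient example.
risk_old = [0.20] * 300
risk_new = [0.20] * 100 + [0.35] * 100 + [0.50] * 100
y = [1] * 11 + [0] * 89 + [1] * 17 + [0] * 83 + [1] * 33 + [0] * 67
print(f"NRI = {nri(y, risk_old, risk_new):.2f}")
# Positive: Oncologist 1 reclassifies better at this cutoff.
```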
Value of Prognostic Information
The best approach is decision-analytic. Consider:
• What decision is to be made?
• Costs of errors?
• Cost of test?
• True risk?
Comparing Predictions • Identify cohort. • Obtain predictions (or information necessary for prediction) at inception. • Provide uniform treatment to cohort or at least treat independent of (blinded to) prediction. • Determine outcomes. • Scenario: What would have happened if treatment were based on predicted risk? Would net costs have been lower with Risk Prediction 1 or Risk Prediction 2?
If Clinical Treatment Is Based on Predicted Risk
• If treatment is effective and higher-risk patients are treated more aggressively: discrimination is DECREASED.
• If the highest-risk patients are considered futile and support is withdrawn: discrimination is INCREASED.
• Especially relevant if the outcome is something like 7-day in-hospital survival.