The clinical value of diagnostic tests: a well-explored but underdeveloped continent. J. Hilden, March 2006
The clinical value of diagnostic tests: the diagnostic test and some neglected aspects of its statistical evaluation. Some aspects were covered in my seminar in spring 2003.
Plan of my talk Historical & ”sociological” observations Clinicometric framework Displays and measures of diagnostic power Appendix: math. peculiarities
Plan of my talk
• Historical & ”sociological” observations
 • ”Hits and misses” (Danish: ”Skud & vildskud”): diagnostic vs. therapeutic research
 • 3 key innovations & some pitfalls
• Clinicometric framework
• Displays and measures of diagnostic power
• Appendix: math. peculiarities
Trials concern what happens observably … they concern 1st-order entities (mean effects). Diagnostic activities aim at changing the doctor's mind … they concern 2nd-order entities (uncertainty / ”entropy” change). A quantitative framework for diagnostics is therefore much harder to devise than one for therapeutic trials. CONSORT (~1993) preceded its diagnostic counterpart STARD (~2003) by more than 10 years.
In the 1970s medical decision theory established itself, but few first-rate statisticians took notice. Were they preoccupied with other topics … Cox, prognosis, trial follow-up? Sophisticated models became available for describing courses of disease conditionally on diagnostic data. Fair to say that the diagnostic data themselves remained ‘a vector of covariates’?
Early history
• Yerushalmy ~1947: studies of observer variation*
• Vecchio 1966: the /:BLACK&WHITE:/ model; simplistic but indispensable, simple yet often misunderstood?!
• Warner ~1960: congenital heart disease via BFCI
*important but not part of my topic today
Other topics not mentioned Location (anatomical diagnoses) and multiple lesions Monitoring, repeated events, prognosis Systematic reviews & meta-analyses Interplay between diagnostic test data & knowledge from e.g. physiology Tests with a therapeutic potential Non-existence of ”prevalence-free” figures of merit Patient involvement, consent
BFCI (Bayes’ Formula with Conditional Independence**): ”based on the assumption of CI” - what does that mean? Do you see why it was misunderstood? **Indicant variables independent conditionally on the patient’s true condition
BFCI (Bayes’ Formula with Conditional Independence): ”Bayes based on the assumption of CI” - what does that mean? Two common readings:
• ”There is no Bayes theorem without CI”
• ”The BFCI formulae presuppose CI (CI is a necessary condition for correctness)”
No: CI is a sufficient condition; whether it is also necessary is a matter to be determined, and the answer is No. Counterexample: next picture!
Joint conditional distribution of two tests* in two diseases (red, green). *with 3 and 2 qualitative outcomes, respectively
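A counterexample of this kind can be verified numerically. The numbers below are my own hypothetical reconstruction in the spirit of the slide (not necessarily Hilden's actual figure): two joint conditional distributions, for a 3-outcome and a 2-outcome test, in which conditional independence fails in both diseases, yet the cell-wise likelihood ratios factorize into the marginal likelihood ratios, so the BFCI posterior is exactly correct.

```python
import numpy as np

# Joint conditional distributions P(test1, test2 | disease) for two diseases.
# Test 1 has 3 outcomes (rows), test 2 has 2 outcomes (columns).
# Hypothetical illustration values, not the slide's original numbers.
red   = np.array([[0.20, 0.20],
                  [0.25, 0.15],
                  [0.10, 0.10]])
green = np.array([[0.10, 0.10],
                  [0.25, 0.15],
                  [0.20, 0.20]])

def is_ci(joint):
    """Conditional independence: joint equals the outer product of its marginals."""
    return np.allclose(joint, np.outer(joint.sum(axis=1), joint.sum(axis=0)))

print(is_ci(red), is_ci(green))        # False False: CI fails in both diseases

# True per-cell likelihood ratio vs. the BFCI (naive-Bayes) reconstruction
true_lr  = red / green
naive_lr = np.outer(red.sum(axis=1) / green.sum(axis=1),
                    red.sum(axis=0) / green.sum(axis=0))
print(np.allclose(true_lr, naive_lr))  # True: BFCI posteriors are exactly right
```

So CI is sufficient but demonstrably not necessary: the factorization of the likelihood ratio, not of the joint distributions themselves, is what BFCI actually requires.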
Vecchio’s /:BLACK&WHITE:/ model (1966). Common misunderstandings:
• ”The sensitivity and specificity are properties of the diagnostic test [rather than of the patient population]”
• ”They are closely connected with the ability of the test to rule out & rule in”: true only when the ”prevalence” is intermediate
Plan of my talk Historical & ”sociological” observations Clinicometric framework Displays and measures of diagnostic power Appendix: math. peculiarities
You cannot discuss diagnostic tests without some conceptual framework*. A Case, the unit of experience in the clinical disciplines, is a case of a Clinical Problem, defined by the who-how-where-why-what of a clinical encounter – or Decision Task. We have a case population, or case stream (composition!), with a case flow (rate, intensity). *Clini[co]metrics, rationel klinik, …
Examples. Each time the doctor sees the patient we have a new encounter / case, to be compared with suitable ”statistical” precedents – and with physiology & pharmacology. Prognostic outlook at discharge from hospital: a population of cases = discharges, not patients (CPR Nos., i.e. Danish citizen numbers).
Diagnosis? Serious diagnostic endeavours are always action-oriented – or at least counselling-oriented – i.e., oriented towards what should be done so as to influence the future (action-conditional prognosis). The ”truth” is either
• a gold standard test (an ”answer key”; Danish: ”facitliste”), or
• syndromatic (when tests define the ”disease,”* e.g. rheumatic syndromes, diabetes)
*in clinimetrics there is little need for that word!
Example: the acute abdomen. There is no need to discriminate between appendicitis and non-appendicitis (though it is fun to run an ”appendicitis contest”). What is actionwise relevant is the decision: open up, or wait and see? (This is frequently not recognized in the literature.)
Data collection. In clinical studies the choice of sample, and of the variables on which to base one's prediction, must match the clinical problem as it presents itself at the time of decision making. In particular, one mustn't discard subgroups (impurities?) that did not become identifiable until later: prospective recognizability!
Data collection. Purity vs. representativeness: a meticulously filtered case stream ('proven infarctions') may be needed for patho- and pharmaco-physiological research, but is inappropriate as a basis for clinical decision rules [incl. cost studies].
Data collection (cf. STARD!).
• Consecutivity as a safeguard against selection bias.
• Standardization: who examines the patient? Where? When? With access to clinical data?
• The gold standard, with blinding, etc. … the big problem!!
• Safeguards against change of data after the fact.
”Discrepant analysis”: if the outcome is FALSE negative or positive, you apply an ”arbiter” test ”in order to resolve the discrepant finding,” i.e. a 2nd, 3rd, … reference test. If TRUE negative or positive, you accept! ~ As if the defendant decided who shall be allowed to give testimony, and when.
Digression… Randomized trials of diagnostic tests: theory under development. Purpose & design: many variants. Sub(-set-)randomization, depending on the patient’s data so far collected. ”Non-disclosure”: some data are kept under seal until analysis. No parallel in therapeutic trials! Main purposes…
…Randomized trials of diagnostic tests are warranted:
1) when the diagnostic intervention is itself potentially therapeutic;
2) when the new test is likely to redefine the disease(s) (cutting the cake in a completely new way);
3) when there is no obvious rule of translation from the outcomes of the new test to existing treatment guidelines;
4) when clinician behaviour is part of the research question…
…end of digression
Plan of my talk Historical & ”sociological” observations Clinicometric framework Displays and measures of diagnostic power Appendix: math. peculiarities
Displays & measures of diagnostic power. The Schism – between:
• ROCography
• VOIography
ROCography ~ classical discriminant analysis / pattern recognition. Focus on the disease-conditional distributions of test results (e.g., the ROC). AuROC (the area under the ROC) is popular … despite the 1991 counterexample paper (see below).
VOI (value of information) ~ decision theory. VOI = the increase in expected utility afforded by an information source such as a diagnostic test. Focus on the posttest conditional distribution of disorders, the range of actions and the associated expected utility – and on its preposterior quantification. Less concerned with mathematical structure, more with medical realism.
VOI. Do we have a canonical guideline? 1) UTILITY, 2) UTILITY / COST. Even if we don't have the utilities as actual numbers, we can use this paradigm as a filter: evaluation methods that violate it are wasteful of lives or resources. Stylized utility (pseudo-regret functions) serves as a mathematically convenient substitute.
VOI principles. Define diagnostic uncertainty as expected regret (utility loss, relative to knowing what ails the patient). Diagnosticity measures (DMs): diagnostic tests should be evaluated in terms of the pretest-posttest difference in diagnostic uncertainty. Auxiliary quantities like sens and spec … go into the above. …so much for the VOI principles.
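The DM idea above can be sketched numerically. This is a minimal illustration, assuming a binary disease with hypothetical regrets B (untreated diseased patient) and C (treated non-diseased patient) and hypothetical test characteristics; none of these numbers come from the talk.

```python
# VOI sketch: diagnostic uncertainty = expected regret of the best action,
# and the diagnosticity measure (DM) = pretest minus posttest uncertainty.
B, C = 4.0, 1.0            # hypothetical regrets: miss disease / overtreat
p = 0.3                    # hypothetical pretest probability of disease D
sens, spec = 0.9, 0.8      # hypothetical nosographic test characteristics

def regret(prob):
    """Expected regret of the better of the two actions (treat / withhold)."""
    return min(B * prob, C * (1.0 - prob))

pre = regret(p)                                   # pretest uncertainty

# Posttest: average the regret over the two test results (Bayes posteriors)
p_pos = p * sens + (1 - p) * (1 - spec)           # P(test positive)
post_pos = p * sens / p_pos                       # P(D | positive)
post_neg = p * (1 - sens) / (1 - p_pos)           # P(D | negative)
post = p_pos * regret(post_pos) + (1 - p_pos) * regret(post_neg)

dm = pre - post            # the DM; never negative for a coherent decision maker
print(pre, post, dm)
```

With these numbers the test cuts the expected regret from 0.70 to 0.26, a DM of 0.44 utility units per case; sens and spec enter only as ingredients of the posterior calculation, exactly as the slide says.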
Diagnosticity measures and auxiliary quantities: the BLACK&WHITE quantities that follow are auxiliary, NOT diagnosticity measures proper.
Diagnosticity measures and auxiliary quantities.
Sens (TP), spec (TN): nosographic distributions. PVpos, PVneg: diagnostic distributions given the test result.
Youden’s Index: Y = sens + spec – 1 = 1 – (FN) – (FP) = det(nosographic 2×2) = (TP)(TN) – (FP)(FN) = 2(AuROC – ½); AuROC = [sens + spec] / 2.
[ROC square figure: FP/TP axes, BLACK&WHITE point, with the lines Y = 0 and Y = 1]
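The chain of identities on this slide is easy to verify for a single binary (BLACK&WHITE) test, where TP, TN, FN, FP denote rates. The sens/spec values below are hypothetical illustration numbers.

```python
# Check of the slide's identities for one binary test (rates, not counts).
sens, spec = 0.85, 0.75          # hypothetical values
TP, TN = sens, spec
FN, FP = 1 - sens, 1 - spec

Y = sens + spec - 1                            # Youden's index
assert abs(Y - (1 - FN - FP)) < 1e-12
assert abs(Y - (TP * TN - FP * FN)) < 1e-12    # det of the nosographic 2x2

auroc = (sens + spec) / 2                      # one-point ROC: trapezoid area
assert abs(Y - 2 * (auroc - 0.5)) < 1e-12
print(Y, auroc)
```

Note that the determinant identity holds because (TP)(TN) – (FP)(FN) = sens·spec – (1–spec)(1–sens) simplifies algebraically to sens + spec – 1.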
Diagnosticity measures and auxiliary quantities.
Sens, spec: nosographic distribution. LRpos, LRneg = slopes of the ROC segments (pos and neg). The ”likelihood ratio” term is OK when diagnostic hypotheses are likened to scientific hypotheses.
Diagnosticity measures and auxiliary quantities.
«Utility index» = (sens) × Y … is nonsense.
Diagnosticity measures and auxiliary quantities.
DOR (diagnostic odds ratio) = [(TP)(TN)] / [(FP)(FN)] = infinity in this example (FP = 0), even if TP is only 0.0001. … careful!
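The degeneracy warned about here is immediate once written out; the rates below are hypothetical illustration values.

```python
# The slide's warning: with FP = 0 the DOR is infinite no matter how tiny
# the sensitivity, so an almost useless test can get a "perfect" score.
def dor(tp, tn, fp, fn):
    """Diagnostic odds ratio on rates; infinite when a denominator cell is 0."""
    return float('inf') if fp * fn == 0 else (tp * tn) / (fp * fn)

print(dor(tp=0.0001, tn=1.0, fp=0.0, fn=0.9999))  # inf, yet sens is only 0.0001
print(dor(tp=0.80, tn=0.90, fp=0.10, fn=0.20))    # a genuinely useful test
```

A figure of merit that cannot distinguish sens = 0.0001 from sens = 1 at FP = 0 clearly violates the utility filter of the VOI paradigm.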
Ideal test: three test outcomes.
Continuous test. A straight ROC segment implies constant misclassification; the cutoff at x = c minimizes misclassification.
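The minimizing cutoff can be found numerically. A minimal sketch, assuming (my choice, not the slide's) Gaussian disease-conditional score densities f (diseased) and g (non-diseased) with equal priors: total misclassification is minimized where the densities cross, f(c) = g(c).

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical unit-variance Gaussian score distributions
mu_d, mu_n = 2.0, 0.0            # diseased patients score higher on average

def misclass(c):
    # Equal priors: average of P(score < c | D) and P(score > c | non-D)
    return 0.5 * Phi(c - mu_d) + 0.5 * (1 - Phi(c - mu_n))

grid = np.linspace(-2.0, 4.0, 6001)
c_star = grid[np.argmin([misclass(c) for c in grid])]
print(round(float(c_star), 2))   # the midpoint, where f(c) = g(c)
```

With unequal priors or unequal misclassification losses the optimum shifts to where the ratio f(c)/g(c) equals the relevant odds-times-loss ratio, which is exactly the likelihood-ratio slope condition of the ROC.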
Parallelogram: two binary tests and their 6 most important joint rules of interpretation.
”Overhull” implies superiority (ROC slope = f(x) / g(x)).
Essence of the proof that ”overhull” implies superiority. [Figure with annotated points]
Utility-based evaluation in general:
∫ (p·dy + q·dx) · min_a { (L_{a,D}·p·dy + L_{a,nonD}·q·dx) / (p·dy + q·dx) }
is how it looks when applied to the ROC (which contains the required information about the disease-conditional distributions). Here p and q are the prior probabilities of D and non-D, dy and dx the increments of the TP and FP rates along the ROC, and L_{a,D}, L_{a,nonD} the losses of action a in each true condition.
The area under the ROC (AuROC) is misleading You have probably seen my counterexample* before. Assume D and non-D equally frequent and also utilitywise symmetric … *Medical Decision Making 1991; 11: 95-101
The tent graph: expected regret (utility drop relative to perfect diagnosis) plotted against pretest probability, with the labels B×sens and C×spec marking the lines of the graph.