
2nd Workshop 'Consensus working group on the use of evidence in economic decision models'

What can we learn from data? A comparison of direct, indirect and observational evidence on treatment efficacy. 2nd Workshop 'Consensus working group on the use of evidence in economic decision models', Department of Health Sciences, University of Leicester, September 26, 2005





  1. What can we learn from data? A comparison of direct, indirect and observational evidence on treatment efficacy. 2nd Workshop 'Consensus working group on the use of evidence in economic decision models', Department of Health Sciences, University of Leicester, September 26, 2005. Tony Ades, Debbi Caldwell, MRC Health Services Research Collaboration, Bristol

  2. Outline of presentation
Introduction: learning about parameters …
Fixed effect models
• Direct data; indirect data
• Observational data: one new study; a meta-analysis of observational data
Random effect models
• What to learn about: mean, variance, new or old groups?
• Direct and indirect data in RE models
• Observational evidence
• … and surrogate end-points (New!)

  3. Why might this be useful?
1. A "standard" systematic review is carried out. Has all relevant data been included?
2. Data is relevant if it reduces uncertainty … how effective might different kinds of data be in reducing uncertainty?
3. Synthesis agenda => research prioritisation agenda. Why collect more data if you don't know what you can learn from it?
4. … a scientific basis for a "hierarchy of evidence"?

  4. Data tells us nothing unless there is a model
• You must have something to learn about: a parameter
• If you know what you are going to learn about, you must know how much you already know about it: a prior distribution
• There must be a relationship between what the data estimates and the parameter: a model

  5. … it's partly a language thing
1. We need to distinguish between data, parameters and estimates. Terms like 'log odds ratio' tend to get used as if these were all the same thing.
2. Meta-analysis gives a "summary". A summary of what? … data, literature, estimates? There is no "summary" without a model.
3. "evidence" => MODEL => "medicine"

  6. FIXED EFFECT MODEL
Parameter δ and its prior distribution: δ ~ N(0, σ0²)  (uncertainty in the prior)
LOR data from an RCT: Y, with standard error S  (uncertainty in the data)
The model: Y ~ N(δ, S²)
FE: the data estimate exactly the parameter we want.
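The fixed effect update is conjugate, so the posterior for δ can be written in closed form. A minimal sketch in Python (the function name `posterior` and the example numbers are mine, for illustration only):

```python
import math

def posterior(prior_mean, prior_sd, y, s):
    """Conjugate normal-normal update for the fixed effect model:
    prior delta ~ N(prior_mean, prior_sd^2), data Y ~ N(delta, s^2).
    Precisions (inverse variances) add; the posterior mean is the
    precision-weighted average of the prior mean and the data."""
    w_prior = 1.0 / prior_sd ** 2
    w_data = 1.0 / s ** 2
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * y)
    return post_mean, math.sqrt(post_var)

# With a near-ignorant prior (sigma0 = 100), the posterior sd is
# essentially the data sd S: the data carry all the information.
m, sd = posterior(0.0, 100.0, y=-0.5, s=0.2)
```

With a tight prior (say σ0 = 0.1) the same data move the posterior much less, which is exactly the point of slide 9.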

  7. Method
An RCT gives DIRECT information on the parameter of interest. Strategy: how much can indirect comparisons, observational data, etc. tell us about the parameter δ … COMPARED TO the same amount of direct RCT evidence? Use the standard deviation S as a measure of "information".

  8. Scale of the day: log odds ratios
• LORs for treatment effects are usually well within the range -1 to +1, corresponding to odds ratios 0.4 to 2.5
• … and usually within the range -0.5 to +0.5, corresponding to ORs 0.6 to 1.65
• We need to think of uncertainty on this scale: values of σ0 or S > 1 are HIGH, < 0.25 LOW

  9. … the more you know, the less there is to learn
• If prior uncertainty is large (σ0 high), posterior uncertainty is dominated by the amount of new data, i.e. by S
• If prior uncertainty is already low, only a large amount of new data (S low) will make a difference.

  10. Indirect RCT evidence on parameter δAB
Target parameter: δAB ~ some prior
Model for the indirect evidence: YAC ~ N(δAC, SAC²), YBC ~ N(δBC, SBC²), with δBC = δAC - δAB
IF SAC = SBC = S, then the indirect evidence is equivalent to direct data with sd = √2·S = 1.414 S
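Since δAB = δAC - δBC, the indirect estimate and its sd follow directly from the two trial estimates; independent sampling variances add. A sketch (function name and numbers mine):

```python
import math

def indirect_ab(y_ac, s_ac, y_bc, s_bc):
    """Indirect estimate of delta_AB = delta_AC - delta_BC.
    Independent variances add, so sd = sqrt(s_ac^2 + s_bc^2)."""
    return y_ac - y_bc, math.sqrt(s_ac ** 2 + s_bc ** 2)

# With equally precise arms (S = 0.2 each), the indirect sd is sqrt(2)*S
est, sd = indirect_ab(0.3, 0.2, 0.8, 0.2)
```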

  11. … the weakest link
BUT the contribution of indirect comparisons depends on the weakest link: if SAC is high (weak evidence on AC), the contribution to δAB is small, no matter how much evidence there is on BC (i.e. no matter how low SBC). … So don't do a big literature search on BC if you know there is little evidence on AC (unless you are also interested in treatment C!)

  12. Multiple indirect comparisons (Debbi Caldwell's presentation)
Contribution to δAB, relative to direct RCT evidence with sd S:
via YAC, YBC (ONE indirect comparator) => 1.414 S
adding YAD, YBD (TWO) => 1.00 S
THREE => 0.82 S
FOUR => 0.71 S
FIVE => 0.63 S
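The whole table follows from one formula: each indirect route has variance 2S², and inverse-variance pooling of k independent routes divides that by k. A sketch reproducing the slide's figures (function name mine):

```python
import math

def indirect_sd(k, s):
    """Effective sd of k pooled indirect comparisons: each route
    (via C, via D, ...) has variance 2*s^2, and inverse-variance
    pooling of k independent routes gives variance 2*s^2 / k."""
    return math.sqrt(2.0 / k) * s

sds = [round(indirect_sd(k, 1.0), 2) for k in range(1, 6)]
# [1.41, 1.0, 0.82, 0.71, 0.63] -- the slide's 1.414 S down to 0.63 S
```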

  13. Observational data: one new study
Observational data is biased: it does not give us a direct estimate of δ. Instead: YOBS ~ N(δ + β, SOBS²)
… in any given case we don't know how big the bias β is, or its direction: "unpredictability"
Let's have a distribution, perhaps β ~ N(B, σB²), to describe our views about β?

  14. Prior distribution for bias: β ~ N(B, σB²)
1. As a 'first cut', suppose B = 0 is our "best guess"
2. For σB: how small or big might the bias be? An OR of 1.1 either way seems rather optimistic; an OR of 1.6 either way seems rather pessimistic
3. … assume these represent 95% credible limits on the amount of bias (in a "typical" single observational study), for example
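If an OR of 1.6 "either way" is read as a 95% credible limit on the bias, the implied sd on the LOR scale is log(1.6)/1.96. A sketch (function name mine):

```python
import math

def bias_sd_from_or(or_limit, z=1.96):
    """Convert a 95% credible limit on bias, stated as an odds ratio
    'either way', into the sd sigma_B of beta ~ N(0, sigma_B^2)."""
    return math.log(or_limit) / z

optimistic = bias_sd_from_or(1.1)   # ~0.05 on the LOR scale
pessimistic = bias_sd_from_or(1.6)  # ~0.24 on the LOR scale
```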

  15. In which the bias is on average 0.28 on the log scale (either way)

  16. [Figure: the case σB = 0]

  17. Some shortcomings of this analysis
1. Assumes that bias is not related to the true δ. Maybe larger δ => larger bias? … this could be modelled too
2. What is our belief about the "AVERAGE BIAS in observational studies"? B ~ N(M, σExp-B²). Taking B = 0 as known would mean M = 0 and σExp-B = 0 … No!

  18. A more reasonable view of the "average bias": B ~ N(M, σExp-B²)
• The consensus seems to be that observational studies tend to exaggerate effects, i.e. M > 0
• No problem: if we knew M exactly, we could adjust!
• The problem is that we don't, i.e. σExp-B > 0.

  19. Summary: the single observational study
Must include uncertainty in the study bias, uncertainty in the expectation of bias effects, and the size of the observational study:
YOBS ~ N(δ + β, SOBS²)
β ~ N(B, σB²)
B ~ N(M, σExp-B²)
=> YOBS ~ N(δ + M, SOBS² + σB² + σExp-B²)
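On the variance scale the three sources simply add, so even a huge observational study cannot shrink the effective sd below sqrt(σB² + σExp-B²). A sketch (function name and numbers mine):

```python
import math

def obs_effective_sd(s_obs, sigma_b, sigma_exp_b):
    """Effective sd of a single observational estimate of delta + M:
    sampling error, between-study bias variation, and uncertainty in
    the average bias all add on the variance scale."""
    return math.sqrt(s_obs ** 2 + sigma_b ** 2 + sigma_exp_b ** 2)

# A large study (s_obs = 0.05) is still dominated by bias uncertainty
sd = obs_effective_sd(0.05, 0.24, 0.1)
```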

  20. Meta-analysis of observational studies (1)
With ONE observational study, β ~ N(B, σB²) is interpreted as uncertainty in bias.
With several studies j = 1, 2, … J, βj ~ N(B, σB²) is interpreted as between-study variation in bias.
BUT the values of B and σB are the same …
… variation => predictive uncertainty

  21. Tablets in Stone: I. Variation and uncertainty
Uncertainty is a state of mind. It can be reduced, by collecting more data.
Variation is a fact about objects, people, studies, estimates. It cannot be reduced.
Predictive uncertainty that arises from variation cannot be reduced.

  22. Meta-analysis of observational studies (2)
A random effect observational meta-analysis would be:
YOBS-j ~ N(δ + βj, SOBS-j²)  data from study j
βj ~ N(B, σB²)  between-study variation in bias
B ~ N(M, σExp-B²)  uncertainty regarding "expected" bias
=> the pooled mean estimates δ + M: a biased estimate of the target parameter δ, biased by M … easily corrected …
So the meta-analysis (if large!) avoids the large uncertainty σB and replaces it with the smaller uncertainty σExp-B
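A minimal inverse-variance version of this meta-analysis, with each study's variance inflated by σB² and the pooled mean corrected by an assumed average bias M (function name and example numbers mine):

```python
import math

def pooled_obs_mean(ys, ses, sigma_b, m_bias=0.0, sigma_exp_b=0.0):
    """Pool observational LOR estimates by inverse variance, inflating
    each study's variance by the between-study bias variation sigma_b^2.
    The pooled mean is corrected by the assumed average bias M, and the
    uncertainty sigma_exp_b^2 about M is added to the pooled variance."""
    weights = [1.0 / (se ** 2 + sigma_b ** 2) for se in ses]
    mean = sum(w * y for w, y in zip(weights, ys)) / sum(weights)
    var = 1.0 / sum(weights) + sigma_exp_b ** 2
    return mean - m_bias, math.sqrt(var)

est, sd = pooled_obs_mean([-0.45, -0.35, -0.40], [0.10, 0.15, 0.20],
                          sigma_b=0.24, m_bias=0.10, sigma_exp_b=0.05)
```

As the number of studies grows, the first variance term vanishes and only σExp-B² remains, which is the slide's point.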

  23. How uncertain are we about M?
We can set some limits on the uncertainty regarding the "average bias" M. If studies suggest that, e.g., the average bias is an OR of 0.9, how uncertain is this? Credible limits 0.75 to 1.05? … etc.
… or carry out a huge meta-meta-analysis and obtain a posterior distribution for B ~ N(M, σExp-B²)

  24. "Fixed effect" parameter: summary
1. Indirect comparisons: a "weakest link" effect, but large uncertainty reduction is possible with more than one comparator
2. Observational data from a single study is very weak: between-study variation AND uncertainty in the "average bias"
3. The estimated mean from a random effect observational meta-analysis is more useful: ONLY the uncertainty in the average bias to worry about

  25. Random treatment effect models
Every RCT j = 1, 2, … J estimates a different parameter δj:
Yj ~ N(δj, Sj²)  the studies and their sampling error
δj ~ N(μRE, σRE²)  variation in the true effects, from a common RE distribution
μRE ~ N(0, σ0²)  uncertainty in the mean
σRE ~ ??  uncertainty in the between-trials variation
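For a new group drawn from the RE distribution, predictive uncertainty combines the uncertainty in the mean with the irreducible between-study variation σRE. A sketch (function name mine; `tau` stands for σRE):

```python
import math

def predictive_new_study(mu_hat, se_mu, tau):
    """Predictive distribution for delta_{J+1}, a new group drawn from
    the RE distribution: the estimated mean's uncertainty adds to the
    between-study variation tau^2, which no amount of data removes."""
    return mu_hat, math.sqrt(se_mu ** 2 + tau ** 2)

# Even with the mean pinned down (se = 0.02), the predictive sd > tau
_, pred_sd = predictive_new_study(-0.3, 0.02, 0.2)
```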

  26. What do we want to learn more about?
(a) The mean effect μRE
(b) The between-study variance σRE²
(c) The LOR δj in a patient group / protocol studied before
(d) The LOR δJ+1 in a new patient group / protocol from the same distribution
PROBLEM: the RE mean is an 'unbiased' estimate … but what is it an estimate of???

  27. What can we learn from one new RCT?
(a) Not much about the mean effect μRE, unless we can assume σRE (the between-studies variation) is very low
(b) Not much about the between-study variance σRE²
(c) The LOR δj: efficacy in a patient group / protocol studied before … then back to a fixed effect model for that group/protocol (split or lump?)

  28. What does an RE model tell us about the parameter of interest?
Given an RE distribution, i.e. μRE and σRE², we can work out:
• what we can say about efficacy in a new group
• how much data on parameter δj tells us about δk (data on one group, but information needed on another)

  29. What can we learn from observational studies, given an RE model?
1. Difficult: is each observational study giving us a biased estimate of some δj, or is it averaging over many δj and estimating a μRE? … but there is no guarantee it is the "same" μRE as in an RCT meta-analysis
2. At BEST (if there are many very large studies) the mean from an observational MA is an estimate of (μRE + M). The only problem (still): the uncertainty σExp-B about M

  30. What can we learn from indirect comparisons in a random effect context?
• An MTC RE meta-analysis provides unbiased information on the mean treatment effect μAB via μAC and μBC, just as δAB is informed by δAC and δBC
• Same "weakest link" effect
• Added bonus: far more information on σRE

  31. What can we learn from surrogate end-points?
• "Validated" surrogate end-points are rated high in the hierarchy of evidence …
• Validation, however, is usually within-trial

  32. What can we learn from surrogate end-points? The Daniels & Hughes model
Tj ~ N(θj, ST,j²)  data on the true end-point (TEP), study j
Zj ~ N(γj, SZ,j²)  data on the surrogate end-point (SEP)
θj = α + β·γj  regression relates the true TEP to the true SEP
If we knew the regression parameters α and β, then information on the SEP would be as good as information on the TEP … but we DON'T
(… also, this is an RE model: one study does not say much)
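Given estimates of α and β, a surrogate estimate can be projected onto the true end-point scale. A delta-method sketch, assuming (for simplicity, and unlike a full analysis) that the errors in α and β are independent with no covariance term (function name mine):

```python
import math

def predict_tep(z, s_z, alpha, beta, var_alpha, var_beta):
    """Project a surrogate estimate z ~ N(gamma, s_z^2) onto the true
    end-point theta = alpha + beta * gamma, propagating (assumed
    independent) uncertainty in alpha and beta by the delta method."""
    mean = alpha + beta * z
    var = var_alpha + z ** 2 * var_beta + beta ** 2 * s_z ** 2
    return mean, math.sqrt(var)

# If alpha and beta were known exactly, the SEP would be as good as
# the TEP: the projected sd reduces to beta * s_z
m0, s0 = predict_tep(0.5, 0.1, alpha=0.0, beta=1.0,
                     var_alpha=0.0, var_beta=0.0)
```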

  33. What do we really know about the regression slope?
• Regression of T against S in untreated cohort studies motivated the surrogacy concept: plenty of data on α and β … uncertainty small
• But people insist we can learn about α and β only from RCTs … back to uncertainty again!
• BUT then they also want to assume α and β are identical regardless of treatment … flip back to unrealistic certainty!

  34. [Figure: placebo vs. active]

  35. Surrogacy summary …
1. What is a realistic level of certainty in projecting from surrogate evidence to clinical end-points?
2. Careful analyses of data are required, in every case: CD4+ cell count, bone mineral density, blood pressure, cholesterol, etc.

  36. A research agenda: observational studies
1. Models for bias: additive, multiplicative, combined
2. Evidence-based estimates of the between-studies variation in bias, σB
3. An evidence-based distribution for B is needed => values for the average bias M and the uncertainty σExp-B²
4. Do B and σB depend on the type of study (case-control, cohort, register), or on the condition?

  37. A research agenda: RCTs
1. Evidence-based estimates of the between-studies variation in treatment effect: σRE = 0 is quite possible, but σRE > 0.25 is unlikely => 95% of ORs within 0.6 to 1.65 around their median?
2. The prior distribution δ ~ N(0, σ0²): why begin with complete ignorance about δ (σ0 > 100)? … LORs > 3 (either way) are VERY rare; σ0 = 0.55
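The claimed OR range can be checked directly: a between-study sd τ on the LOR scale puts 95% of the true ORs within exp(±1.96τ) of their median. A one-line sketch (function name mine):

```python
import math

def or_range(tau, z=1.96):
    """95% range of true odds ratios around their median implied by a
    between-study sd tau on the log-odds-ratio scale."""
    return math.exp(-z * tau), math.exp(z * tau)

low, high = or_range(0.25)   # roughly (0.61, 1.63), the slide's 0.6 to 1.65
```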

  38. Any data is 'relevant' if it reduces uncertainty … but how much it reduces uncertainty depends on the model that relates it to the parameter of interest.
Towards a hierarchy of RELEVANCE??? … work in progress
