Some methodological issues in value of information analysis: an application of partial EVPI and EVSI to an economic model of Zanamivir
Karl Claxton and Tony Ades
Partial EVPIs
Light at the end of the tunnel…… ……maybe it’s a train
Distribution of inb
[Figure: distribution of incremental net benefit (inb), fitted Normal distribution with mean = (£0.51) and std dev = £12.52; x-axis from (£40.00) to £40.00]
EVPI for the decision
EVPI = EV(perfect information) − EV(current information)
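A minimal Monte Carlo sketch of this definition, assuming a made-up two-strategy net benefit function in place of the Zanamivir model (the strategies, payoffs, and input distributions are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 10_000  # Monte Carlo samples of the uncertain inputs

# Illustrative stand-in for the decision model: two strategies whose
# net benefits depend on two uncertain inputs.
theta1 = rng.normal(0.5, 0.2, K)
theta2 = rng.normal(0.3, 0.1, K)
nb = np.column_stack([
    1000 * theta1,       # NB of strategy 0
    400 + 800 * theta2,  # NB of strategy 1
])

ev_current = nb.mean(axis=0).max()  # max_d E(NB_d): decide now, then average
ev_perfect = nb.max(axis=1).mean()  # E[max_d NB_d]: learn inputs, then decide
print(f"EVPI = {ev_perfect - ev_current:.2f}")
```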
Partial EVPI
EVPI_pip = EV(perfect information about pip) − EV(current information)
• EV(optimal decision for a particular resolution of pip) − EV(prior decision for the same resolution of pip)
• expectation of this difference over all resolutions of pip
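In Monte Carlo terms this is a nested (two-level) calculation: an outer loop over resolutions of pip, an inner loop over the remaining inputs. A sketch under the same illustrative assumptions as above (nb_model, the Beta prior for pip, and the single "other" input are stand-ins, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)

def nb_model(pip, other):
    """Illustrative net benefit for two strategies; not the paper's model."""
    return np.column_stack([1000 * pip, 400 + 800 * other])

K_outer, K_inner = 2_000, 2_000

# EV(current information): prior expected net benefit of the best strategy.
pip0, other0 = rng.beta(2, 8, 10_000), rng.normal(0.3, 0.1, 10_000)
ev_current = nb_model(pip0, other0).mean(axis=0).max()

# Outer loop: resolutions of pip; inner loop: residual uncertainty in 'other'.
ev_perfect_pip = 0.0
for pip in rng.beta(2, 8, K_outer):
    other = rng.normal(0.3, 0.1, K_inner)
    ev_perfect_pip += nb_model(np.full(K_inner, pip), other).mean(axis=0).max()
ev_perfect_pip /= K_outer

print(f"partial EVPI for pip = {ev_perfect_pip - ev_current:.2f}")
```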
Partial EVPI: some implications
• information about an input is only valuable if it changes our decision
• information is only valuable if pip does not resolve at its expected value
General solution:
• linear and non-linear models
• inputs can be (spuriously) correlated
Felli and Hazen (98) “short cut”
EVPI_pip = EVPI when all other inputs are resolved at their expected values
Appears counter-intuitive:
• we resolve all other uncertainties, then ask what is the value of pip, ie “residual” EVPI_pip?
But:
• resolving at the expected value does not give us any information
Correct if:
• linear relationship between inputs and net benefit
• inputs are not correlated
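A sketch of the short cut under the same illustrative assumptions: pin every input except pip at its expected value and compute a one-level EVPI over pip alone. Because the toy model is linear in its inputs and they are sampled independently, the result should agree with the nested partial EVPI above:

```python
import numpy as np

rng = np.random.default_rng(3)

def nb_model(pip, other):
    """Illustrative net benefit for two strategies; not the paper's model."""
    return np.column_stack([1000 * pip, 400 + 800 * other])

K = 10_000
pip = rng.beta(2, 8, K)
other_mean = np.full(K, 0.3)  # all other inputs resolved at their expected values

nb = nb_model(pip, other_mean)
shortcut = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"short-cut partial EVPI for pip = {shortcut:.2f}")
```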
So why different values?
• the model is linear
• the inputs are independent?
“Residual” EVPI
EVPI when all other inputs are resolved at each realisation?
• wrong current information position for partial EVPI
• what is the value of resolving pip when we already have perfect information about all other inputs?
• expect residual EVPI_pip < partial EVPI_pip
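Written out in the notation used later for EVSI (a restatement, not from the original slides), the residual quantity is

residual EVPI_pip = E_other[ E_pip[max_d(NB_d | pip, other)] − max_d E_pip(NB_d | pip, other) ]

where the baseline term max_d E_pip(NB_d | pip, other) already conditions on a known realisation of every other input. The partial EVPI instead takes max_d E(NB_d) under the full prior as its current information position, which is why the residual version starts from the wrong baseline.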
inb simplifies to: inb = …
Rearrange as a function of each input in turn:
• pip: inb = …
• pcz: inb = …
• phz: inb = …
• pcs: inb = …
• phs: inb = …
• upd: inb = …
• rsd: inb = …
Thompson and Evans (96) and Thompson and Graham (96)
• Felli and Hazen (98) used a similar approach
• Thompson and Evans (96) is a linear model
• emphasis on EVPI when others are set to their joint expected value
• requires payoffs as a function of the input of interest
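As a generic illustration of the intended rearrangement (the slide's own expressions are not reproduced here): if inb is linear in its inputs, it can be written for any single input, say pip, as

inb = a_pip + b_pip · pip

with a_pip and b_pip collecting the contribution of the remaining inputs. With the other inputs at their expected values, the preferred strategy switches at the breakeven value pip* = −a_pip / b_pip, which is why this approach needs the payoffs expressed as a function of the input of interest.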
Reduction in cost of uncertainty
RCUE(pip) = EVPI − EVPI(pip resolved at its expected value)
• intuitive appeal
• consistent with conditional probabilistic analysis
But:
• pip may not resolve at E(pip), and prior decisions may change
• it is the value of perfect information if forced to stick to the prior decision, ie the value of a reduction in variance
• expect RCUE(pip) < partial EVPI
Reduction in cost of uncertainty
RCU_pip = EVPI − E_pip[EVPI(given realisation of pip)]
= [EV(perfect information) − EV(current information)]
− E_pip[EV(perfect information, pip resolved) − EV(current information, pip resolved)]
Spurious correlation again?
RCU_pip = E_pip[EVPI − EVPI(given realisation of pip)] = partial EVPI
EVPI for strategies
Value of including a strategy?
• EVPI with and without the strategy included
• demonstrates bias
• difference = EVPI associated with the strategy?
EV(perfect information, all included) − EV(perfect information, strategy excluded):
E_all inputs[max_d(NB_d | all inputs)] − E_all inputs[max_d−1(NB_d−1 | all inputs)]
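A sketch of that difference, reusing the illustrative two-strategy model and treating strategy 1 as the candidate for exclusion (all names and numbers are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(4)
K = 10_000
pip = rng.beta(2, 8, K)
other = rng.normal(0.3, 0.1, K)

nb_all = np.column_stack([1000 * pip, 400 + 800 * other])  # strategy set d
nb_excluded = nb_all[:, :1]                                # strategy set d-1

# EV(perfect information) with and without the candidate strategy.
value = nb_all.max(axis=1).mean() - nb_excluded.max(axis=1).mean()
print(f"EV(PI, all included) - EV(PI, excluded) = {value:.2f}")
```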
Conclusions on partials
Life is beautiful……
Hegel was right…… progress is a dialectic
Maths don’t lie……
……but brute force empiricism can mislead
EVSI……
……it may well be a train
Hegel’s right again! ……contradiction follows synthesis
EVSI for model inputs
• generate a predictive distribution for a sample of n
• sample from the predictive and prior distributions to form a preposterior
• propagate the preposterior through the model
• value of information for a sample of n
• find n* that maximises EVSI minus the cost of sampling
(see the code sketch after the next slide)
EVSI for pip: epidemiological study of size n
• prior: pip ~ Beta(α, β)
• predictive: rip ~ Bin(pip, n)
• preposterior: pip′ = (pip(α+β) + rip) / (α+β+n)
• as n increases, var(rip/n) falls towards var(pip)
• var(pip′) < var(pip) and falls with n
• the pip′ are the possible posterior means
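A minimal sketch of the steps listed on the previous slide for pip, with the Beta prior and binomial study above, again using the illustrative model (prior parameters, study size, and the rest of the model are assumptions; because the toy model is linear in pip, propagating the posterior mean pip′ is sufficient):

```python
import numpy as np

rng = np.random.default_rng(5)

def nb_model(pip, other):
    """Illustrative net benefit for two strategies; not the paper's model."""
    return np.column_stack([1000 * pip, 400 + 800 * other])

a, b, n = 2.0, 8.0, 50  # Beta(alpha, beta) prior for pip; study of size n
K_outer, K_inner = 2_000, 2_000

# EV(current information).
pip0, other0 = rng.beta(a, b, 10_000), rng.normal(0.3, 0.1, 10_000)
ev_current = nb_model(pip0, other0).mean(axis=0).max()

# Preposterior loop: simulate the study, update, re-run the model.
ev_sample = 0.0
for _ in range(K_outer):
    pip_true = rng.beta(a, b)              # draw a "true" pip from the prior
    rip = rng.binomial(n, pip_true)        # predictive study result
    pip_post = (a + rip) / (a + b + n)     # possible posterior mean pip'
    other = rng.normal(0.3, 0.1, K_inner)  # remaining, unstudied uncertainty
    ev_sample += nb_model(np.full(K_inner, pip_post), other).mean(axis=0).max()
ev_sample /= K_outer

print(f"EVSI for n = {n} observations on pip = {ev_sample - ev_current:.2f}")
```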
EVSI_pip = reduction in the cost of uncertainty due to n observations on pip
= difference in partials (EVPI_pip − EVPI_pip′)
E_pip[E_other[max_d(NB_d | other, pip)] − max_d E_other(NB_d | other, pip)]
− E_pip′[E_other[max_d(NB_d | other, pip′)] − max_d E_other(NB_d | other, pip′)]
E(pip′) = E(pip), so:
E_pip[max_d E_other(NB_d | other, pip)] = E_pip′[max_d E_other(NB_d | other, pip′)]
pip′ has smaller variance, so any realisation is less likely to change the decision:
E_pip[E_other[max_d(NB_d | other, pip)]] > E_pip′[E_other[max_d(NB_d | other, pip′)]]
EVSI_pip
Why not the difference between prior and preposterior EVPI?
• the effect of pip′ is only through var(NB)
• the decision changes for the realisation of pip′ once the study is completed
• the difference between prior and preposterior EVPI will underestimate EVSI_pip
Implications
• EVSI for any input that is conjugate
• generate the preposterior for the log odds ratio for complication, hospitalisation etc
• trial design for an individual endpoint (rsd)
• trial designs with a number of endpoints (pcz, phz, upd, rsd)
• n for an endpoint will be uncertain (n_pcz = n*pip, etc)
• consider optimal n and allocation (search for n*, sketched below)
• combine different designs, eg:
• obs study (pip) and trial (upd, rsd), or obs study (pip, upd) and trial (rsd)…. etc
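A sketch of the n* search under the same illustrative assumptions: wrap the EVSI calculation above in a function of n and maximise the expected net benefit of sampling over a grid of candidate study sizes (the per-observation cost and the grid are placeholders):

```python
import numpy as np

rng = np.random.default_rng(6)

def nb_model(pip, other):
    """Illustrative net benefit for two strategies; not the paper's model."""
    return np.column_stack([1000 * pip, 400 + 800 * other])

def evsi(n, a=2.0, b=8.0, K=2_000, K_inner=1_000):
    """EVSI of an n-observation study on pip (illustrative model)."""
    pip0, other0 = rng.beta(a, b, 10_000), rng.normal(0.3, 0.1, 10_000)
    ev_current = nb_model(pip0, other0).mean(axis=0).max()
    ev_sample = 0.0
    for _ in range(K):
        rip = rng.binomial(n, rng.beta(a, b))  # simulated study result
        pip_post = (a + rip) / (a + b + n)     # posterior mean pip'
        other = rng.normal(0.3, 0.1, K_inner)
        ev_sample += nb_model(np.full(K_inner, pip_post), other).mean(axis=0).max()
    return ev_sample / K - ev_current

# Expected net benefit of sampling: EVSI(n) minus a placeholder sampling cost.
cost_per_obs = 0.05
enbs = {n: evsi(n) - cost_per_obs * n for n in [10, 25, 50, 100, 200]}
n_star = max(enbs, key=enbs.get)
print(f"n* = {n_star}")
```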