Markov Models II
HS 249T Spring 2008
Brennan Spiegel, MD, MSHS
VA Greater Los Angeles Healthcare System
David Geffen School of Medicine at UCLA
UCLA School of Public Health
CURE Digestive Diseases Research Center
UCLA/VA Center for Outcomes Research and Education (CORE)
Topics
• More on Markov models versus decision trees
• More examples of Markov models
• Calculating annual transition probabilities
  • Time independent (Markov chains)
  • Time dependent (Markov processes)
• Temporary and tunnel states
• Half-cycle corrections
Disadvantages of Traditional Decision Trees
• Limited to one-way progression, with no opportunity to "go back"
• Can become unwieldy in short order
• Difficult to capture the dynamic path of moving between health states over time
• Often fail to accurately reflect clinical reality
Markov Models
• Allow dynamic movement between relevant health states
• Allow enhanced flexibility to better emulate clinical reality
• Acknowledge that different people follow different paths through health and disease
Example Markov Model (Inadomi et al., Ann Intern Med 2003)
Markov Model
[Figure sequence: a four-state Markov model with states Alive with Barrett's, Alive without Barrett's, Dead with Barrett's, and Dead without Barrett's; successive slides animate the cohort's transitions from Year 0 through Year 1 and onward until the end of the simulation]
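To make the cycling concrete, here is a minimal Python sketch of a four-state cohort simulation patterned on the model above. The transition probabilities, starting distribution, and five-year horizon are invented for illustration; they are not the values used by Inadomi et al.

```python
import numpy as np

# Hypothetical 4-state Markov cohort model, loosely patterned on the
# Barrett's esophagus example above.  All numbers are illustrative.
states = ["Alive, Barrett's", "Alive, no Barrett's",
          "Dead, Barrett's", "Dead, no Barrett's"]

# Row = current state, column = next state; each row sums to 1.
P = np.array([
    [0.93, 0.00, 0.07, 0.00],   # Alive with Barrett's
    [0.00, 0.96, 0.00, 0.04],   # Alive without Barrett's
    [0.00, 0.00, 1.00, 0.00],   # Dead with Barrett's (absorbing)
    [0.00, 0.00, 0.00, 1.00],   # Dead without Barrett's (absorbing)
])

cohort = np.array([0.2, 0.8, 0.0, 0.0])    # distribution at Year 0
for year in range(1, 6):                   # run 5 annual cycles
    cohort = cohort @ P
    print(f"Year {year}: " +
          ", ".join(f"{s}={p:.3f}" for s, p in zip(states, cohort)))
```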
Decision Trees and Markov Models May Co-Exist
• Each provides a different type of information
• The information from the two is not mutually exclusive
• A Markov model can be "tacked" onto the end of a traditional decision tree
[Figure: decision tree for chronic HBV comparing treatment strategies at the Start node (No Therapy, Interferon, Lamivudine, Adefovir, Adefovir Salvage, Continue Lamivudine), with branches for virological response vs. no response and adefovir resistance vs. no resistance; terminal branches lead either to a normal lifespan or into a cirrhosis Markov model]
[Figure: Markov Model #1 for chronic HBV on treatment, with states Chronic HBV, Virological Response, Virological Relapse, Virological Resistance, and Uncomplicated Cirrhosis; the Uncomplicated Cirrhosis state feeds into the cirrhosis Markov model shown next]
[Figure: Markov Model #2 for cirrhosis, with states Uncomplicated Cirrhosis, Complicated Cirrhosis, Hepatocellular Carcinoma, Liver Transplant, and Death]
[Figure: Markov model with states No GI or CV Complications, Dyspepsia, GI Bleed, Post GI Bleed, Myocardial Infarction, Post Myocardial Infarction, and Death]
[Figure: Markov model for hepatic encephalopathy (HE), with states START, Sub-Clinical HE, Overt HE, Clinical Response, Non-HE Complication, Hepatocellular Cancer, Liver Transplantation, and Death]
Annual Probability Estimates

Transition                                        Annual Probability Estimate
Cirrhosis in HBeAg(-)                             4.0%
Cirrhosis in HBeAg(+)                             2.2%
Chronic HBV → liver cancer                        1.0%
Cirrhosis → liver cancer                          2.1%
Compensated cirrhosis → decompensated cirrhosis   3.3%
Decompensated cirrhosis → liver transplant        25%
Liver cancer → liver transplant                   30%
Death in compensated cirrhosis                    4.4%
Death in decompensated cirrhosis                  30%
Death in liver cancer                             43%
Converting Data Into Annual Probability Estimates
• Cannot simply divide long-term data by the number of years
• Example: If the 5-year risk of an event is 40%, the annual risk is not 40% / 5 = 8%
Converting Data Into Annual Probability Estimates
General rule for converting long-term data into annual probabilities:
1 - (1 - x)^Y = probability at Y years
where x is the annual probability and Y is the number of years
Example of Converting Long-Term Data into Annual Probability
If the probability of a bleed at 5 years = 0.40, then the annual probability x is found as follows:
1 - (1 - x)^5 = 0.40
(1 - x)^5 = 1 - 0.40
(1 - x)^5 = 0.60
(1 - x) = 0.60^(1/5) = 0.903
x = 0.097 … or 9.7%
Example of Converting Long-Term Data into Annual Probability
Check for errors by back-calculating with the original equation:
1 - (1 - annual probability)^Y = probability at Y years
1 - (1 - 0.097)^5 = 0.40
1 - (0.903)^5 = 0.40
0.40 = 0.40
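The conversion and the back-check can also be written as two short helpers. This is a minimal sketch of the formula above; the function names are ours, not part of any particular package.

```python
def annual_probability(p_long_term: float, years: float) -> float:
    """Annual probability x such that 1 - (1 - x)**years = p_long_term."""
    return 1 - (1 - p_long_term) ** (1 / years)

def long_term_probability(p_annual: float, years: float) -> float:
    """Back-calculation: 1 - (1 - annual probability)**years."""
    return 1 - (1 - p_annual) ** years

p_annual = annual_probability(0.40, 5)      # ~0.097, i.e. 9.7% per year
check = long_term_probability(p_annual, 5)  # recovers 0.40
print(f"annual = {p_annual:.3f}, back-calculated 5-year risk = {check:.2f}")
```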
Steps to Combining Time-Independent Transition Probabilities
Step 1: Collect and abstract relevant studies
Step 2: Select a common cycle length
Step 3: Convert all studies to common cycle-length units
Step 4: Calculate common-cycle transition probabilities
Step 5: Combine the common-cycle probabilities
[Example: study data converted to a common cycle length; mean = 21.3% per 12-month cycle]
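Below is a minimal sketch of Steps 3 through 5 using hypothetical study data. The probabilities, follow-up lengths, and the simple unweighted mean are assumptions for illustration; they do not reproduce the 21.3% figure above.

```python
# Hypothetical studies reporting the same event over different follow-up
# periods; convert each to a 12-month probability, then combine.
studies = [
    {"probability": 0.40, "months": 24},   # 40% over 2 years
    {"probability": 0.15, "months": 6},    # 15% over 6 months
    {"probability": 0.25, "months": 12},   # 25% over 1 year
]

def rescale(p: float, months_in: float, months_out: float = 12.0) -> float:
    """Convert a probability over months_in to one over months_out,
    assuming a constant underlying event rate."""
    return 1 - (1 - p) ** (months_out / months_in)

annual = [rescale(s["probability"], s["months"]) for s in studies]
print("12-month probabilities:", [f"{p:.3f}" for p in annual])
print(f"Simple mean 12-month probability: {sum(annual) / len(annual):.3f}")
```

In practice the study-specific estimates would usually be combined with weights (for example, by sample size) rather than a simple mean.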
Many Probabilities Are Time Dependent
• Time independence is usually a simplifying assumption
• Progress through many systems in health care (biological, organizational, psychosocial, etc.) is erratic and non-linear
• May need to account for time-dependent transition probabilities using:
  • Tables
  • Tunnel states
Using Tables for Time-Dependent Probabilities
• Tables allow transition probabilities to vary cycle by cycle
• Allow greater precision for processes that are non-linear

Time Independent        Time Dependent
Cycle   Probability     Cycle   Probability
1       0.05            1       0.10
2       0.05            2       0.08
3       0.05            3       0.07
4       0.05            4       0.06
5       0.05            5       0.05
6       0.05            6       0.04
7       0.05            7       0.03
8       0.05            8       0.02
[Figure: probability vs. cycle for a time-independent (linear) curve, a time-dependent curve with non-linear diminishing returns, and a time-dependent curve with non-linear accelerating returns]
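A brief sketch of how a cycle-indexed table drives a time-dependent transition, using the time-dependent column from the table above (the loop and print-out are ours):

```python
# Cycle-indexed probability table: the transition probability is looked up
# by cycle number instead of being a single constant.
time_dependent_p = {1: 0.10, 2: 0.08, 3: 0.07, 4: 0.06,
                    5: 0.05, 6: 0.04, 7: 0.03, 8: 0.02}

alive = 1.0
for cycle in range(1, 9):
    p_event = time_dependent_p[cycle]       # table lookup for this cycle
    alive *= (1 - p_event)
    print(f"Cycle {cycle}: p_event={p_event:.2f}, still event-free={alive:.3f}")
```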
Using Tunnel States
• Some events can interfere with otherwise orderly Markov chains
• Subjects can get "stuck in a rut" that removes them from the usual flow of events (e.g., developing cancer)
• Tunnel states add flexibility to Markov models:
  • Model getting "stuck in the rut"
  • Compartmentalize processes into component states
  • Can model various "recovery states" from the "rut"
  • Can incorporate time-dependent transitions
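Here is a minimal sketch of a tunnel: after an event the cohort must pass through a fixed sequence of post-event states in order, each with its own transition probabilities, before it can rejoin the usual flow. All state names and probabilities are invented for illustration.

```python
# Illustrative tunnel: after an event (e.g. developing cancer) the cohort
# passes through three post-event "tunnel" states, each with its own
# death probability, before returning to the Well state.
states = ["Well", "PostEvent_1", "PostEvent_2", "PostEvent_3", "Dead"]

P = {
    "Well":        {"Well": 0.90, "PostEvent_1": 0.08, "Dead": 0.02},
    "PostEvent_1": {"PostEvent_2": 0.80, "Dead": 0.20},   # year 1 in the "rut"
    "PostEvent_2": {"PostEvent_3": 0.90, "Dead": 0.10},   # year 2
    "PostEvent_3": {"Well": 0.95, "Dead": 0.05},          # year 3, then recover
    "Dead":        {"Dead": 1.0},
}

cohort = {s: 0.0 for s in states}
cohort["Well"] = 1.0
for cycle in range(1, 6):
    nxt = {s: 0.0 for s in states}
    for src, row in P.items():
        for dst, p in row.items():
            nxt[dst] += cohort[src] * p
    cohort = nxt
    print(f"Cycle {cycle}: " + ", ".join(f"{s}={v:.3f}" for s, v in cohort.items()))
```

Because each tunnel state is visited for exactly one cycle, giving each its own probabilities is what lets the model behave differently in the first, second, and third year after the event.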
Half-Cycle Corrections
• In "real life," events can occur at any time during a given cycle – the timing is essentially random
• The default setting for Markov models is for events to occur at the exact end of each cycle
• This default can lead to errors in the calculation of average values
• It will tend to overestimate benefits (e.g., life expectancy) by about half of a cycle
Rationale for Half-Cycle Corrections “In whatever cycle a ‘member’ of the cohort analysis dies, they have already received a full cycle’s worth of state reward, at the beginning of the cycle. In reality, however, deaths will occur halfway through a cycle on average. So, someone that dies during a cycle should lose half of the reward they received at the beginning of the cycle.” - TreeAge Pro Manual, p476
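A minimal numerical sketch of the correction applied to a cohort trace. The survival trace below is invented for illustration and is not taken from the figures that follow.

```python
# Half-cycle correction on a cohort trace.  Membership is recorded at the
# cycle boundaries; the trace values are illustrative assumptions.
trace = [1.0, 0.8, 0.5, 0.2, 0.0]   # proportion alive at cycles 0..4

# Crediting a full cycle's reward at the *beginning* of each cycle
# overestimates life expectancy; crediting it at the *end* underestimates.
begin_of_cycle = sum(trace[:-1])    # 2.5 cycles
end_of_cycle   = sum(trace[1:])     # 1.5 cycles

# Half-cycle correction: weight the first and last entries by one half,
# i.e. the trapezoidal area under the survival curve.
half_cycle = sum(trace) - 0.5 * (trace[0] + trace[-1])   # 2.0 cycles

print(f"Begin-of-cycle sum: {begin_of_cycle:.2f}")
print(f"End-of-cycle sum:   {end_of_cycle:.2f}")
print(f"Half-cycle corrected: {half_cycle:.2f}")
```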
[Figures: proportion alive vs. cycle (0 to 4) for the same cohort under different reward-timing conventions; the three plots show areas under the curve of 2.0, 2.5, and approximately 2.0]