
A shared random effects transition model for longitudinal count data with informative missingness



  1. A shared random effects transition model for longitudinal count data with informative missingness. Jinhui Li, joint work with Yingnian Wu and Xiaowei Yang.

  2. Outline • Informative missingness • SPMTM (shared-parameter Markov transition models) • Bayesian Inference • Gibbs Sampling methods • Application to smoking cessation data • Future work

  3. Data structure • Drug abuse research and longitudinal design: the effects of treatments or interventions are expected to change behavior over time. • Missing data are often involved: participants with drug dependence frequently miss their scheduled clinic visits or drop out of studies prematurely. • The data contain the following (a minimal layout sketch follows this slide): ---- the repeated measures Y ---- the missingness patterns R ---- the covariates X
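
A minimal sketch of how the three components might be laid out as arrays. The names and shapes (n subjects by T occasions, p covariates) are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Illustrative layout only: n subjects observed at T occasions.
n, T, p = 172, 12, 3                  # subjects, occasions, covariates (assumed sizes)

Y = np.zeros((n, T), dtype=int)       # repeated count measures (e.g., cigarettes per week)
R = np.zeros((n, T), dtype=int)       # missingness indicator: 0 = observed,
                                      # 1 = intermittent missing, 2 = missing due to dropout
X = np.zeros((n, T, p))               # covariates / design matrix, possibly time-varying

# Entries of Y with R != 0 are unobserved and are treated as missing data.
```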

  4. Missingness [cases-by-variables data matrix diagram omitted] ---- Missing values on covariates (design matrix) ---- Intermittent missing values (repeated measures) ---- Missing values due to dropout (repeated measures)

  5. Missingness (2) • We adopt the commonly accepted concepts about missingness (Rubin, 1976; Little & Rubin, 2002), partitioning Y = (Y_obs, Y_mis); see the display after this slide. • MCAR (missing completely at random): the missingness does not depend on Y at all. • MAR (missing at random): the missingness may depend on Y_obs but not on Y_mis. • NMAR (missing not at random): the above conditions are violated, i.e. the missingness depends on Y_mis.
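
The standard formal definitions, written in the selection-model notation of Little & Rubin (2002); conditioning on covariates is suppressed for brevity.

```latex
% Standard definitions (Little & Rubin, 2002), with Y = (Y_obs, Y_mis):
\begin{align*}
\text{MCAR:}\quad & f(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}) = f(R) \\
\text{MAR:}\quad  & f(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}) = f(R \mid Y_{\mathrm{obs}}) \\
\text{NMAR:}\quad & f(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}) \text{ depends on } Y_{\mathrm{mis}}
\end{align*}
```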

  6. Missingness (3) • The possible relationships between the repeated measures Y and the missingness patterns R in the case of NMAR: (A) outcome-dependent, (B) shared-parameter, (C) pattern-mixture (see the display after this slide). • Cases (A) and (C) are also nonignorable missingness; informative missingness refers to case (B), the shared-parameter case, which is our focus.
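
The diagrams on the original slide are not reproduced; the three families correspond to the standard factorizations of the joint distribution of (Y, R), with b denoting a shared latent random effect.

```latex
% Standard factorizations of f(Y, R); b denotes a shared latent random effect.
\begin{align*}
\text{(A) Outcome-dependent (selection):}\quad & f(Y, R) = f(Y)\, f(R \mid Y) \\
\text{(B) Shared-parameter:}\quad              & f(Y, R) = \int f(Y \mid b)\, f(R \mid b)\, f(b)\, db \\
\text{(C) Pattern-mixture:}\quad               & f(Y, R) = f(R)\, f(Y \mid R)
\end{align*}
```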

  7. Shared-parameter Markov transition model (SPMTM) • Order-1 Markov chain --- modeling the repeated measures Y (a sketch of one possible specification follows this slide). • 3-category logit regression --- modeling the missingness indicators R (observed, intermittent missing, dropout).
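
The exact formulas were not preserved in the transcript. The following is one plausible specification consistent with the slide's description; the Poisson choice and the symbols β, γ, α_k, λ_k are assumptions, not taken from the source.

```latex
% One plausible SPMTM specification (assumed, not taken from the slides).
% Repeated counts: order-1 Markov (transition) model with shared random effect b_i.
\begin{align*}
Y_{it} \mid Y_{i,t-1}, b_i &\sim \mathrm{Poisson}(\mu_{it}), \qquad
\log \mu_{it} = x_{it}'\beta + \gamma \log(Y_{i,t-1} + 1) + b_i, \\
b_i &\sim N(0, \sigma_b^2).
\end{align*}
% Missingness: 3-category logit (baseline category 0 = observed),
% sharing the same random effect b_i.
\begin{align*}
\Pr(R_{it} = k \mid b_i) &=
\frac{\exp(x_{it}'\alpha_k + \lambda_k b_i)}
     {1 + \sum_{l=1}^{2} \exp(x_{it}'\alpha_l + \lambda_l b_i)},
\qquad k = 1, 2,
\end{align*}
where $k=1$ denotes intermittent missingness and $k=2$ denotes dropout.
```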

  8. Note on SPMTM • Here the shared parameter is the random effect b. • Y and R are conditionally independent given b, which can be assumed to follow a normal distribution. • There are some minor constraints on the missingness model. • This is case (B), shared-parameter missingness (see the likelihood sketch after this slide).
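
The conditional-independence assumption implies that subject i's contribution to the likelihood integrates the random effect out of the product of the two submodels; writing the random-effect density as φ(b; 0, σ_b²):

```latex
% Joint likelihood implied by conditional independence of Y and R given b:
\begin{align*}
f(Y_i, R_i \mid \theta)
  = \int f(Y_i \mid b_i, \theta)\, f(R_i \mid b_i, \theta)\,
         \varphi(b_i; 0, \sigma_b^2)\, db_i .
\end{align*}
```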

  9. Bayesian inference • Denote the parameters as θ, which follow a certain prior distribution π(θ), and denote the data as D = (Y, R, X), with conditional distribution p(D | θ). • We can apply Bayes' rule to get the posterior distribution: p(θ | D) ∝ p(D | θ) π(θ) (an augmented form is sketched after this slide).
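
Because the random effects b and the intermittent missing values Y_mis enter the likelihood, it is convenient to work with the augmented posterior. The display below is a sketch in the notation introduced above; the factorization follows from the model, but the symbols are mine.

```latex
% Augmented posterior over parameters, random effects, and missing outcomes:
\begin{align*}
p(\theta, b, Y_{\mathrm{mis}} \mid Y_{\mathrm{obs}}, R, X)
  \;\propto\;
  \prod_{i=1}^{n} f(Y_i \mid b_i, \theta)\, f(R_i \mid b_i, \theta)\,
  \varphi(b_i; 0, \sigma_b^2)\; \pi(\theta).
\end{align*}
```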

  10. Bayesian inference (2) • How do we get the statistics of the posterior distribution? By sampling: we draw parameters from the posterior distribution and estimate posterior quantities with the corresponding sample statistics (a minimal illustration follows this slide).
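
A minimal illustration of the Monte Carlo idea, assuming posterior draws of the treatment effect are already available; the placeholder draws and variable names are illustrative only.

```python
import numpy as np

# Suppose `draws` holds posterior samples of the treatment effect,
# e.g. produced by the Gibbs sampler described on the next slides.
rng = np.random.default_rng(0)
draws = rng.normal(-0.5, 0.1, size=5000)   # placeholder draws for illustration only

post_mean = draws.mean()                              # posterior mean estimate
post_sd = draws.std(ddof=1)                           # posterior standard deviation
ci_low, ci_high = np.quantile(draws, [0.025, 0.975])  # 95% credible interval
print(post_mean, post_sd, (ci_low, ci_high))
```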

  11. Sampling strategies • Gibbs sampling is adopted. Remember that we have missing data involved. Starting from some initial values, we iterate two steps (a code sketch of the loop follows this slide): 1. Imputation step: draw each intermittent missing value from its conditional predictive distribution, e.g. p(y_mis | y_obs, R, θ, b). 2. Probability step: sample one parameter at a time, conditional on all the others, e.g. p(θ_j | θ_{-j}, b, Y, R). • Here "−" denotes exclusion of the target component. The conditional distributions are obtained by fixing everything else in the joint posterior and leaving only the target free.
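
A schematic of the two-step loop. The functions draw_missing and draw_param are placeholders standing in for the conditional draws described above (e.g. ARS or griddy Gibbs updates), not the authors' implementation.

```python
import numpy as np

def gibbs(Y, R, X, n_iter, init_theta, draw_missing, draw_param):
    """Schematic data-augmentation Gibbs loop.

    draw_missing(Y, R, X, theta) -> Y with intermittent missing entries imputed
    draw_param(j, theta, Y, R, X) -> new value of parameter j given all others
    """
    theta = dict(init_theta)          # current parameter values
    samples = []
    for it in range(n_iter):
        # 1. Imputation step: fill in intermittent missing values.
        Y = draw_missing(Y, R, X, theta)
        # 2. Probability step: update each parameter given all the others.
        for j in theta:
            theta[j] = draw_param(j, theta, Y, R, X)
        samples.append(dict(theta))
    return samples
```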

  12. Sampling strategies (2) • Sampling from the conditional distributions (a griddy-Gibbs sketch follows this slide): 1. If the conditional is log-concave, or log-concave after a simple transformation: adaptive rejection sampling (Gilks and Wild, 1992). >> The basic idea is to set an upper hull and a lower hull for the target density and update them with each rejected sample point. 2. General case: the griddy Gibbs method (Ritter and Tanner, 1992). >> The basic idea is to estimate the support of the target distribution, discretize it into a grid, and sample from the grid points. • In both cases, the factors not involving the target parameter can be treated as constants, which saves the effort of normalizing by integration.
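
A minimal griddy-Gibbs draw from an unnormalized univariate full conditional, assuming a user-supplied log-density and a plausible grid range; this is a generic sketch, not the authors' code.

```python
import numpy as np

def griddy_gibbs_draw(log_cond, lo, hi, n_grid=200, rng=None):
    """Draw one value from an unnormalized full conditional via griddy Gibbs.

    log_cond : callable returning the log of the unnormalized conditional density
    lo, hi   : assumed support (grid range) for the target parameter
    """
    rng = rng or np.random.default_rng()
    grid = np.linspace(lo, hi, n_grid)
    logp = np.array([log_cond(g) for g in grid])
    w = np.exp(logp - logp.max())          # subtract the max for numerical stability
    probs = w / w.sum()                    # normalized weights over grid points
    # Sample a grid point, then jitter uniformly within its cell.
    idx = rng.choice(n_grid, p=probs)
    half = (hi - lo) / (n_grid - 1) / 2.0
    return float(np.clip(grid[idx] + rng.uniform(-half, half), lo, hi))

# Example: draw from an unnormalized N(1, 0.5^2) conditional.
draw = griddy_gibbs_draw(lambda t: -0.5 * ((t - 1.0) / 0.5) ** 2, lo=-2.0, hi=4.0)
```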

  13. Simulation • We are mostly concerned with how the estimate of the treatment effect changes as the missingness-pattern parameters change. We fix the treatment effect at −0.5, set the other parameters, simulate data according to the model, and estimate the parameters 10 times, starting from different random initial values.

  14. Application to smoking cessation data • The dataset contains the number of cigarettes smoked, reported once a week by 172 smokers during a 12-week clinical trial. There are a few intermittent missing values and some dropouts. • We are mostly concerned with the treatment effect. To evaluate the performance of our algorithm, we randomly generated a given percentage of intermittent missingness in the data 20 times and re-estimated the parameters (a masking sketch follows this slide).
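
A minimal sketch of how observed entries might be randomly masked as intermittent missing for this evaluation; the percentage and the 0/1 coding of R are assumed conventions.

```python
import numpy as np

def add_intermittent_missingness(R, pct, rng):
    """Randomly mark `pct` of currently observed entries (R == 0) as
    intermittent missing (coded 1 here, an assumed convention)."""
    R = R.copy()
    obs_idx = np.argwhere(R == 0)
    n_mask = int(round(pct * len(obs_idx)))
    chosen = obs_idx[rng.choice(len(obs_idx), size=n_mask, replace=False)]
    R[chosen[:, 0], chosen[:, 1]] = 1
    return R

rng = np.random.default_rng(1)
# e.g. repeat the masking 20 times at 10% and re-fit the model each time:
# R_masked = add_intermittent_missingness(R, pct=0.10, rng=rng)
```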

  15. Future work • Extend the model to cases where the repeated measure follows a continuous distribution, e.g. the normal distribution.

  16. References • Gilks, W. R. and Wild, P. (1992). Adaptive rejection sampling for Gibbs sampling. Applied Statistics, 41, 337-348. • Ritter, C. and Tanner, M. A. (1992). Facilitating the Gibbs sampler: the Gibbs stopper and the griddy-Gibbs sampler. Journal of the American Statistical Association, 87, 861-868. • Rubin, D. B. (1976). Inference and missing data. Biometrika, 63, 581-592. • Little, R. J. A. and Rubin, D. B. (2002). Statistical Analysis with Missing Data, 2nd edition. New York: John Wiley.
