This study examines the impact of researchers' choices on person-centered clinical analyses of ecological momentary assessment (EMA) data. The research questions concern researchers' confidence in person-centered EMA analyses, the variation in analytical approaches, and the degree to which outcomes depend on those analytical choices.
1 of 19 Time to get personal? The impact of researchers’ choices on clinical person-centered analyses
Kunkels, Y. K. 1*, Bastiaansen, J. A. 1,2, Albers, C. J. 3, Bringmann, L. F. 1,3
*Corresponding / presenting author (y.k.kunkels@umcg.nl)
1 Interdisciplinary Center Psychopathology and Emotion regulation, Department of Psychiatry, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; 2 Department of Education and Research, Friesland Mental Health Care Services, Leeuwarden, The Netherlands; 3 Department of Psychology, University of Groningen, Groningen, The Netherlands
2 of 19 What is Ecological Momentary Assessment (EMA)?
Intensive, longitudinal research method with multiple measurements over time.
Advantages:
• Dataset rich in information
• See trajectories of diseases / disorders
• Better ecological validity
Therefore, many calls to use EMA in clinical practice.
3 of 19 Research Questions
1. Are researchers confident that person-centered EMA analyses are ready for use in clinical practice?
2. How much do researchers vary in their analytical approach towards individual time series data?
3. To what degree do outcomes vary based on analytical choices?
4 of 19 Procedure
1 dataset, 1 research question, multiple research teams.
Teams decided on their own analytical approach to answer the following research question:
“What symptom(s) would you advise the treating clinician to target subsequent treatment on, based on person-centered analysis of this particular patient’s EMA data?”
5 of 19 Procedure
• Experts were selected for expertise in EMA and/or statistical analysis of individual time-series data.
• Experts were free to compose their own research team.
• 12 teams submitted a report and the accompanying script.
6 of 19 Dataset Specification
Data are from an ongoing multiphase personalised psychotherapy study (Fisher & Boswell, 2016). One dataset was selected based on the following criteria:
• Primary diagnosis of MDD
• At least 100 time points, with some missingness
The selected subject (ID3) was a Caucasian 25-year-old male with a primary diagnosis of MDD and comorbid GAD. The dataset was selected to be representative of the field.
7 of 19 Dataset Specification
23 EMA items related to mood & anxiety, administered 4 times a day for at least 30 days prior to therapy, at random beeps with at least 30 min. between them.
Items were scored on a scale from 0 (not at all) to 100 (as much as possible).
[Figure: the Down and Tension items plotted over time, with missing measurements indicated; the y-axis shows the item score and the x-axis the measurement time points.]
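As a point of reference for the analyses on the following slides, below is a minimal sketch of how such an EMA dataset could be laid out. The item names are hypothetical placeholders (the actual study used 23 mood and anxiety items); this is not the study's real data.

```python
import numpy as np
import pandas as pd

# Hypothetical wide-format EMA layout: one row per beep, one column per item.
# Items are scored 0-100; missed beeps show up as NaN cells.
rng = np.random.default_rng(0)
n_beeps = 30 * 4  # ~30 days x 4 beeps per day
item_cols = ["down", "tension", "energetic"]  # placeholder names, not the real item set

ema = pd.DataFrame(
    {col: rng.integers(0, 101, n_beeps).astype(float) for col in item_cols}
)
ema.insert(0, "beep", range(n_beeps))
# Simulate some missed beeps.
ema.loc[rng.choice(n_beeps, size=15, replace=False), item_cols] = np.nan
print(ema.head())
```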
8 of 19 Results Overview
1. Pre-processing
 • Selection of variables
 • Clustering
 • Missing data
2. Inferential statistics
3. Target selection
4. Evaluation
9 of 19 Results - Selection of Variables
10 out of 12 teams included all variables except sleep.
• One team excluded the “avoid activities” item due to low within-person variability (SD < 10% of the scale; Brose & Ram, 2012; Zevon & Tellegen, 1982).
• They also noticed that the “tension” item might have changed over the course of the study: it was first positively, but later negatively, correlated with the positive affect items.
• The “tension” item also loaded on different clusters and was thus not a well-discriminating item.
However, most teams did not explicitly check item validity.
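A minimal sketch of this kind of variability check, assuming the 0-100 scale (so the cut-off is SD < 10) and reusing the hypothetical `ema` data frame from the sketch after slide 7; it is not the team's actual code.

```python
# Flag items whose within-person standard deviation is below 10% of the 0-100 scale.
sds = ema[item_cols].std()
low_variability = sds[sds < 10].index.tolist()
print("Items that would be excluded:", low_variability)
```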
10 of 19 Results - Clustering
• 9 out of 12 teams performed some sort of clustering.
• 7 teams performed statistical clustering (e.g. E(D)FA / C(D)FA / PCA).
• 2 teams clustered based on theoretical reasoning.
The number of clusters also varied (range: 0 - 9).
However, teams sometimes gave identical names to clusters (e.g. “Positive Affect”) while the items in these clusters varied considerably.
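The statistical clustering procedures differed per team; as one illustrative approach (not a reproduction of any team's analysis), a PCA-based grouping of items could be sketched as follows, reusing the hypothetical `ema` data frame and `item_cols` from the sketch after slide 7.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Standardise the items and extract a few components as candidate "clusters"
# (complete cases only, for simplicity).
X = ema[item_cols].dropna()
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
pca.fit(X_std)

# Items loading strongly on the same component could be grouped into one cluster.
loadings = pd.DataFrame(pca.components_.T, index=item_cols, columns=["comp_1", "comp_2"])
print(loadings.round(2))
```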
11 of 19 Results - Clustering: Variation in Cluster Content
[Figure: example of how the items assigned to the “Positive Affect” (PA) cluster differed across teams.]
12 of 19 Results - Handling Missing Data
• How missingness was handled was often not explicitly stated.
• 7 teams used listwise deletion.
• 2 teams used Bayesian methods (Kalman filter) to account for missing values.
• 1 team used smoothing techniques, such as LOESS and regression splines, which smoothed over the missing time points.
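A hedged sketch of two of these strategies on the hypothetical `ema` data frame from slide 7: listwise deletion, and simple interpolation as a stand-in for the smoothing-based approaches (the Kalman-filter approach is not reproduced here).

```python
# Strategy 1: listwise deletion - drop every beep with at least one missing item score.
ema_listwise = ema.dropna(subset=item_cols)

# Strategy 2: interpolate over missing beeps, as a simple stand-in for
# smoothing-based approaches such as LOESS or regression splines.
ema_interp = ema.copy()
ema_interp[item_cols] = ema_interp[item_cols].interpolate(limit_direction="both")

print(len(ema), len(ema_listwise), int(ema_interp[item_cols].isna().sum().sum()))
```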
13 of 19 Results - Inferential Statistics
Most teams used VAR-based analysis methods. A vector autoregressive (VAR) model is a multivariate time-series model in which each variable is explained by its own past values and by the past values of all other variables in the system.
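In equation form, a standard VAR(1) specification (illustrative; not a reproduction of any team's exact model) reads:

```latex
y_t = c + \Phi_1 \, y_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma),
```

where y_t is the vector of item (or cluster) scores at beep t, c is a vector of intercepts, the diagonal of \Phi_1 holds the autoregressive effects, the off-diagonal entries hold the cross-lagged effects, and \varepsilon_t are the innovations whose covariance \Sigma captures the contemporaneous effects.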
14 of 19 Results - Inferential Statistics
4 teams looked at the means of the emotions and possible changes therein. 5 teams studied the contemporaneous effects. 11 out of 12 teams studied the dynamic lagged effects.
• Mostly the lag-1 relations were studied.
• Only two teams included lag-2 relations.
• Even though some teams followed near-identical procedures, none of them obtained the same autoregressive and cross-lagged effects.
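A minimal sketch of fitting such a lagged model with statsmodels, using the interpolated hypothetical data (`ema_interp`, `item_cols`) from the missing-data sketch above; the teams themselves used a variety of implementations.

```python
from statsmodels.tsa.api import VAR

# Fit a VAR on the (interpolated) item series and let AIC choose between lag 1 and lag 2.
model = VAR(ema_interp[item_cols])
results = model.fit(maxlags=2, ic="aic")

print("Selected lag order:", results.k_ar)
# results.coefs has shape (n_lags, n_items, n_items):
# diagonal entries are autoregressive effects, off-diagonal entries are cross-lagged effects.
print(results.coefs.round(2))
```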
15 of 19 Results - Targets for Intervention
Teams reported a wide range of targets for intervention, and the number of targets per team also varied widely (range: 0 - 16).
16 of 19 Results - Evaluation
Confidence in one's own approach and in other teams obtaining the same results.
17 of 19 Results - Evaluation
Confidence in the usefulness of the obtained information and in the readiness of person-centered analyses for clinical practice.
18 of 19 Conclusion & Discussion
“If everyone comes up with the same result, then scientists can speak with one voice. If not, the subjectivity and conditionality on analysis strategy is made transparent” (Silberzahn)
• There is a wide variety in reported targets for intervention, which seem conditional on the employed analytical approach.
• Perhaps consensus guidelines and standardisation of analysis steps could improve robustness to analytical choices before person-centered EMA analyses can be used as a tool for clinicians and patients.
19 of 19 Thanks for your attention!
Acknowledgements
Frank Blaauw, Steven Boker, Eva Ceulemans, Meng Chen, Sy-Miin Chow, Peter de Jonge, Ando Emerencia, Sacha Epskamp, Aaron Fisher, Ellen Hamaker, Peter Kuppens, Wolfgang Lutz, Joseph Meyer, Robert Moulder, Zita Oravecz, Harriëtte Riese, Julian Rubel, Oisín Ryan, Michelle Servaas, Gustav Sjobeck, Evelien Snippe, Timothy Trull, Wolfgang Tschacher, Date van der Veen, Marieke Wichers, Phillip Wood, William Woods, Aidan Wright
This project was initiated by the iLab Psychiatry, Groningen, the Netherlands (https://ilab-psychiatry.nl/en_US/)
References
Brose, A., & Ram, N. (2012). Within-person factor analysis: Modeling how the individual fluctuates and changes across time. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 459-478). New York: Guilford Press.
Fisher, A., & Boswell, J. (2016). Enhancing the Personalization of Psychotherapy With Dynamic Assessment and Modeling. Assessment, 23(4), 496-506. doi: 10.1177/1073191116638735
Nelson, B., McGorry, P., Wichers, M., Wigman, J., & Hartmann, J. (2017). Moving From Static to Dynamic Models of the Onset of Mental Disorder. JAMA Psychiatry, 74(5), 528. doi: 10.1001/jamapsychiatry.2017.0001
Zevon, M. A., & Tellegen, A. (1982). The structure of mood change: An idiographic/nomothetic analysis. Journal of Personality and Social Psychology, 43(1), 111-122. http://dx.doi.org/10.1037/0022-3514.43.1.111
Results - Evaluation: Assessment of the suitability of the dataset.
Conclusion & Discussion - Future Directions
This was the first initiative to assess the robustness of outcomes to subjective analytical choices for a single individual time-series EMA dataset. Future studies could focus on studying:
• a wider variety of datasets with different measurement frequencies;
• the same dataset with different amounts of stochastic noise (simulations);
• different types of individual time-series data (heart rate, physical activity, etc.).
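A minimal sketch of the noise-based robustness check mentioned above (hypothetical: perturb the observed series with Gaussian noise of increasing magnitude and re-run a chosen analysis pipeline, reusing `ema_interp` and `item_cols` from the earlier sketches).

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb(df, cols, sd):
    """Return a copy of the EMA data with Gaussian noise (standard deviation sd) added to the items."""
    noisy = df.copy()
    noisy[cols] = (noisy[cols] + rng.normal(0.0, sd, size=noisy[cols].shape)).clip(0, 100)
    return noisy

# Re-run the same analysis pipeline (e.g. the VAR sketch above) at increasing noise levels
# and compare how stable the selected intervention targets remain.
for sd in (0, 5, 10, 20):
    noisy_ema = perturb(ema_interp, item_cols, sd)
    # ... fit the model on noisy_ema, select targets, and record the outcome ...
```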