
Presentation Transcript


  1. IDR Snapshot: Quantitative Assessment Methodology Evaluating Size and Comprehensiveness of an Integrated Data Repository
     Vojtech Huser, MD PhD, Laboratory for Informatics Development, NIH Clinical Center

  2. Research question
     • How can you evaluate an IDR?
     • What makes a good IDR (for research)?
     • IDR A (in 2007) vs. IDR A (in 2012)
     • IDR A vs. IDR B
     • Analogy from other domains: comparing countries based on GDP per person, adjusted for purchasing power parity (PPP)

  3. http://code.google.com/p/idrsnapshot/

  4. Motivation / Assumptions
     • Improving the IDR, while acknowledging its a priori limitations
     • Ideal IDR for a researcher versus the existing data
     • General measures at the whole-IDR level, not specific to any research project
     • Builds on IDR surveys from 2007, 2008, and 2010

  5. Examples
     • Academic medical center with a limited number of outpatient clinics (University of Utah)
     • Integrated delivery network with outpatient and inpatient records (Partners Healthcare)
     • HMO with a health plan component (Kaiser Permanente Southern California)

  6. Target level (researcher-facing schema)
     • Level 1: Epic Clarity (6,000+ tables)
     • Level 2: Clinical Data Repository (adds data from GE Centricity outpatient records, plus other sources)
     • Level 3: subset of the CDR for health plan members only
     • Level 4: i2b2, VDW, OMOP, or another researcher-facing schema (DEDUCE, FURTHeR, BTRIS)

  7. Beyond core data sources
     • Core: diagnoses, procedures, labs
     • Cost data
     • ADT (admission/discharge/transfer) data (admissions, ICU)
     • Visit data, including the specialty of the encounter clinician
     • Clinical document types (e.g., biopsy, well-child visit note, bone mineral density report)
     • Actual document text on top of the document type
     • PHR usage data

  8. Less-common data sources
     • Insurance history data
     • Over-the-counter drug data
     • Death certificate data
     • Links to other sources
     • Pharmacy dispensing data
     • Out-of-network claims data
     • Health plan data
     • Health Assessment questionnaire data
     • Out-of-network pharmacy refill data

  9. How can we measure all this?

  10. What is a good measure?
     • Amount of common types of data, since many IDRs contain such data (e.g., diagnoses vs. tumor registry data)
     • Count of unique patients
     • Features desired by researchers, e.g., a complete record (claims data, pharmacy refills) for comparative effectiveness research (CER)
     • Expert consensus
     • Mixed approach

  11. Measures
     • Analogy: Glasgow Coma Scale of 10 vs. 5; Apgar score of 7 vs. 5
     • A good measure:
       • is intuitive to interpret (e.g., count of patients)
       • facilitates monitoring and improvement
       • does not place any arbitrary value on individual measure components (e.g., the value of 10 years of medication history vs. 10 years of weight/height history); see the sketch below
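
A minimal sketch of what such unweighted, component-wise reporting could look like; the measure names and values are hypothetical illustrations, not the actual version-1 measure set:

```python
# Hypothetical IDR snapshot measure vectors (names and values illustrative only).
# Each measure is reported side by side; no composite score is computed,
# so no arbitrary weight is placed on any individual component.
idr_a_2007 = {"unique_patients": 250_000,
              "years_of_medication_history": 6,
              "years_of_weight_height_history": 9}
idr_a_2012 = {"unique_patients": 410_000,
              "years_of_medication_history": 11,
              "years_of_weight_height_history": 14}

for measure in idr_a_2007:
    a, b = idr_a_2007[measure], idr_a_2012[measure]
    print(f"{measure}: {a} -> {b} ({'+' if b >= a else ''}{b - a})")
```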

  12. IDR v1: Marshfield Clinic (case study)
     • Date handling shown on the slide: (Event - DOB) + ‘3000-01-01’, i.e., each event date is expressed as the patient’s age at the event offset from a fixed anchor date
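
A minimal sketch of the date arithmetic implied by the slide’s formula, assuming it denotes a de-identifying date shift (the function name and sample dates are illustrative):

```python
from datetime import date

ANCHOR = date(3000, 1, 1)  # the fixed anchor date from the slide's formula

def shift_event_date(event: date, dob: date) -> date:
    """(Event - DOB) + '3000-01-01': replace a real event date with the
    patient's age at the event offset from a fixed anchor, hiding the
    calendar date while preserving intervals between events."""
    return ANCHOR + (event - dob)

dob = date(1960, 5, 10)
e1, e2 = date(2005, 5, 10), date(2005, 6, 10)
s1, s2 = shift_event_date(e1, dob), shift_event_date(e2, dob)
print(s1, s2, (s2 - s1).days)  # shifted dates near year 3045; interval stays 31 days
```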

  13. IDR case study 1 (Marshfield Clinic)
     • Table generation took 2.5 hours
     • Size: 43 GB (includes some additional information)
     • Initial set of measures (G1-2, D1-4, L1-2)

  14. “Consortia made easy(ier)”

  15. Case study 2 (NIH IDR)

  16. IDR Snapshot (past and future)
     • Initial set of measures defined (version 1)
     • Applied at a single institution (Marshfield Clinic)
     • Used at a second institution (NIH, BTRIS)
     • Scripts running against the i2b2 schema (version 2); see the sketch below
     • Third institution potentially using it (in IRB stage)
     • ------- “you are here” -------
     • Broader panel of experts developing version 3
     • Potentially a direct i2b2 plugin
     • Addition of other aspects: a direct qualitative component, an institutional component
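
The deck does not include the version-2 scripts themselves; below is a minimal runnable sketch, assuming an i2b2-style CRC star schema (observation_fact and concept_dimension are standard i2b2 tables; the patient-count-per-domain measure and the toy data are illustrative):

```python
import sqlite3

# Toy stand-in for two tables of the standard i2b2 CRC star schema.
conn = sqlite3.connect(":memory:")
conn.executescript(r"""
CREATE TABLE concept_dimension (concept_cd TEXT, concept_path TEXT);
CREATE TABLE observation_fact  (patient_num INTEGER, concept_cd TEXT, start_date TEXT);
INSERT INTO concept_dimension VALUES
  ('ICD9:250.00',  '\Diagnoses\Endocrine\Diabetes\'),
  ('LOINC:2345-7', '\Labs\Chemistry\Glucose\');
INSERT INTO observation_fact VALUES
  (1, 'ICD9:250.00',  '2010-01-05'),
  (2, 'ICD9:250.00',  '2011-03-02'),
  (2, 'LOINC:2345-7', '2011-03-02');
""")

# Hypothetical measure: unique patients per top-level concept domain,
# taken as the leading '\Domain\' prefix of concept_path.
query = r"""
SELECT substr(cd.concept_path, 1, instr(substr(cd.concept_path, 2), '\') + 1) AS domain,
       COUNT(DISTINCT obs.patient_num) AS unique_patients
FROM observation_fact obs
JOIN concept_dimension cd ON cd.concept_cd = obs.concept_cd
GROUP BY domain;
"""
for domain, n in conn.execute(query):
    print(domain, n)  # e.g.  \Diagnoses\ 2   \Labs\ 1
```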

  17. http://code.google.com/p/idrsnapshot/

  18. Next steps
     • Recruit IDR experts who would like to participate on the expert panel defining the measures (version 3)
     • Recruit sites willing to use it and potentially participate in a joint publication presenting the data (as institution A, B, C, D…)
     • IRB template available: (a) a data-only IRB option, or even (b) under the IDR’s master IRB as a fast-track mechanism
     • If interested:
       • email: vojtech.huser@nih.gov
       • stop by the AMIA Joint Summits poster (March 2012): IDR Snapshot: Quantitative Assessment Methodology Evaluating Size and Comprehensiveness of an Integrated Data Repository (Huser V)
     • Thank you for your attention
