
Evaluating Medical Grand Rounds – 10 Years Later


Presentation Transcript


  1. Evaluating Medical Grand Rounds – 10 Years Later Dr. Mary J. Bell, Christian Base, Edmund Lorens SACME Spring 2011 New York, NY

  2. Disclosure • No apparent conflict of interest

  3. Learning Objectives Participants will be able to: • Consider a reliable method of rounds presenter evaluation • Understand presenter characteristics most predictive of overall evaluation scores • Compare live and videoconferenced rounds feedback

  4. Introduction & Background • Grand Rounds plays a major educational role at academic medical centres • Reasons to attend: • Food • Licensure • Competent practice • Social networking • Entertainment • Professional commitment to lifelong learning • Tradition • Habit

  5. Introduction & Background • RCPSC introduced the MOC program in 2000 • Rounds accreditation criteria: • Planning Committee • Learning Strategies • regularly occurring • learning objectives • variety of learning formats • minimum of 25% of time interactive • Evaluation • Ethical Guidelines • U. of T. Department of Medicine (DOM) developed a standardized evaluation method

  6. Methods • Evaluation form developed • Reliability of the evaluation method initially estimated • Data collected over 10 years • Reliability of the evaluation method then determined empirically • Changes over time determined • Subanalyses to determine the impact of educational format changes

  7. The Evaluation Form. The presenter: • Provided objectives • Demonstrated thorough knowledge • Clear and organized • Stimulated enthusiasm • Appropriate level of detail • Effective visuals • Effective presentation style • 25% of session interactive • Good audience rapport • Invited questions and participation (each scored on a 1-5 scale, Strongly agree – Strongly disagree) • Overall presenter evaluation (scored 1-5, Outstanding – Unsatisfactory)
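
  As a point of reference for the analyses that follow, here is a minimal Python sketch of how one round's forms might be represented and aggregated into by-round item means. The item keys Q1-Q10 and the field name "overall" are assumptions for illustration (though the numbering matches the Q-labels used on the stepwise-regression slide); this is not the study's actual data model.

      from statistics import mean

      # Ten presenter items, each rated 1-5 (Strongly agree - Strongly disagree);
      # the separate overall evaluation is rated 1-5 (Outstanding - Unsatisfactory).
      ITEMS = {
          "Q1": "Provided objectives",
          "Q2": "Demonstrated thorough knowledge",
          "Q3": "Clear and organized",
          "Q4": "Stimulated enthusiasm",
          "Q5": "Appropriate level of detail",
          "Q6": "Effective visuals",
          "Q7": "Effective presentation style",
          "Q8": "25% of session interactive",
          "Q9": "Good audience rapport",
          "Q10": "Invited questions and participation",
      }

      def round_means(forms):
          """Aggregate one round's forms (a list of dicts) into by-round means."""
          keys = list(ITEMS) + ["overall"]
          return {k: mean(f[k] for f in forms) for k in keys}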

  8. Rothman & Sibbald, JCEHP 2002

  9. Rothman & Sibbald, JCEHP 2002 • Most presenters of grand rounds are rated in a narrow range. • Ranking individual presentations requires exceptionally high precision. • Separation into groups (quartiles) requires less precision. • This type of classification appears sufficient to enable planning decisions.
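
  The quartile grouping described above can be sketched in a few lines. The presenter labels, mean scores, and use of pandas below are hypothetical illustrations, not the study's code.

      import pandas as pd

      # Hypothetical by-presenter mean scores on the study's 1-5 scale
      # (1 = strongly agree on the form's item scale).
      scores = pd.Series({"A": 1.6, "B": 1.4, "C": 2.1, "D": 1.2,
                          "E": 1.8, "F": 1.5, "G": 2.3, "H": 1.7})

      # Coarse grouping into quartiles of the score distribution; this needs
      # far less measurement precision than ranking individual presentations.
      quartiles = pd.qcut(scores, 4, labels=["q1", "q2", "q3", "q4"])
      print(quartiles.sort_values())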

  10. 2010 Retrospective • ~51,000 evaluation forms 2002-2009 • Intra-Round dispersion of mean Presenter Effectiveness Score (PES) was assessed against number of forms • Intra-class correlations calculated as a function of the number of forms. • Inter-item correlations measured using Spearman r.
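
  A rough sketch of the two statistics named here, assuming a one-way random-effects ICC with rounds as targets and forms as interchangeable raters; the slide does not specify the exact ICC model, and all data below are hypothetical.

      import numpy as np
      from scipy.stats import spearmanr

      def icc1(groups):
          """One-way random-effects ICC(1,1) over per-round score arrays."""
          groups = [np.asarray(g, float) for g in groups]
          n = len(groups)                      # rounds (targets)
          N = sum(len(g) for g in groups)      # total forms
          k = N / n                            # average forms per round (sketch)
          grand = np.concatenate(groups).mean()
          ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
          ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
          msb, msw = ssb / (n - 1), ssw / (N - n)
          return (msb - msw) / (msb + (k - 1) * msw)

      # Hypothetical PES ratings from three rounds:
      print(icc1([[1, 2, 1, 1], [3, 2, 3, 3], [2, 2, 1, 2]]))

      # Inter-item association, e.g. Style vs. Overall (hypothetical vectors):
      rho, p = spearmanr([1, 3, 2, 1, 4], [2, 3, 2, 1, 4])
      print(rho, p)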

  11. Events & Forms by Format & Year

  12. Intra-Round Dispersion of Mean PES: Distribution of Standard Deviations (relative to number of forms).

  13. Empirically observed relationship of calculated ICCs to number of forms
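
  The theoretical curve the empirical ICCs are compared against is the Spearman-Brown prophecy formula, reliability(k) = k*r1 / (1 + (k-1)*r1) for the mean of k forms with single-form reliability r1. The r1 value below is an assumption, chosen only so that roughly 22 forms reach .81 as reported on the conclusions slide.

      # Spearman-Brown prophecy: reliability of the mean of k forms.
      def spearman_brown(r1, k):
          return k * r1 / (1 + (k - 1) * r1)

      r1 = 0.162  # assumed single-form ICC (illustrative, not from the data)
      for k in (5, 10, 22, 40, 80):
          print(k, round(spearman_brown(r1, k), 2))
      # Prints 0.49, 0.66, 0.81, 0.89, 0.94: the arc of diminishing
      # returns as the number of forms per round grows.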

  14. Item-Overall partial correlations suggested from stepwise linear regression: model vs. mean of excluded items & mean of all items (r_Pearson = .927; dependent variable: mean of Overall, aggregated by Round event). Suggested (descending) order: Q7_mean (Style), Q5_mean (Level), Q4_mean (Enthusiasm), Q9_mean (Rapport), Q2_mean (Knowledge), Q1_mean (Objectives), Q8_mean (Interactivity).

  15. Order of predictor variables to overall presenter evaluation • Style • Level • Enthusiasm • Rapport • Knowledge • Objectives • Interactivity
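
  A minimal sketch of how such an ordering can be obtained, using a simplified forward-only stepwise selection ranked by incremental R2. The column names, synthetic data, and use of statsmodels are assumptions for illustration, not the study's pipeline (which also examined partial correlations and an excluded-items comparison).

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def forward_stepwise(df, target, candidates):
          """Greedily add whichever predictor most improves R^2 at each step."""
          chosen, order = [], []
          while candidates:
              fit_r2 = lambda c: sm.OLS(
                  df[target], sm.add_constant(df[chosen + [c]])).fit().rsquared
              best = max(candidates, key=fit_r2)
              chosen.append(best)
              order.append(best)
              candidates.remove(best)
          return order

      # Synthetic by-round aggregated means with a known predictor ordering:
      rng = np.random.default_rng(0)
      df = pd.DataFrame(rng.normal(2.0, 0.4, size=(300, 4)),
                        columns=["Q7_mean", "Q5_mean", "Q4_mean", "Q9_mean"])
      df["overall_mean"] = (0.6 * df["Q7_mean"] + 0.3 * df["Q5_mean"]
                            + 0.1 * df["Q4_mean"] + rng.normal(0, 0.1, 300))
      print(forward_stepwise(df, "overall_mean",
                             ["Q7_mean", "Q5_mean", "Q4_mean", "Q9_mean"]))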

  16. Data Transformation of by-Round Aggregated Means: Example regarding Q1 (Provided Objectives)
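
  The transformation itself is not preserved in this transcript. As one plausible example only (an assumption, not the study's documented choice), a log transform for by-round Q1 means that cluster near the favorable end of the 1-5 agreement scale:

      import numpy as np

      # Hypothetical by-round means for Q1 (Provided objectives); on a scale
      # where 1 = strongly agree, means bunch near 1 with a tail toward 5.
      q1_means = np.array([1.3, 1.5, 1.2, 2.4, 1.4, 1.8, 1.3])

      # A log transform compresses the tail and pulls the distribution
      # toward symmetry before regression on the aggregated means.
      q1_log = np.log(q1_means)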

  17. Both regression models (transformed vs. untransformed) were consistent as to the suggested top predictors, indicating robustness to normality concerns. Best R2 estimated at around .859 (dependent variable: mean of Overall, aggregated by Round event).

  18. Conclusions • Approximately 22 forms are needed to achieve a reliability of .81; the arc of diminishing returns was also apparent in the data, as had been theoretically estimated. • Both regression models consistently suggest the same rank order of the top four predictors based on partial correlations: Style, Appropriate Level, Stimulation of Enthusiasm, & Establishing Rapport. • The relatively greatest improvements over 2000-2009 were in Inviting Questions, Provision of Objectives, and Interactivity; Demonstration of Knowledge was consistently high from 2000-2009 (with the least relative improvement). Overall improvements over time were statistically significant. • CWMGR scored consistently lower than MGRs on all scales, 2006-2009 (small effect). • Based on the best R2 estimate, the discerned predictors appear to account for about 86% of the variance in Overall ratings.

  19. Limits & Lessons Learned • Single-centre data • Redundant questions • Some potential predictors of PES not assessed, e.g.: • Interest in topic • Technology
