The Art and Science of Resident Assessment: What Medicine Can Learn from Figure Skating
Learning Objectives • Describe common rater errors - halo, leniency, undifferentiation, and range restriction • Identify biases that lead to rater error • Implement strategies to reduce rater error and improve rating effectiveness No conflicts of interest to disclose
Flaws in the System • Judges unaware of what to observe/untrained • Not given criteria • No frame of reference • Ratings based on global impression • Influenced by emotion, mood, fatigue • Ratings based on comparison rather than standards
Common Rater Errors • Halo error • Leniency error • Undifferentiation • Range restriction
Halo error • Thomas et al.: IM residents at Mayo Clinic, Rochester, were evaluated in 7 distinct domains; inter-item correlation was high (0.68)* *Thomas MR, Beckman TJ, Mauck KF, Cha SS, Thomas KG. Group assessments of resident physicians improve reliability and decrease halo error. J Gen Intern Med. 2011;26(7):759-64.
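To make the halo-error check concrete, here is a minimal Python sketch of how one could compute the mean inter-item correlation across rating domains. The data, domain names, and scale are hypothetical illustrations, not the Mayo data set; the point is only that ratings driven by one global impression produce high correlations between nominally distinct domains.

```python
import numpy as np
import pandas as pd

# Hypothetical end-of-rotation ratings: one row per evaluation, one column
# per assessment domain (illustrative data only, not the cited study's data).
rng = np.random.default_rng(0)
overall_impression = rng.normal(7, 1, size=200)   # a single "halo" signal
domains = ["knowledge", "patient_care", "professionalism",
           "communication", "systems_practice", "pbli", "procedures"]
ratings = pd.DataFrame(
    {d: np.clip(overall_impression + rng.normal(0, 0.5, 200), 1, 9).round()
     for d in domains}
)

# Mean inter-item correlation: average of the off-diagonal entries of the
# domain-by-domain correlation matrix. Values approaching 1.0 suggest raters
# are recording one global impression rather than seven distinct judgments.
corr = ratings.corr()
off_diagonal = corr.values[~np.eye(len(domains), dtype=bool)]
print(f"Mean inter-item correlation: {off_diagonal.mean():.2f}")
```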
Leniency error • Schwind et al.: only 13 of 1,986 evaluations of surgery residents indicated a deficit; among residents who received a negative end-of-year decision, most received ratings of “outstanding,” “very good,” or “good” on rotation evaluations* *Schwind CJ, Williams RG, Boehler ML, Dunnington GL. Do individual attendings’ post-rotation performance ratings detect residents’ clinical performance deficiencies? Acad Med. 2004;79(5):453-57.
Undifferentiation • Silber et al.: 1,367 residents at 2 IM programs were evaluated with a 23-item global rating form encompassing all 6 ACGME competencies; factor analysis showed faculty rated residents in only 2 domains, medical knowledge and interpersonal & communication skills* *Silber CG, Nasca TJ, Paskin DL, Eiger G, Robeson M, Veloski JJ. Do global rating forms enable program directors to assess the ACGME competencies? Acad Med. 2004;79(6):549-56.
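The "how many dimensions are raters actually using?" question can be approximated with a simple eigenvalue check on the item correlation matrix (the Kaiser eigenvalue-greater-than-one rule). The sketch below uses simulated 23-item data built around two latent dimensions; it is an illustrative assumption-based example, not a reproduction of the Silber analysis.

```python
import numpy as np

# Simulate a 23-item global rating form in which raters actually use only two
# underlying dimensions (illustrative stand-ins for "knowledge" and
# "interpersonal/communication skills"); purely hypothetical data.
rng = np.random.default_rng(1)
n_residents, n_items = 1367, 23
latent = rng.normal(size=(n_residents, 2))          # two underlying judgments
loadings = np.zeros((2, n_items))
loadings[0, :12] = rng.uniform(0.6, 0.9, 12)        # items 1-12 track dimension 1
loadings[1, 12:] = rng.uniform(0.6, 0.9, 11)        # items 13-23 track dimension 2
items = latent @ loadings + rng.normal(0, 0.5, size=(n_residents, n_items))

# Kaiser criterion: count eigenvalues of the item correlation matrix above 1.
# A 23-item form that yields only 2 such eigenvalues behaves like a
# 2-dimensional instrument, whatever its item labels say.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
print(f"Factors suggested (eigenvalue > 1): {(eigenvalues > 1).sum()}")
```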
Range restriction • Silber et al.: 72% of residents were rated “above expected” or “outstanding” for professional ethical standards; 73% for empathy* *Silber CG, Nasca TJ, Paskin DL, Eiger G, Robeson M, Veloski JJ. Do global rating forms enable program directors to assess the ACGME competencies? Acad Med. 2004;79(6):549-56.
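Range restriction (and the leniency pattern in the Schwind data above) can be screened for directly by examining how ratings cluster at the top of the scale. A minimal sketch, assuming a hypothetical 1-5 rating scale and made-up values:

```python
import pandas as pd

# Hypothetical rotation ratings on a 1-5 scale (5 = outstanding);
# illustrative values only, not data from the cited studies.
ratings = pd.Series([5, 4, 5, 5, 4, 5, 3, 5, 4, 5, 5, 4, 5, 5, 4, 5])

top_two_share = (ratings >= 4).mean()     # concentration in the top two categories
scale_points_used = ratings.nunique()     # how much of the scale is used at all
print(f"Share rated 4 or 5: {top_two_share:.0%}")
print(f"Scale points used: {scale_points_used} of 5")
print(f"Standard deviation: {ratings.std():.2f}")
```

A program could run the same three summaries on each item of its own evaluation form; a high top-two share, few scale points used, and a small standard deviation together signal that the instrument is not discriminating among residents.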
OLYMPICS: FIGURE SKATING; Canadian Skaters Awarded Share of Olympic Gold; French Judge Suspended, Her Scoring Thrown Out
Milestones and the Dreyfus Model Dreyfus SE, Dreyfus HL. Formal versus Situational Models of Expert Decision-Making. University of California, Berkeley, Operations Research Center; April 1981.
Rater Error and Unconscious Bias Cognitive • Memory distortion - recall is impacted quickly • Attention bias toward only two dimensions - clinical skills and professionalism • Emotions/mood/stress influence whether attention is placed on pleasant or unpleasant events and whether a broad or narrow, incident or trend field is chosen Williams RG, Klamen DA, McGaghie WC. Cognitive, social and environmental sources of bias in clinical performance ratings. Teach Learn Med. 2003;15(4):270-92.
Rater Error and Unconscious Bias Social • Age, Ethnicity, Sex • Physical attractiveness • Personal characteristics and likability • Influence of other raters’ judgments • “Impression management”- ratee discerns and adapts to raters’ preferences • Hawthorne effect - knowledge of observation changes behavior
Rater Error and Unconscious Bias Context • Time pressure and distraction - rater is engaged in competing responsibilities (teaching, supervising, observing) • Observations are “noisy” and fragmented; gaps in observations are filled with assumptions • Cases vary in difficulty and skills required; settings vary • Difficult to appraise an individual in a teamwork environment where work is delegated and shared
Adapted from Holmboe and Hawkins (2008). Practical Guide to Evaluation of Clinical Competence. Philadelphia, PA: Mosby, Inc., pp. 36-37.
Rater Errors in Clinical Competency Committees (CCCs) • Halo error • Leniency error • Undifferentiation • Range restriction
Rater Training Activity • Develops and achieves comprehensive management plan for each patient. (PC2) • Manages patients with progressive responsibility and independence. (PC3) • Clinical knowledge (MK 1) • Learns and improves at the point of care (PBLI 4) • Works effectively within an interprofessional team (e.g. peers, consultants, nursing, ancillary professionals, and other support personnel) (SBP 1) • Has professional and respectful interactions with patients, caregivers and members of the interprofessional team (e.g. peers, consultants, nursing, ancillary professionals and support personnel). (PROF1) • Communicates effectively with patients and caregivers. (ICS1)
Strategies to Reduce Error and Improve Rating • Set and create mindfulness around purpose • Know and understand milestones you will assess • Know/reflect/reduce your own unconscious bias • Use observation and objective data as much as possible • Commit to continuous improvement in rating skill – FACULTY DEVELOPMENT • Discuss ratings as a group - CLINICAL COMPETENCY COMMITTEE
What Can Be Learned From Figure Skating? • Metaphor emphasizes professional responsibility and public trust • Within a profession there are salient behaviors that must be observed and scored • Training/practice for observation of salient behaviors is required • Raters should be aware of unconscious bias • Consistencies and inconsistencies between raters should be monitored, questioned and discussed for continuous improvement in rating quality
Acknowledgements • Jaya Raj, MD, FACP and Patti M. Thorn, PhD, St. Joseph’s Hospital & Medical Center • Diana McNeil, MD, Duke University School of Medicine and Director, DUKE AHEAD • AAIM Clinical Competency Committee Collaborative Learning Community • AAIM Innovation Committee • Lauren Meade, MD, Baystate Medical Center
References • Thorndike EL. A constant error in psychological ratings. J Appl Psychol. 1920;4:25-29. • Thomas MR, Beckman TJ, Mauck KF, Cha SS, Thomas KG. Group assessments of resident physicians improve reliability and decrease halo error. J Gen Intern Med. 2011;26(7):759-64. • Schwind CJ, Williams RG, Boehler ML, Dunnington GL. Do individual attendings’ post-rotation performance ratings detect residents’ clinical performance deficiencies? Acad Med. 2004;79(5):453-57. • Silber CG, Nasca TJ, Paskin DL, Eiger G, Robeson M, Veloski JJ. Do global rating forms enable program directors to assess the ACGME competencies? Acad Med. 2004;79(6):549-56. • Holmboe E, Hawkins R. Practical Guide to Evaluation of Clinical Competence. Philadelphia, PA: Mosby, Inc.; 2008:36-37. • Williams RG, Klamen DA, McGaghie WC. Cognitive, social and environmental sources of bias in clinical performance ratings. Teach Learn Med. 2003;15(4):270-92. • Raj JM, Thorn PM. A faculty development program to reduce rater error on milestone-based assessments. J Grad Med Educ. 2014 Dec:680-85.