Mega Code
Dr. Nalini Singhal, University of Calgary, September 2006

This presentation explores the types of validity relevant to the Mega Code checklist (face, content, criterion-related, and construct) and the stages of the checklist's development. It also examines the challenges and advantages of using the checklist as a guide and as a scoring device.
Types of Validity
• Face: assessment of the instrument's appearance. On its face, does the checklist appear to measure what it is supposed to measure?
• Content: adequacy of sampling of the domains that should be examined. Does the checklist contain the content it should contain?
Types of Validity (cont'd)
• Criterion-related (predictive or concurrent): correlation between this instrument and another current instrument or a future outcome. Does performance on this checklist correlate with performance on another instrument or on some future outcome test? (See the formula below.)
• Construct: the instrument measures the hypothesized skill or ability. Does performance on this checklist vary by location of practice or by type of professional or instructor? Do professionals who follow the sequencing in the checklist have better neonatal outcomes?
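Criterion-related validity is quantified with a correlation coefficient. As a standard reference (the formula itself is not in the slides), the Pearson correlation between checklist scores $x_i$ and criterion scores $y_i$ for $n$ candidates is

$$ r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}} $$

where $\bar{x}$ and $\bar{y}$ are the mean scores; values near 0 indicate little relationship, values near ±1 a strong one.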
Challenge of Current Checklists
We do not know whether current checklists are being used to:
• Guide candidates and examiners regarding the spectrum of items to be included or considered in a resuscitation
• Score candidates to determine mastery of skill
As a Guide
Advantages:
• Comprehensive
• Thorough
Disadvantages:
• Assumes all items are equally important and must all be performed in the order presented in the checklist
As a Scoring Device
Advantages:
• Comprehensive
• Thorough list
Disadvantages:
• Provides no scoring method: cannot distinguish what was done, not done, or not applicable
• Whether item order matters has not been established
• Item weights have not been established
Stages of Development
• Experts (neonatologists) drafted a checklist (2002)
• The NRP committee reviewed and revised the checklist; face and content validation by neonatologists, nurses, and others
Stages of Development (cont'd)
• Checklist e-mailed to 8,000 instructors (2003); 822 reviewed it and provided data about:
  • Criticality of each item
  • Perceived ability to assess students on each item
Stages of Development (cont'd)
• Developed 28 video clips of Mega Code performance at 5 sites (2004). Recruited 17 instructors, each of whom reviewed 2-9 clips using the newly revised 19-item instrument (0 = not done, 1 = partially done, 2 = done) and also provided data about ease of assessment.
• Resulted in a 20-item checklist with the scale 0 = not done, 1 = partially done, 2 = done
Conclusions
• High participation levels, fairly representative of the usual participants and instructors
• Internal consistency reliability: Cronbach's alpha = .69 overall (see the sketch below for how alpha is computed)
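Cronbach's alpha summarizes how consistently the checklist items vary together across examinees. A minimal sketch of the standard computation, assuming an examinees-by-items matrix of 0/1/2 ratings (the data here are hypothetical, not from the study):

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (examinees x items) matrix of item scores."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of checklist items
    item_vars = ratings.var(axis=0, ddof=1)       # sample variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 0/1/2 ratings: 4 examinees scored on 5 items (illustration only)
scores = [[2, 2, 1, 2, 2],
          [1, 1, 0, 1, 2],
          [2, 1, 1, 2, 1],
          [0, 1, 0, 0, 1]]
print(round(cronbach_alpha(scores), 2))
```

An overall alpha of .69 sits just under the conventional .70 benchmark for acceptable internal consistency.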
Conclusions (cont'd)
• Validity:
  • Face
  • Content
  • Criterion: written examination evidence; the correlation of .22 between the Mega Code and the written exam provides evidence of both convergent and divergent validity (the two instruments are related but measure distinct abilities)
  • Construct: ratings were not particularly affected by sociodemographic characteristics of the student or instructor
Student Factors
What student factors contribute to Mega Codes going poorly?
Student Factors
• Poor preparation
  • Did not read the textbook
  • Did not practice at skill stations
• Not motivated
  • "Required" to complete the course
  • Don't feel it is part of their job
  • "I do this all the time, do I really have to show you?"
  • Other things on their mind
Student Factors (cont'd)
• Nervous
  • Don't know what to expect
  • Intimidated by the instructor
  • Unaccustomed to the "leader" role
  • Afraid of looking bad in front of a colleague or a professional they supervise
Instructor Factors
What instructor factors contribute to Mega Codes going poorly?
Instructor Factors
• Not providing enough practice opportunities
• Not giving effective feedback
• Incomplete instructions
• Intimidated by the learner
• Not providing an optimal setting
• Poorly planned scenarios
Practice and Setting
• Provide opportunities for structured practice before the Mega Code
• Students can help "teach" each other and give feedback
• Make skill stations available for off-shift staff to practice before the course
• Set up your Mega Code in a realistic setting, not on the conference room table
Demonstrations
• Demonstrate a Mega Code
• Interactive Mega Code (a group practice)
• Instructors demonstrate a Mega Code in "real time"
Expectations
Make them clear:
• The Mega Code is a performance evaluation, not a practice session.
• If students are unprepared, stop, give constructive feedback, encourage more practice, and offer another Mega Code opportunity.
Mega Code 2006
• You have a standardized set of instructions to read to the student(s) before the Mega Code starts.
• The Mega Code is scored.
Mega Code 2006 (cont'd)
• Skills from each chapter the student completes, within their scope of practice, must be tested in the scenario.
• All tested skills must be demonstrated; students cannot "talk through" skills.
Mega Code 2006 (cont'd)
• Students completing the medication lesson must draw up "epinephrine" correctly.
• The Mega Code occurs in "real" time.
• If a student fails the Mega Code twice, recommend additional practice and a return on another day.
Scoring the Mega Code
Each item in the Mega Code is scored:
• 0 points = not done
• 1 point = done inadequately or out of correct sequence
• 2 points = done well and in correct order
Scoring the Mega Code (cont'd)
• Minimum Mega Code = PPV and chest compressions
• Minimum 14 items (28 possible points); maximum 19 items (38 possible points)
• The number of possible points varies with the complexity of the scenario you give.
Scoring the Mega Code (cont'd)
Students must achieve 2 points on each of the following 5 items:
• Checks bag, mask, and oxygen supply
• Indicates need for PPV (apnea or HR < 100)
• Provides PPV correctly
• Indicates how to correct chest not moving if there is no improvement in HR
• Correctly provides chest compressions (HR < 60)
Students must also achieve 85% of the total possible points; a pass/fail sketch follows below.
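Putting the rules together, pass/fail can be expressed as a short algorithm. This is a hypothetical sketch of the logic described above, not official NRP scoring code; the item names come from the slide, while the function name and data layout are assumed for illustration:

```python
# Items every Mega Code must include at full credit (from the slide above).
CRITICAL_ITEMS = {
    "Checks bag, mask, and oxygen supply",
    "Indicates need for PPV (apnea or HR < 100)",
    "Provides PPV correctly",
    "Indicates how to correct chest not moving if no improvement in HR",
    "Correctly provides chest compressions (HR < 60)",
}

def passes_mega_code(scores):
    """scores: dict mapping each tested item to 0, 1, or 2 points.

    Untested (N/A) items are simply omitted, so the total possible
    points (2 per item, 14-19 items) vary with scenario complexity.
    """
    # Rule 1: each critical item must earn the full 2 points.
    if any(scores.get(item, 0) < 2 for item in CRITICAL_ITEMS):
        return False
    # Rule 2: at least 85% of the points possible for this scenario.
    possible = 2 * len(scores)
    return sum(scores.values()) >= 0.85 * possible
```

In whole points, 85% works out to at least 24 of 28 points for a minimum 14-item scenario and at least 33 of 38 for a full 19-item scenario (assuming the threshold is not rounded down).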