The Jimmy A Young Memorial Lecture. July 17, 2014, 7:00 to 8:30 AM, Marco Island, FL. Jimmy Albert Young, MS, RRT, 1935–1975. The NBRC has honored Jimmy's memory and the contributions he made to respiratory care through this program since 1978.
Jimmy Albert Young, MS, RRT was one of the profession's most outstanding and dedicated leaders • 1935 – born in South Carolina • 1960–1966 – served as Chief Inhalation Therapist at the Peter Bent Brigham Hospital in Boston • 1965 – earned the RRT credential, Registry #263 • 1966–1970 – served in several roles, including director of the education program at Northeastern University in Boston • 1970 – became director of the Respiratory Therapy Department at Massachusetts General Hospital • 1973 – became the 22nd President of the American Association for Respiratory Care • 1975 – was serving as an NBRC Trustee and member of the Executive Committee when he passed away unexpectedly. In a 15-year career, he achieved the RRT, directed an education program, directed a hospital department, served as AARC President, and served as an NBRC Trustee.
The Clinical Simulation Examination Then and Now
Presenter • Robert C Shaw Jr PhD RRT FAARC • NBRC Assistant Executive Director and Psychometrician
Conflict of Interest I have no real or perceived conflicts of interest that relate to this presentation. Any use of brand names is not meant to endorse a specific product, but merely to illustrate a point of emphasis.
Learning Objectives • Compare elements of the current RRT credentialing system to elements of the system that is planned for January 2015 • Compare the value of information that has been provided by results from the Clinical Simulation Examination to other elements of the RRT credentialing system • Describe features of the 20-problem Clinical Simulation Examination for which candidates should be prepared by January 2015
Compare elements of the current RRT credentialing system to elements of the system that is planned for January 2015 Objective 1
Compare the value of information that has been provided by results from the Clinical Simulation Examination to other elements of the RRT credentialing system Objective 2
Is there a measurement reason for the Clinical Simulation Examination to exist? Question
Scores from the Clinical Simulation Examination added information beyond the information from multiple-choice examination scores when predicting membership in three groups for candidates who sought the RRT credential. Research hypothesis 1
Defining the Population • Date range for examination attempts • October 22, 2009 through February 27, 2012 • A subset of 9,081 candidates had achieved CRT and made a first attempt at the remaining examinations for RRT (and were not outlying cases) • Written Registry • Clinical Simulation • Information gathering (IG) • Decision making (DM)
Statistical Model and Method • Step-wise discriminant analysis with automatic variable selection • Predict group membership from multiple variables, each of which is continuously distributed • Dependent variable • certification, certification+1, and registration groups • Independent variables • First run included four sets of scores • CRT, Written Registry, Clin Sim IG, and Clin Sim DM • Second run included two scores • CRT and Written Registry
Standardizing Examination Scores • Raw score ranges • CRT = 0 to 140 • Written Registry = 0 to 100 • Clinical Simulation, varied by test form • IG = a variable minimum to a maximum in the range of 200–300 • DM = a variable minimum to a maximum in the range of 140–170 • Each raw score was converted to a z-score, where z = (x – mean) / S
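The conversion described above can be sketched in a few lines of Python. The raw scores below are hypothetical, used only to show the mechanics of z = (x – mean) / S:

```python
# Minimal sketch of z-score standardization as described on the slide.
# Raw scores are invented for illustration; real distributions come from
# NBRC candidate data, which are not public.
def z_scores(raw):
    """Convert raw scores to z = (x - mean) / S, using the sample SD for S."""
    n = len(raw)
    mean = sum(raw) / n
    s = (sum((x - mean) ** 2 for x in raw) / (n - 1)) ** 0.5  # sample SD
    return [(x - mean) / s for x in raw]

crt_raw = [95, 110, 120, 130, 105]  # hypothetical CRT raw scores (0-140 scale)
print([round(z, 2) for z in z_scores(crt_raw)])
```

After standardization, scores from tests with very different raw ranges (0–140 vs. 200–300) sit on a common scale, which is what allows them to be combined in one discriminant model.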
Results from Run 1 Predictions about memberships in the registration group were accurate for 92.4% of the cases
Discriminant Score Equation • Discriminant score = 1.026 (Clin Sim DM z-score) + 0.975 (Written Registry z-score) + 0.091 (CRT z-score) – 0.010 (Clin Sim IG z-score) – 0.689 • Clin Sim DM and Written Registry scores were nearly equal and the dominant contributors to predictions about group memberships
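Applied as code, the reported equation looks like this. Only the coefficients come from the analysis above; the z-values supplied in the example call are invented for illustration:

```python
# The discriminant function reported on the slide, applied to one
# hypothetical candidate's standardized scores (z-values are made up).
def discriminant_score(dm_z, wr_z, crt_z, ig_z):
    return (1.026 * dm_z        # Clin Sim DM: dominant contributor
            + 0.975 * wr_z      # Written Registry: nearly equal contributor
            + 0.091 * crt_z     # CRT: small contributor
            - 0.010 * ig_z      # Clin Sim IG: near-zero contributor
            - 0.689)            # constant

score = discriminant_score(dm_z=1.0, wr_z=0.8, crt_z=0.5, ig_z=0.2)
print(round(score, 3))
```

The near-zero IG coefficient is visible directly in the arithmetic: changing the IG z-score by a full standard deviation moves the discriminant score by only 0.010.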
Results from Run 2 Predictions about memberships in the registration group were accurate for 85.4% of the cases, versus 92.4% in Run 1
Conclusions • The research hypothesis was accepted • Scores from the Clinical Simulation Examination add information about RRT achievement beyond what is available from multiple-choice examination scores • If the Clinical Simulation Examination were removed from the system, there would be a 7% loss of accurate RRT classifications • Incompetent candidates would become RRTs • Competent candidates would be denied the RRT
Although there were four sets of test scores, three tests, and two types of tests, RRT competencies were based on only one cognitive construct. Research hypothesis 2
Risks from Using Multiple Examinations with Different Characteristics [Diagram: the examinations arranged by type (multiple-choice vs. simulation) and level (entry vs. advanced) – CRT is entry-level multiple-choice; Written Registry is advanced multiple-choice; the Clinical Simulation IG and DM sections are advanced simulation]
Statistical Model and Method • Principal components analysis with cross-validation • Explore the underlying variance structure within four sets of test scores • CRT • Written Registry • Clinical Simulation • IG • DM • Is useful for confirming a hypothesis, in this case the assertion that there is a common characteristic expressed by the four test scores
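A rough sketch of the principal components logic, using simulated scores in place of the NBRC data (which are not public): if four score sets are all driven by one shared ability factor, the correlation matrix should yield a single eigenvalue above the 1.0 threshold.

```python
import numpy as np

# Sketch of a principal components check on four sets of test scores.
# The data are simulated: one shared "ability" factor plus noise stands in
# for CRT, Written Registry, Clin Sim IG, and Clin Sim DM scores.
rng = np.random.default_rng(0)
ability = rng.normal(size=500)                     # one latent factor
noise = rng.normal(scale=0.5, size=(500, 4))
scores = ability[:, None] + noise                  # 4 correlated score sets

corr = np.corrcoef(scores, rowvar=False)           # 4 x 4 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]           # eigenvalues, descending
print(np.round(eigvals, 2))

# Kaiser criterion: retain only components with eigenvalue > 1.0
n_components = int((eigvals > 1.0).sum())
print(n_components)
```

With one strong shared factor, the first eigenvalue absorbs most of the variance and the remaining three fall well below 1.0, mirroring the one-component result reported below.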
Preliminary Result 1 As an indicator of sampling adequacy -KMO should be at least .50 -Sig value should indicate statistical significance As indicators of positive cross-validation -KMO values should be about the same
Preliminary Result 2 As indicators of making a sufficient contribution to the principal component solution -Communality values should be at least .50, otherwise a variable should be removed As indicators of positive cross-validation -Values across each row should be similar
Primary Result The threshold for a consequential eigenvalue is 1.0; components at the scree-plot inflection point and beyond lack consequence
Conclusions -The research hypothesis was accepted There was only one principal component to which all four sets of test scores were linked -Potential risks associated with using a multiple-examination system were avoided
Summary from Both Studies • Within the population of new RRTs each year, accurate classifications occur more often because there are multiple examinations • Risks associated with a credentialing system based on multiple examinations were avoided
Study Limitations • These were population studies involving a recent period of more than 2 years • Unless characteristics of candidates or examinations change, I expect these results will generalize into the future • Candidates: program admission criteria, program duration, program intensity • Examinations: number of instruments, types of measurements
Describe features of the 20-problem Clinical Simulation Examination for which candidates should be prepared by January 2015 Objective 3
Rationale for Changing the Simulation Examination • Instant scoring demands that each new test form be assembled from problems that have not changed • After a decade, keeping examination content current became an increasing challenge
Solution • Give the examination committee smaller content elements from which test forms are assembled • Halve the number of sections in problems • Double the number of problems • Hold testing time the same at 4 hours
As long as other changes will be made . . . Enhance psychometric properties
Problems Each Candidate Will See • 4 about COPD • 4 about children • 4 about general medical / surgical • 3 about trauma • 3 about cardiovascular • 2 about neuro • Likely one neuromuscular and one neurologic
Advantages of a One-Score, One-Cut System • A test with more items and more points than its predecessors will yield more accurate scores as indicators of candidates' abilities • Pass and fail decisions become more accurate • Accuracy is gained without an increase in test administration time • The fee for the Clinical Simulation Examination stays the same
A Potential Disadvantage of a Combined Score • Compensation can occur unless the cut score policy is changed • Someone within a few points of passing based on decision-making performance could pass by acquiring a higher percentage of available information-gathering points
New Cut Score Policy The cut score for a test form must be the sum of MPLs (minimum passing levels) from the two types of sections, such that those section MPLs fall within the two ranges shown in the table. Implementation has mandated the addition of options labeled as required among positively-scored options in IG sections
Why the Cut Score Policy Change Matters Illustrations that follow came from one test form
[Scatter plot annotation: people in the indicated quadrant would pass under the current system]
Highlights for Students • The numbers of problems by patient type will be constant for each candidate • Testing time remains 4 hours • 22 problems will be presented • Results will be based on responses to 20 problems • As a result of a problem-splitting procedure, some problems will not offer IG sections • Candidates will see the same number of IG sections across the whole examination as they currently see
Highlights for Students (cont.) • Responses will be summed across IG and DM sections that a candidate enters to produce one score to which a cut score will be compared • The cut will equal the sum of MPL values across sections along the critical path • Compared to the current examination, responses in IG sections will be consequential • Reduced tolerance for errors
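The combined-score decision described in these highlights can be sketched as follows. All point values and MPLs below are hypothetical, not NBRC figures:

```python
# Sketch of the one-score, one-cut decision: sum the candidate's IG and DM
# section points, then compare with a single cut equal to the sum of section
# MPLs along the critical path. All numbers are invented for illustration.
def passes(ig_points, dm_points, ig_mpl, dm_mpl):
    candidate_total = ig_points + dm_points
    cut = ig_mpl + dm_mpl          # one cut from summed section MPLs
    return candidate_total >= cut

print(passes(ig_points=180, dm_points=120, ig_mpl=170, dm_mpl=125))  # passes
print(passes(ig_points=180, dm_points=110, ig_mpl=170, dm_mpl=125))  # fails
```

The sketch also makes the compensation concern concrete: in the first call, strong IG performance (180 vs. an MPL of 170) offsets DM performance below its section MPL (120 vs. 125), which is exactly why the cut score policy constrains the section MPLs that may be summed.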