Simulation for Clinical: Does it Count?
Johnson County Simulation Conference 2010

Mary Meyer, RN, MSN, Assistant Professor, Director, Learning Lab
Helen Connors, RN, PhD, DrPS (Hon), FAAN, Associate Dean for Integrated Technologies, E. Jean M. Hill Endowed Professor, Executive Director, Center for Health Informatics
Thoughts about teaching nursing
• Apprenticeship model: does it still fit?
• Ever-widening dichotomy between the student's educational needs and the agency's needs (high-quality, safe care)
• Agencies are increasingly complex in technology and acuity
• Physically demanding for faculty
• Preceptors help, but are increasingly difficult to recruit
• The apprenticeship model is expensive
• Clinical placements are increasingly difficult to secure, especially in specialties
The Role of Simulation: A review of the literature
Systematic review by Cant & Simon, 2009
• Simulation-based learning in nursing education
• Quantitative studies, 1999–2009: 12 studies (experimental or quasi-experimental design)
• 100% report simulation is a valuable teaching strategy
• 50% report simulation groups outperformed controls
The Role of Simulation: A review of the literature (cont.)
Cant & Simon, 2009
Outcomes measured:
• Knowledge
• Critical thinking
• Satisfaction
• Confidence
What about actual clinical performance?
The Role of Simulation: A review of the literature (cont.)
Alinier, Hunt, Gordon, and Harwood (2006)
• Control group: traditional clinical
• Experimental group: simulation + clinical
• Pre-test/post-test design; clinical performance compared between groups using OSCE scores
• Both groups improved:
• Simulation: 14.18 points higher (95% CI 12.52–15.85)
• Control: 7.18 points higher (95% CI 5.33–9.05)
Effect of High-Fidelity Simulation on Nursing Students' Knowledge and Performance: A Pilot Study
Hicks, Coke & Li, June 2009
3 groups, N = 58 (a combination of 2 cohorts):
• Clinical (19)
• Simulation (19)
• Clinical + Simulation (20)
Measured knowledge, clinical performance, and confidence levels:
• Pre/post multiple-choice exams
• Standardized clinical testing (OSCE) performance
• Differences in confidence levels (Likert scale)
CLINICAL PERFORMANCE
Can clinical faculty tell a difference between students who have attended simulation and those who have not?
Research Questions
1. Are there differences in clinical performance between nursing students who have participated in a pediatric simulation experience prior to the clinical rotation and those who have not?
2. Is there a relationship between the timing of the simulation experience within the eight-week clinical rotation and student clinical performance?
3. Does simulation offer more opportunities for clinical judgment and inter-professional communication than a traditional setting?
Simulation framework
• KUSON simulation template developed from the Jeffries Model
• Simulation development: NPs from CMH (contract); collaboration with JCCC simulation faculty
• Patient cases: 2 per 6-hour day
• Simulation faculty: graduate teaching assistants with experience in pediatric nursing, teaching undergraduate nursing, and/or simulation
Methods
• Approval from the Human Subjects Committee
• Quasi-experimental design
• Sample: 120 nursing students enrolled in the pediatric clinical course
a. Spring Block 1 (8 weeks): 40 students
b. Spring Block 2 (8 weeks): 40 students
c. Summer (8 weeks): 40 students
• 4 students not included in evaluation data:
• 1 left the program prior to simulation assignment
• 3 failed to complete simulation during the assigned time due to absence
Demographics
Gender: Male 15 (12.5%), Female 105 (87.5%)
Work Experience • None • < 1 year • 1-3 years • > 3 years
Students assigned to 1 of 5 clinical groups (6 students each) per customary school protocol
• Every 2 weeks, 2 students were randomly* selected from each of the 5 clinical groups for simulation (2 at week 2, 2 at week 4, and so on)
• Each rotation: 10 students in the simulation lab, 30 students in clinical
• *1st-block assignments were made by convenience
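The rotation scheme above can be sketched in code. This is a hedged illustration only: the group and student names, the `random.seed`, and the assumption of draws at weeks 2, 4, and 6 are all made up for the sketch and are not part of the study protocol (which used convenience assignment for the first block).

```python
# Illustrative sketch: every 2 weeks, 2 students are drawn from each of the
# 5 clinical groups (6 students each) for the simulation lab, without
# replacement. Names and seed are hypothetical.
import random

random.seed(0)  # reproducible illustration only
groups = {g: [f"student_{g}_{i}" for i in range(6)] for g in range(1, 6)}

remaining = {g: list(members) for g, members in groups.items()}
sim_week = {}
for week in (2, 4, 6):                       # assumed biweekly draws
    picked = []
    for g in remaining:
        chosen = random.sample(remaining[g], 2)  # 2 per clinical group
        picked.extend(chosen)
        for s in chosen:
            remaining[g].remove(s)           # a student attends sim once
    sim_week[week] = picked                  # 10 in simulation, 30 in clinical

print({w: len(p) for w, p in sim_week.items()})  # {2: 10, 4: 10, 6: 10}
```

Each draw places 10 students in the simulation lab while the remaining 30 stay in clinical, matching the counts on the slide.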
The Cases
Child:
• Asthma
• Type I DM (teenager)
• Post-op appendectomy with PCA
• Sepsis
Infant:
• RSV
• Home visit #1: Failure to thrive
• Home visit #2: Cystic fibrosis
• Seizure
The Simulation Day:
• Pre-learning phase: asynchronous
• Cases assigned in the EMR
• Simulation time equal to clinical time: 4 six-hour days or 2 twelve-hour days
The Simulation Day (cont.)
• Case #1: 5 students with the child simulator
• 1st run (20 min)
• Debrief (30 min)
• 2nd run with a twist (20 min)
• Debrief (20 min)
• Case #2: 5 students with the infant simulator (same structure as above)
• Multi-professional teams for 2 cases
Evaluation Methods
Students were evaluated every 2 weeks by the faculty of record (clinical or simulation faculty). Criteria (Likert scale):
• Preparation (1×)
• Student–client communication (1×)
• Therapeutic skills (1×)
• Clinical judgment (2×)
• Inter-professional communication (2×)
• Documentation (1×)
Evaluations were kept confidential and were not shared with the students.
Statistical Analyses
Results were analyzed using repeated-measures analysis with a mixed model to evaluate the effect of simulation on students' overall clinical performance. The following covariates were considered in the mixed model and had no overall effect on performance:
• Prior work experience (P = 0.78)
• Gender (P = 0.45)
• Clinical site (P = 0.12)
• Clinical time (P = 0.06)
(The clinical faculty effect was confounded with clinical time and site.)
Statistical Analyses (cont.)
A compound-symmetry covariance structure with the SAS MIXED procedure was used to model the correlation between weeks within each subject, i.e., the intra-cluster correlation. Statistical significance was determined at the 5% level.
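The compound-symmetry structure described above can be made concrete with a small sketch. The variance components (between-subject 8.79, total 15.20) come from the ICC reported in the Results; the 4×4 size, corresponding to four biweekly evaluations, is an assumption for illustration only.

```python
# Minimal sketch of a compound-symmetry (CS) within-subject covariance
# matrix. Under CS, every pair of weeks within a subject shares the same
# covariance, so the intra-cluster correlation is constant across weeks.
import numpy as np

sigma2_between = 8.79            # between-subject variance (from Results slide)
sigma2_total = 15.20             # total variance (between + within)

n_weeks = 4                      # assumed: 4 biweekly evaluations per student
cov = np.full((n_weeks, n_weeks), sigma2_between)  # common off-diagonal covariance
np.fill_diagonal(cov, sigma2_total)                # common per-week variance

# The ICC is the off-diagonal covariance over the diagonal variance --
# the 8.79 / 15.20 = 0.578 reported in the Results.
icc = sigma2_between / sigma2_total
print(round(icc, 3))  # 0.578
```

The SAS equivalent would be `REPEATED / TYPE=CS SUBJECT=student` in PROC MIXED; the sketch just shows what that covariance assumption looks like.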
Comparison of Clinical Evaluations
Week 4 scores of the simulation group were contrasted with week 2 scores of the clinical groups. The simulation group scored 0.19 ± 0.88 (mean ± standard error; P = 0.83) higher than students who attended clinical in the first weeks. None of the contrasted differences was statistically significant at the 5% level.
Comparison of Clinical Evaluations (cont.)
Week 6 scores of the simulation group were compared with week 4 scores of the clinical groups. The simulation group scored 2.16 ± 0.89 (mean ± standard error; P = 0.017) higher than the week 4 score for the non-treatment group. Simulation at the beginning of the rotation significantly improved students' clinical performance at midterm.
Results
• Overall, students' biweekly performance was highly correlated (ICC: 8.79/15.20 = 0.578), indicating clinical performance was fairly consistent throughout a term.
• Nursing students who participated in a pediatric simulation experience prior to the clinical rotation had higher overall performance scores than those who did not (mean ± standard error 1.74 ± 0.75, P = 0.02).
Results (cont.)
Effects of simulation (analysis with the generalized cumulative logit model):
• Significant positive effect on therapeutic skills (P = 0.02)
• Marginal positive effects:
• Inter-professional communication (P = 0.052)
• Documentation (P = 0.05)
• Student–client communication (P = 0.06)
• No effect on:
• Preparation (P = 0.194)
• Clinical judgment (P = 0.360)
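For readers unfamiliar with the cumulative (proportional-odds) logit model used for the per-criterion ratings, the sketch below shows its mechanics. The thresholds and the simulation effect `beta` are made-up illustrative values, not the study's fitted coefficients.

```python
# Conceptual sketch of a cumulative logit model for an ordinal Likert rating:
# P(rating <= k) = logistic(threshold_k - beta * x), where x = 1 for the
# simulation group. All numeric values below are hypothetical.
import math

def cumulative_probs(thresholds, beta, x):
    """Return P(rating <= k) for each cut-point k, given covariate x."""
    return [1.0 / (1.0 + math.exp(-(t - beta * x))) for t in thresholds]

thresholds = [-1.0, 0.5, 2.0]   # hypothetical cut-points between 4 rating levels
beta = 0.5                      # hypothetical positive simulation effect

p_no_sim = cumulative_probs(thresholds, beta, 0)
p_sim = cumulative_probs(thresholds, beta, 1)

# A positive beta shifts probability mass toward higher ratings: every
# cumulative probability P(rating <= k) is smaller for the simulation group.
shifted = all(s < n for s, n in zip(p_sim, p_no_sim))
print(shifted)  # True
```

This is why a single P-value per criterion summarizes the simulation effect: the proportional-odds assumption applies one common shift `beta` across all rating thresholds.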
Results (cont.)
Timing of the simulation experience had no significant effect on students' overall performance (P = 0.244). Those who attended simulation first had a slightly higher mean score (26.12) than those who attended at week 4 (26.07), week 6 (24.71), or week 8 (25.69).
Does simulation offer more opportunities for clinical judgment and inter-professional communication than a traditional setting?
• Clinical judgment: Yes (P = 0.023; odds ratio = 1.66, 95% CI 1.08–2.56)
• Inter-professional communication: Yes (P = 0.0006; odds ratio = 1.87, 95% CI 1.22–2.87)
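The reported odds ratio and confidence interval for clinical judgment are internally consistent, which can be checked in a few lines. This assumes a standard Wald interval on the log-odds scale, i.e. CI = exp(b ± 1.96·SE); the standard error is recovered from the reported CI limits rather than taken from the study.

```python
# Sanity check of the reported clinical-judgment odds ratio, assuming a
# Wald interval: OR = exp(b), 95% CI = exp(b +/- 1.96 * SE).
import math

or_hat, lo, hi = 1.66, 1.08, 2.56   # values reported on the slide
b = math.log(or_hat)                 # log odds ratio
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width

ci = (math.exp(b - 1.96 * se), math.exp(b + 1.96 * se))
print(round(ci[0], 2), round(ci[1], 2))  # 1.08 2.56
```

Rebuilding the interval from the recovered SE reproduces (1.08, 2.56) to two decimals, so the point estimate sits where a Wald interval would place it.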
Limitations of the study
Evaluation tool:
• Created to fit the purpose of the study; simplicity for faculty use was a priority
• Similar to the current clinical evaluation tool, and subject to all of the limitations of clinical evaluation
• Inter-rater reliability was established during a faculty training session
• Reliabilities are acceptable and face validity is strong, but construct validity has yet to be established
The study was not double-blind.