The NESS study evaluated the 2009 specialty selection process for training against key criteria, including acceptability, fairness, effectiveness, and value for money. Data from multiple specialties and candidates were analyzed to determine the process's strengths and areas for improvement.
National Evaluation of Specialty Selection
Hywel Thomas and Celia Taylor
On behalf of the NESS team: Ian Davison, Steven Field, Harry Gee, Janet Grant, Andy Malins, Laura Pendleton and Elizabeth Spencer
NESS was commissioned and funded by the Policy Research Programme in the Department of Health (Award number 016 0114). The views expressed are not necessarily those of the Department.
Background
• The specialty selection process is one of the hurdles on the way to consultant/GP principal posts
• 2009: 11,417 applicants for 6,580 entry-level posts (competition ratio 1.7 to 1)
• Selection became increasingly politically sensitive following MTAS
• This highlighted the need for both evolution and evaluation of the process
Aims and scope of NESS
• To evaluate the first round of selection for specialty training in 2009 against four key criteria:
  • Acceptability
  • Fairness
  • Effectiveness: validity and reliability
  • Value for money
• 13 specialties included in the project
• Data collection primarily in 5 deaneries
• Complete data were not obtained for every specialty/deanery
Fairness: Effect of personal characteristics on selection scores
• Multiple linear regression analysis by specialty
• Scores standardised to allow comparability across specialties
• N = 5 specialties and 1,553 candidates
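The analysis described above (standardise scores within each specialty, then regress on personal characteristics) can be sketched as follows. This is a minimal illustration with simulated data, not the study's model: the characteristics (`age`, `female`) and all numbers are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical candidate-level data: a selection score plus personal characteristics.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "specialty": rng.choice(["A", "B", "C"], size=300),
    "score": rng.normal(60, 10, size=300),
    "age": rng.integers(24, 40, size=300),
    "female": rng.integers(0, 2, size=300),
})

# Standardise scores within specialty so coefficients are comparable across specialties.
df["z_score"] = df.groupby("specialty")["score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)

# Ordinary least squares: z_score ~ intercept + age + female.
X = np.column_stack([np.ones(len(df)), df["age"], df["female"]])
beta, *_ = np.linalg.lstsq(X, df["z_score"].to_numpy(), rcond=None)
print(dict(zip(["intercept", "age", "female"], beta.round(3))))
```

A non-zero coefficient on a personal characteristic, after standardisation, would flag a potential fairness concern for that specialty's scores.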
Effectiveness: Predictive validity of shortlisting scores
• Pearson correlation coefficients: uncorrected, and corrected for restriction of range and for unreliability of shortlisting scores where possible
• N = 8 specialties, 13 selection processes and 2,411 candidates
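The two corrections mentioned can be sketched as below: Thorndike's Case 2 formula for direct restriction of range on the predictor, and disattenuation for predictor unreliability. The input values are illustrative only, not the study's data.

```python
import math

def correct_range_restriction(r, sd_unrestricted, sd_restricted):
    """Thorndike Case 2 correction for direct range restriction on the predictor."""
    u = sd_unrestricted / sd_restricted
    return (r * u) / math.sqrt(1 + r * r * (u * u - 1))

def correct_attenuation(r, reliability_x):
    """Disattenuate a validity coefficient for unreliability of the predictor."""
    return r / math.sqrt(reliability_x)

# Illustrative values: observed r = 0.30 among appointed candidates,
# applicant-pool SD 8.0 vs appointed-group SD 5.0, shortlisting reliability 0.75.
r_obs = 0.30
r_rr = correct_range_restriction(r_obs, 8.0, 5.0)   # ~0.45
r_full = correct_attenuation(r_rr, 0.75)            # ~0.52
print(round(r_rr, 3), round(r_full, 3))
```

Both corrections raise the observed coefficient, which is why the slide reports corrected figures "where possible": the corrections require the applicant-pool SD and a reliability estimate, which were not available for every selection process.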
Effectiveness: Reliability
• Internal consistency: Cronbach's alpha by station
  • N = 10 specialties, 26 selection processes and 3,505 candidates
  • Range 0.35 to 0.83
  • 10/26 (38%) in the recommended range of 0.7 to 0.9
• Inter-rater reliability: station-level absolute intra-class correlations
  • N = 4 specialties, 4 selection processes and 395 candidates
  • Range 0.54 to 0.91
  • 16/17 (94%) above the recommended minimum of 0.7
• Pass-mark reliability (ignores sub-rules at station level and includes only candidates attending interview)
  • N = 5 specialties, 7 selection processes and 919 candidates
  • 12% to 55% of candidates within 1 SEM of the appointment cut-off: raises concerns about fairness
  • 0% to 20% of candidates within 1 SEM of the competency cut-off: raises concerns about competency
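The reliability quantities above can be illustrated in a short sketch: Cronbach's alpha from a candidates-by-stations score matrix, the SEM derived from it, and the proportion of candidates falling within 1 SEM of a cut-off. The data and cut-off are simulated for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_candidates x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
# Hypothetical station scores: a shared ability component plus station-specific noise.
ability = rng.normal(0, 1, size=(200, 1))
scores = 50 + 10 * ability + rng.normal(0, 10, size=(200, 4))

alpha = cronbach_alpha(scores)
totals = scores.sum(axis=1)
sem = totals.std(ddof=1) * np.sqrt(1 - alpha)  # standard error of measurement

cutoff = np.percentile(totals, 60)  # illustrative appointment cut-off
near_cutoff = np.mean(np.abs(totals - cutoff) <= sem)
print(round(alpha, 2), round(sem, 1), f"{near_cutoff:.0%} within 1 SEM of cut-off")
```

Candidates within 1 SEM of the cut-off could plausibly fall on the other side of it on retesting, which is why a high proportion near the appointment cut-off raises the fairness concern noted above.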
Value for Money
• Costing model developed (http://www.education.bham.ac.uk/research/projects1/dissemination.shtml)
• A modified Brogden's model was used to estimate cost-benefit, which depends on:
  • Selection process design
  • Predictive validity
  • Competition ratio
  • SD of training performance of candidates
  • Length of training
  • Drop-out rate
  • Number requiring extensions to training
  • Proportion of unsuccessful candidates remaining in the NHS
• Cost estimates for ST1 selection: £3.2m for hospital specialties (£800 per post) and £2.4m for GP (£900 per post)
• Cost-benefit estimates, compared with random selection, ranged from £78-97m for hospital specialties and £15-20m for GP
Summary and implications for selection
• The largest study of specialty selection to date
• Complete data were not obtained, but there was no evidence of response bias
• High acceptability of the selection processes among candidates and assessors
• Shortlisting scores are a good predictor of selection scores
• Long-term follow-up on predictive validity is required, particularly to assess fairness: if scores are predictive, then UK-trained candidates should make better trainees, but this needs evidence
• Inter-rater reliability was good, but was there potential collusion between assessors?
• Internal consistency, and hence pass-mark reliability, could be improved: perhaps more stations, each with one assessor?
• Only one specialty had a formal standard-setting process to identify the competency cut-off
• Value for money could only be estimated, but the estimates suggest high returns to investment in selection
• Selection has continued to evolve since 2009, e.g. the increase in nationally-coordinated selection processes