Assessment, Feedback and Evaluation Vinod Patel & John Morrissey
Learning outcomes By the end of this session you will be able to : • Define assessment, feedback and evaluation • Discuss how these are related and how they differ • Discuss the application of each in clinical education. • Begin to apply them in practice
Lesson Plan • Definitions • Assessment: theory & practice • Tea break • Feedback • Evaluation: theory & practice • Questions and close
Definitions? • Assessment? • Feedback? • Evaluation?
Assessment : definition “The processes and instruments applied to measure the learner’s achievements, normally after they have worked through a learning programme of one sort or another” Mohanna K et al (2004) Teaching Made Easy – a manual for health professionals
Feedback : definition “Specific information about the comparison between a trainee’s observed performance and a standard, given with the intent to improve the trainee’s performance” Van de Ridder JM et al (2008) Med Educ 42(2): 189
Evaluation : definition “A systematic approach to the collection, analysis and interpretation of information about any aspect of the conceptualisation, design, implementation and utility of educational programmes” Mohanna K et al (2004) Teaching Made Easy – a manual for health professionals
Part 1 Assessment
In this section: • Purposes of assessment • Miller’s pyramid • The utility function
Why assess ? 1 of 2 • To inform students of strengths and weaknesses. • To ensure adequate progress has been made before students move to the next level. • To provide certification of a standard of performance.
Why assess ? 2 of 2 • To indicate to students which parts of the curriculum are considered important. • To select for a course or career. • To motivate students in their studies. • To measure the effectiveness of teaching and to identify weaknesses in the curriculum.
Summative Formative
Clinical Education: Assessment Methods • Written Assessments • Observed clinical practice • Others: vivas, portfolios, …
How a skill is acquired • Cognitive phase • Fixative phase: practice, feedback • Autonomous phase Fitts P & Posner M (1967) Human Performance
Miller's pyramid, from apex to base: Does • Shows how • Knows how • Knows. Miller GE (1990) Acad Med (Suppl) 65: S63
Miller's pyramid mapped to assessment methods: Does: clinical work observed (ACAT, CbD, mini-CEX, OSLER); Shows how: OSCE; Knows how: short answer / reasoning; Knows: MCQ; written exams cover the two lower levels. Miller GE (1990) Acad Med (Suppl) 65: S63
Question: How can we tell whether these tests are any good? Answer: We do the maths.
Utility function • U = Utility • R = Reliability • V = Validity • E = Educational impact • A = Acceptability • C = Cost • w = weight • U = w_R·R × w_V·V × w_E·E × w_A·A × w_C·C Van der Vleuten CPM (1996) Advances in Health Sciences Education 1: 41-67.
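To make the weighted, multiplicative form concrete, here is a minimal numeric sketch in Python. The 0–1 scores, the equal weights and the function name `utility` are illustrative assumptions, not values taken from Van der Vleuten's paper.

```python
# Hedged sketch of the multiplicative utility model:
# U = (w_R*R) * (w_V*V) * (w_E*E) * (w_A*A) * (w_C*C).
# All scores and weights below are invented for illustration.

def utility(scores: dict, weights: dict) -> float:
    """Multiply each weighted attribute; a zero on any attribute gives zero utility."""
    u = 1.0
    for attr, score in scores.items():
        u *= weights[attr] * score
    return u

# Example: a hypothetical OSCE rated 0-1 on each attribute
# ("cost" is treated here as cost-efficiency, higher = cheaper to run).
scores = {"reliability": 0.8, "validity": 0.7, "educational_impact": 0.9,
          "acceptability": 0.6, "cost": 0.5}
weights = {"reliability": 1.0, "validity": 1.0, "educational_impact": 1.0,
           "acceptability": 1.0, "cost": 1.0}

print(f"Utility = {utility(scores, weights):.3f}")
```

The point of the multiplicative form is that a very low score on any one attribute drags overall utility towards zero, however strong the instrument is elsewhere.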
The Assessment Pentagram: Validity • Reliability • Acceptability • Feasibility • Educational impact
Validity & reliability • Validity : the extent to which the competence that the test claims to measure is actually being measured. • Reliability : the extent to which a test yields reproducible results. Schuwirth & van der Vleuten (2006) How to design a useful test : the principles of assessment
Validity : another definition “The degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment.” Messick (1994) Educational Researcher 23 : 13
Some causes of low validity • Vague or misleading instructions to candidates. • Inappropriate or overcomplicated wording. • Too few test items. • Insufficient time. • Inappropriate content. • Items too easy or too difficult. McAleer (2005) Choosing Assessment Instruments
Some causes of low reliability • Inadequate sampling. • Lack of objectivity in scoring. • Environmental factors. • Processing errors. • Classification errors. • Generalisation errors. • Examiner bias. McAleer (2005) Choosing Assessment Instruments
Types of validity • Face • Predictive • Concurrent • Content • Construct
“The examination fairly and accurately assessed the candidates’ ability”
Problem : Appearances can be deceptive.
Types of reliability • Test-retest • Equivalent forms • Split-half • Interrater and intrarater
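To illustrate one of these in practice, here is a minimal sketch of a split-half reliability estimate with the Spearman–Brown correction. The candidate-by-item score matrix and the function name `split_half_reliability` are invented for illustration; they are not from McAleer or the session materials.

```python
# Hedged sketch: split-half reliability of a test from item-level marks.
# Split the items into two halves, correlate half-scores across candidates,
# then apply the Spearman-Brown correction: r_full = 2r / (1 + r).
import numpy as np

def split_half_reliability(item_scores: np.ndarray) -> float:
    """item_scores: candidates x items matrix of marks."""
    odd = item_scores[:, 0::2].sum(axis=1)    # score on odd-numbered items
    even = item_scores[:, 1::2].sum(axis=1)   # score on even-numbered items
    r = np.corrcoef(odd, even)[0, 1]          # correlation between the two halves
    return 2 * r / (1 + r)                    # Spearman-Brown: reliability of full test

# Example: 5 candidates, 6 items, each marked 0 or 1 (invented data).
scores = np.array([
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1],
])
print(f"Split-half reliability ≈ {split_half_reliability(scores):.2f}")
```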
The Assessment Pentagram: Validity • Reliability • Acceptability • Feasibility • Educational impact
Utility function • U = Utility • R = Reliability • V = Validity • E = Educational impact • A = Acceptability • C = Cost • w = weight • U = w_R·R × w_V·V × w_E·E × w_A·A × w_C·C Van der Vleuten CPM (1996) Advances in Health Sciences Education 1: 41-67.
Miller's pyramid, from apex to base: Does • Shows how • Knows how • Knows. Miller GE (1990) Acad Med (Suppl) 65: S63
Miller's pyramid mapped to assessment methods: Does: clinical work observed (ACAT, CbD, mini-CEX, OSLER); Shows how: OSCE; Knows how: short answer / reasoning; Knows: MCQ; written exams cover the two lower levels. Miller GE (1990) Acad Med (Suppl) 65: S63
FY Workplace Assessment • Mini-CEX (from USA): mini Clinical Evaluation Exercise • DOPS (developed by RCP): Direct Observation of Procedural Skills • CBD (based on GMC performance procedures): Case-based Discussion • MSF (from industry): Multi-Source Feedback Carr (2006) Postgrad Med J 82: 576
Educational interventions • How will we assess? • How will we give feedback? • How will we evaluate?
Educational interventions • Communication skills for cancer specialists • 2nd year medical speciality training • Medical humanities SSM for medical students • Masters-level pharmacology module • Procedural skills for medical students • Clinical Officers: ETATMBA
The ideal assessment instrument : • Totally valid. • Perfectly reliable. • Entirely feasible. • Wholly acceptable. • Huge educational impact.
Part 2 Feedback
Feedback : definition “Specific information about the comparison between a trainee’s observed performance and a standard, given with the intent to improve the trainee’s performance” Van de Ridder JM et al (2008) Med Educ 42(2): 189