Learn how confidence-based assessment is used as a study tool in biomedical education at UCL. Explore the benefits of interactive simulations, introspection, and active learning techniques.
Confidence-based assessment
Tony Gardner-Medwin - Physiology, UCL
• context
• confidence assessment as a study tool
• confidence assessment in exams
More info: web site: www.ucl.ac.uk/~cusplap
INTROSPECTION AND ACTIVE LEARNING IN BIOMEDICAL STUDY
Tony Gardner-Medwin - In the CRUCIFORM
The problems:
• Fewer staff, more students, less small-group & practical teaching
• Rote learning: students focus on information, not understanding
• Poor introspection, concept manipulation, numeracy
Some ways computers can help:
• Confidence-based marking to develop introspection - LAPT
• Life & Times of guess-who - an illustrative QUIZ
• Interactive simulations to develop visual intuition - LABVIEW
• Thinking in parallel - TALK (cf. DISCOURSE - see separate demo)
TALK & PAGER
• PAGER - pops up messages onto students' screens on the network
• TALK - shows simultaneous student responses to the tutor/s (cf. DISCOURSE - commercial package)
• PAGE - any new version of a text file pops up on top of students' work
• WATCH - up to 80 text messages visible simultaneously within a few secs
• NETWORK - everyone sees all messages within a few secs
What is Knowledge?
Knowledge depends on degree of belief, or confidence. In order of increasing nescience:
• knowledge
• uncertainty
• ignorance
• misconception
• delusion
-log2(confidence*) for the truth of a true proposition: = 0 for knowledge, = 1 for ignorance, >> 1 for misconception or delusion.
Measurement of knowledge requires the eliciting of confidence (or *subjective probability) for the truth of correct statements. This requires a proper scheme of incentives.
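A compact statement of that measure, as I read the slide (the specific example probabilities below are illustrative assumptions, not from the original):

\[
\text{nescience} = -\log_2 p, \qquad p = \text{confidence (subjective probability) assigned to the true statement}
\]
\[
p = 1 \Rightarrow 0 \text{ bits (knowledge)}, \qquad
p = 0.5 \Rightarrow 1 \text{ bit (a pure guess on True/False)}, \qquad
p = 0.05 \Rightarrow \approx 4.3 \text{ bits (confident misconception)}.
\]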
LAPT confidence-based scoring scheme
[Figure: subjective expectation of score plotted against subjective probability (0.5-1) for confidence levels C=1, C=2, C=3]

Confidence level      1      2      3
Score if correct      1      2      3
Score if incorrect    0     -2     -6
P(correct)          <67%   >67%   >80%
Odds                <2:1   >2:1   >4:1
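A minimal sketch of why these thresholds make honest reporting optimal (function names are mine; the marks are those in the table above). The expected score at C=1, 2, 3 is p, 4p-2 and 9p-6, so the crossovers fall at p = 2/3 and p = 0.8:

# Expected score under the LAPT scheme: +1/+2/+3 if correct, 0/-2/-6 if wrong.
def expected_score(p, level):
    reward  = {1: 1, 2: 2, 3: 3}[level]
    penalty = {1: 0, 2: -2, 3: -6}[level]
    return p * reward + (1 - p) * penalty

def best_level(p):
    """Confidence level that maximises the expected score for belief p."""
    return max((1, 2, 3), key=lambda c: expected_score(p, c))

# Honest reporting pays: best_level(0.6) -> 1, best_level(0.75) -> 2, best_level(0.9) -> 3.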
• basic principle (above)
• evaluation (next)
"How useful was confidence assessment?" 50% 40% 30% 20% 10% 0% Very Useful Not useful No Reply Useful at all Evaluation study (with K. Issroff) 136 replies (/210) after 1st yr medical course
[Bar chart: "How useful were the explanations?" - response categories Very Useful, Useful, Not useful at all, No Reply; vertical axis 0-60%]
"I think about confidence assessment 50% 40% 30% 20% 10% 0% Every Time Most of the Rarely Never No reply time % "I sometimes change my answer while thinking about 30 confidence assessment" 25 20 15 10 5 0 Disagree 1 2 3 4 Agree 5
Discrimination performance - in-course & exam [331 medical students: 190 F, 141 M]
[Figure: % correct (50-100%, with 5% and 95% percentiles) for in-course (i-c), exam female (exF) and exam male (exM) answers at each confidence level C=1, C=2, C=3]
Principles that students seem readily to understand:
• both under- and over-confidence are impediments to learning
• confident errors are far worse than acknowledged ignorance and are a wake-up call (-6!) to pay attention to explanations
• expressing uncertainty when you are uncertain is a good thing
• thinking about the basis and reliability of answers can help tie bits of knowledge together (to form “understanding”)
• checking an answer and rereading the question are worthwhile
• sound confidence judgement is a valued intellectual skill in every context, and one they can improve
• student evaluation (above)
• analysis of exam data (next)
A problem with conventional scoring:
• many answers are based on partial and uncertain knowledge
• these contribute relatively little to the credit - but a lot to the variance
This is statistically inefficient.
Since we can identify the uncertain answers, we can assess the magnitude of this problem under exam conditions - 331 students, 500 True/False questions.
[Figure A: confidence-based score vs simple score, both scaled 0-100% (0% on the simple scale = 50% correct). Curves a-d: y = x^1.67; equality (only expected for a pure mix of certain knowledge and total guesses); scores if uncertainty is homogeneous and correctly reported; theoretical scores for homogeneous uncertainty, based on an information-theoretic measure]
Breakdown of credit and variance due to uncertainty
Simple scores (conventional scores scaled so that chance gives 0% and total knowledge 100%; equivalent to +1 for correct, -1 for incorrect, 0 for omission):
• 65% of the variance came from answers at C=1, but only 18% of the credit.
Confidence scores give less weight to uncertain answers; the uncertainty variance is then more nearly in proportion to the credit, and was reduced by 46% (relative to the variation of student marks).
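A toy simulation of this breakdown (entirely hypothetical parameters, not the exam data; the names are mine): under the simple +1/-1 scheme, low-confidence answers supply a small share of the credit but a large share of the chance variance.

import random, statistics

# Hypothetical illustration: under the simple +1/-1 scheme, low-confidence
# answers add little net credit but most of the chance variance.
def simulate_student(n_certain=350, n_uncertain=150,
                     p_certain=0.97, p_uncertain=0.6):
    certain   = sum(+1 if random.random() < p_certain   else -1 for _ in range(n_certain))
    uncertain = sum(+1 if random.random() < p_uncertain else -1 for _ in range(n_uncertain))
    return certain, uncertain

students = [simulate_student() for _ in range(2000)]
credit_share = sum(u for _, u in students) / sum(c + u for c, u in students)
var_share = (statistics.pvariance([u for _, u in students])
             / statistics.pvariance([c + u for c, u in students]))
print(f"uncertain answers: {credit_share:.0%} of credit, {var_share:.0%} of variance")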
Exam marks are determined by:
1. the student's knowledge and skills in the subject area
2. the level of difficulty of the questions
3. chance factors in the way questions relate to details of the student's knowledge
4. chance factors in the way uncertainties are resolved (luck)
(1) = "signal" (its measurement is the object of the exam)
(3, 4) = "noise" (random factors obscuring the "signal")
Confidence-based marks improve the "signal-to-noise ratio".
The most convincing test of this is to compare marks on one set of questions with marks for the same student on a different set. A good correlation means we are measuring something about the student, not just "noise".
The correlation, across students, between scores on one set of questions and another is higher for confidence than for simple scores. But perhaps they are just measuring ability to handle confidence? No. Confidence scores are better than simple scores at predicting even the conventional scores on a different set of questions. This can only be because they are a statistically more efficient measure of knowledge.
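A sketch of that comparison (the array layout and function name are my assumptions): split the questions into two random halves and correlate each student's totals across the halves, once with simple marks and once with confidence marks.

import numpy as np

def split_half_correlation(scores, seed=0):
    """scores: (n_students, n_questions) array of per-question marks.
    Correlate each student's total on one random half of the questions
    with their total on the other half."""
    rng = np.random.default_rng(seed)
    cols = rng.permutation(scores.shape[1])
    half_a = scores[:, cols[: len(cols) // 2]].sum(axis=1)
    half_b = scores[:, cols[len(cols) // 2 :]].sum(axis=1)
    return np.corrcoef(half_a, half_b)[0, 1]

# The claim above is that, for the same answers,
# split_half_correlation(confidence_marks) > split_half_correlation(simple_marks).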
How should one handle students with poor calibration?
• Significantly overconfident: 2 students (1%), e.g. 50% correct @ C=1, 59% @ C=2, 73% @ C=3
• Significantly underconfident: 41 students (14%), e.g. 83% correct @ C=1, 89% @ C=2, 99% @ C=3
Maybe one shouldn't penalise such students.
Adjusted confidence-based score: mark the set of answers at each C level as if they were entered at the C level that gives the highest score.
Mean benefit = 1.5% ± 2.1% (median 0.6%)
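A sketch of that adjustment (the data layout - a list of (stated level, correct?) pairs - and the helper names are my assumptions; the marks are the LAPT values given earlier):

REWARD  = {1: 1, 2: 2, 3: 3}
PENALTY = {1: 0, 2: -2, 3: -6}

def group_score(corrects, level):
    """Total for a group of answers if they had all been entered at `level`."""
    return sum(REWARD[level] if ok else PENALTY[level] for ok in corrects)

def adjusted_score(responses):
    """responses: list of (stated_level, correct) pairs. Each group of answers
    sharing a stated C level is re-marked at whichever level scores highest."""
    total = 0
    for stated in (1, 2, 3):
        group = [ok for level, ok in responses if level == stated]
        if group:
            total += max(group_score(group, c) for c in (1, 2, 3))
    return total

# e.g. an under-confident student whose C=1 answers are 99% correct has those
# answers re-marked as if they had been entered at C=3.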
[Figure A repeated: confidence-based score vs simple scaled score (0% = 50% correct, 100% = all correct), with curves a-d as above]
                                 simple   conf   conf (adj)
Signal / noise variance ratio:     2.8     5.3      4.3
Savings in no. of Qs required:      -      48%      35%
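These savings follow from the variance ratios, on the assumption that noise variance falls in inverse proportion to the number of questions, so the same reliability needs proportionately fewer questions:

\[
\text{saving} = 1 - \frac{(S/N)_{\text{simple}}}{(S/N)_{\text{conf}}}: \qquad
1 - \frac{2.8}{5.3} \approx 47\%, \qquad
1 - \frac{2.8}{4.3} \approx 35\%,
\]

matching the table to within rounding of the displayed ratios.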
SUMMARY CONCLUSIONS
• Adjusted confidence scores seem the best scores to use (they don't discriminate on the basis of the calibration of a person's confidence judgements, and are also the best predictors of performance on a separate set of questions).
• Reliable discrimination of student knowledge can be achieved with one third fewer questions, compared with conventional scoring.
• Confidence scoring is not only fundamentally more fair (rewarding students who can correctly identify which answers are uncertain) but it is more efficient at measuring performance.