The use of a computerized automated feedback system Trevor Barker Dept. Computer Science
Contents • Feedback considerations • Approaches to feedback • Automated feedback • Previous research • Examples • Discussion
Chickering and Gamson’s Seven Principles: Good practice in higher education… • Encourages contact between students and lecturers • Develops reciprocity and cooperation among students • Encourages active learning • Gives prompt feedback • Emphasises time on task • Communicates high expectations • Respects diverse talents and ways of learning
Gives prompt feedback • Feedback must be prompt, but it must also be good, i.e. • Appropriate • Useful • Accurate • Individual • Fast • Facilitates feed-forward
Reasons for automated approaches to testing and learning • Vast investment in infrastructure • Availability of MLE (managed learning environment) systems such as UH Studynet • Changes in the nature of Higher Education • Online and distance education • Increase in student numbers and staff-student ratio (SSR) • Increasing pressures on time and cost
Previous research: Computer-Adaptive Tests • Based on Item Response Theory (IRT) • If a student answers a question correctly, the estimate of his/her ability is raised and a more difficult question is presented • If a student answers a question incorrectly, the estimate of his/her ability is lowered and an easier question follows
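The adaptive loop can be sketched as follows. This is a minimal illustration under simplifying assumptions: the fixed step-size ability update, the per-item difficulty values and the nearest-difficulty selection rule stand in for the full IRT item-parameter and ability estimation an operational CAT would use.

# Minimal sketch of CAT-style adaptive selection (illustrative assumptions only;
# a real CAT estimates ability from IRT item parameters, not a fixed step).
def next_question(questions, ability, asked):
    """Pick the unanswered question whose difficulty is closest to the ability estimate."""
    remaining = [q for q in questions if q["id"] not in asked]
    if not remaining:
        return None
    return min(remaining, key=lambda q: abs(q["difficulty"] - ability))

def run_adaptive_test(questions, answer_fn, num_items=5, step=0.5):
    ability, asked = 0.0, set()      # start from an average ability estimate
    for _ in range(num_items):
        q = next_question(questions, ability, asked)
        if q is None:
            break
        asked.add(q["id"])
        if answer_fn(q):
            ability += step          # correct: raise the estimate, a harder item follows
        else:
            ability -= step          # incorrect: lower the estimate, an easier item follows
    return ability

# Example: a small bank on an arbitrary -2..+2 difficulty scale, answered by a
# student who gets items at or below difficulty 0 correct.
bank = [{"id": i, "difficulty": d} for i, d in enumerate([-2.0, -1.0, 0.0, 1.0, 2.0])]
estimate = run_adaptive_test(bank, lambda q: q["difficulty"] <= 0)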
Previous research: Computer-Adaptive Testing • Computer-Based Tests (CBTs) mimic aspects of a paper-and-pencil test • Accuracy and speed of marking • A predefined set of questions is presented to all participants, so questions are not tailored to each individual student • Computer-Adaptive Tests (CATs) mimic aspects of an oral interview • Accuracy and speed of marking • Questions are dynamically selected according to student performance
Benefits of the adaptive approach • Questions that are too easy or too difficult are likely to • Be de-motivating • Provide little or no valuable information about student knowledge • The CAT level identifies a unique boundary between what the student knows and what he or she does not know
Providing individual feedback based on CAT • An application of the CAT approach is the provision of automated individual feedback • This approach has been in operation for several years at the University of Hertfordshire in two BSc Computer Science modules • Recently this model has been extended to make it easier to use on other modules
About the Feedback • Learners received feedback on: • Overall proficiency level • Performance in each topic • Recommended topics for revision • Cognitive level (Bloom) • Feedback on assessment performance was initially made available to learners via a web-based application
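A sketch of how such an individual feedback record might be assembled is given below. The field names, the 0-1 topic scores and the "weakest topics first" revision rule are assumptions made for illustration, not details of the system described here.

# Illustrative assembly of a per-student feedback record (field names,
# score scale and revision rule are assumed, not taken from the actual system).
def build_feedback(student_id, topic_scores, bloom_level, threshold=0.4):
    """topic_scores: dict mapping topic name -> proportion correct (0..1)."""
    overall = sum(topic_scores.values()) / len(topic_scores)
    revise = sorted((t for t, s in topic_scores.items() if s < threshold),
                    key=lambda t: topic_scores[t])      # weakest topics first
    return {
        "student": student_id,
        "overall_proficiency": round(overall, 2),
        "topic_performance": topic_scores,
        "recommended_revision": revise,
        "bloom_level": bloom_level,                     # cognitive level reached
    }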
Results: tutors’ opinions • Tutors consider that, in many cases, the fast feedback provided by a CAT is as good as or better than that currently provided • The link to Bloom’s levels was viewed positively • The approach was considered to be efficient, possibly freeing time for other activities • CAT was considered best as a formative tool, rather than for summative assessment • Some tutors were concerned that the approach was ‘impersonal’ • There is a need for a monitoring role for tutors, for practical and ethical reasons
Recent research • The CAT automated feedback system has been extended from objective testing to include written and practical tests • Testing and evaluation of the new system with approximately: • 350 first year BSc students (1 final practical test) • 120 second year BSc students (2 written and practical tests) • 80 final year BSc students (2 final practical tests) • 70 MSc students (2 written tests)
Added features • Markers are able to comment on the completeness of the hand-in: in this version, the hand-in information is presented to the marker, who may then make additional comments on the completeness or nature of the hand-in • Feedback is determined by the system from the mark awarded in each section of a question, read from the database file for the assignment (see the sketch below) • After all the question sections have been marked, the system presents a final summary screen so that the marker can check that the marks have been awarded accurately • The marker can add additional feedback at the end
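The mark-to-feedback lookup could work along the lines sketched below. The CSV layout, column names and mark bands are assumptions for illustration; the actual format of the assignment's database file is not described in the slides.

# Sketch of looking up banded feedback for a question section from a
# per-assignment database file (CSV layout and mark bands are assumed).
import csv

def load_feedback_bank(path):
    """Each row: question, section, min_mark, max_mark, feedback_text."""
    with open(path, newline="") as f:
        return [
            {
                "question": row["question"],
                "section": row["section"],
                "lo": float(row["min_mark"]),
                "hi": float(row["max_mark"]),
                "text": row["feedback_text"],
            }
            for row in csv.DictReader(f)
        ]

def feedback_for_mark(bank, question, section, mark):
    """Return the stored feedback whose mark band covers the mark awarded."""
    for entry in bank:
        if (entry["question"] == question and entry["section"] == section
                and entry["lo"] <= mark <= entry["hi"]):
            return entry["text"]
    return ""   # no banded text: the marker can add a free-text comment instead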
Results • Student attitudes to feedback were good irrespective of test score • Useful • Fair • Convenient • Quantity • Quality • Internal moderator happy with feedback • Suggestions from the moderator were included in the next prototype
Modifications • Easy to set up feedback database automatically • Tutors can modify and add to feedback for each question • Additions to feedback saved for re-use later
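One possible shape for the "saved for re-use" behaviour is sketched below; the JSON store, the file handling and the per-question keying are illustrative assumptions rather than the system's actual design.

# Sketch of persisting tutor-added feedback so it can be offered again later
# (JSON store and per-question keying are illustrative assumptions).
import json, os

def save_extra_feedback(path, question, comment):
    """Append a tutor comment to the reusable feedback store for a question."""
    store = {}
    if os.path.exists(path):
        with open(path) as f:
            store = json.load(f)
    store.setdefault(question, [])
    if comment not in store[question]:       # avoid saving duplicate comments
        store[question].append(comment)
    with open(path, "w") as f:
        json.dump(store, f, indent=2)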
In summary • Larger class sizes and greater use of online and distance assessment mean that feedback is often too slow and too general to be of any real use to learners • Personalised automated feedback is likely to become increasingly important in the future; it is currently being used in four modules at UH • Learners and tutors accept the need for automated feedback and most appreciate the benefits of such systems • The system is being further developed to make it simpler for general use