Introductory Computer Programming, Problem Solving and Computer Assisted Assessment • Charlie Daly, DCU • John Waldron, TCD
Preview • A Problem • A Programmable Robot
A Problem • Solving a maze • Program a robot to solve a maze • Analyse a program that is supposed to solve a maze
But ... • Anybody can check if the program works • Create a maze and check if the robot can solve it • And provide feedback if it doesn't work: show what the robot did (see the sketch below)
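A minimal sketch of what such a check could look like, assuming a toy grid-maze format and a simple Robot API; the maze layout, method names and the right-hand-rule example submission are all illustrative assumptions, not the robot actually used in the course:

    # Hypothetical maze checker: simulate a student's robot program and report
    # whether it reaches the exit, recording the path for feedback.
    MAZE = ["#########",
            "#S      #",
            "# ### # #",
            "#   # # #",
            "### # ###",
            "#   #  E#",
            "#########"]

    DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]    # N, E, S, W

    class Robot:
        def __init__(self, maze):
            self.maze = maze
            self.r, self.c = next((r, row.index("S"))
                                  for r, row in enumerate(maze) if "S" in row)
            self.facing = 1                       # start facing east
            self.trace = [(self.r, self.c)]       # every cell visited, for feedback

        def wall_ahead(self):
            dr, dc = DIRS[self.facing]
            return self.maze[self.r + dr][self.c + dc] == "#"

        def turn_right(self):
            self.facing = (self.facing + 1) % 4

        def turn_left(self):
            self.facing = (self.facing - 1) % 4

        def step(self):
            if not self.wall_ahead():
                dr, dc = DIRS[self.facing]
                self.r, self.c = self.r + dr, self.c + dc
                self.trace.append((self.r, self.c))

        def at_exit(self):
            return self.maze[self.r][self.c] == "E"

    def student_solution(robot, max_steps=500):
        """Example submission: right-hand-rule wall follower."""
        for _ in range(max_steps):
            if robot.at_exit():
                return
            robot.turn_right()                    # keep a hand on the right wall
            while robot.wall_ahead():
                robot.turn_left()
            robot.step()

    def check(program, maze):
        """Anybody can run this: did the robot escape? If not, show what it did."""
        robot = Robot(maze)
        program(robot)
        if robot.at_exit():
            return "Solved in %d moves" % (len(robot.trace) - 1)
        return "Not solved; the robot's path was %s" % robot.trace

    print(check(student_solution, MAZE))

The point is that check() needs no insight into how the maze is solved; it only simulates the moves and reports the path, which is exactly the feedback a student needs when the program fails.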
So ... • Certain problems are very challenging ... • ... but they can be checked very simply • ... and it is easy to provide useful feedback on faulty programs • End of Preview • Mathematical proofs are similar: easy to check, difficult to conceive
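As a toy illustration of the same asymmetry (this example is mine, not from the talk): verifying a claimed factorisation is a single multiplication, while producing it requires a search.

    # Checking the proposed answer is trivial ...
    assert 89 * 97 == 8633

    # ... but conceiving it takes a search (trial division here).
    def smallest_factor(n):
        """Return the smallest non-trivial factor of n, or n itself if n is prime."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    print(smallest_factor(8633))   # 89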
Talk Outline • Programming Courses are not working. • Programming ability is difficult to assess • The Solution: Proper Assessment • Implementation issues (software and peopleware) • Results • Conclusions
Programming Courses are not working • Students in Introductory Programming Courses do not learn to program! • "An international multi-institutional study of introductory programming courses" (ITiCSE 2001): "Do students in introductory computing courses know how to program at the expected skill level?"
What is Wrong? Assessment • "The spirit and style of student assessment defines the de facto curriculum" ("Assessing Students", Derek Rowntree, 1977)
What is Wrong? • It is difficult to assess programming ability in a traditional written exam. • Programming exercises are subject to plagiarism, a serious problem in introductory programming courses. • If you do not assess something, the students will not learn it.
What's wrong with Exams? • Two sides of programming • language syntax (easy to examine) • problem solving (hard to examine) • Unoriginal (repeat) questions • Marks for attempting a question • Assuming insight where none exists • Objectivity: with a single exam, the lecturer doesn't want to fail the whole class
What's wrong with Assignments? • Most students don't see a problem with using somebody else's code ("if I understand it, it's OK") • Plagiarism is a huge problem • Lecturers frequently know it's happening and do nothing • Students don't understand that there's a difference between writing code and understanding code
The Bright Shining Lie • Lecturers think the students know how to program (they passed the exams) • Students think they know how to program (they passed the exams) • Students have an excuse: for students, education is about passing exams
Talk Outline • Programming Courses are not working. • Programming ability is difficult to assess • The Solution: Proper Assessment • Implementation issues (software and peopleware) • Results • Conclusions
The Solution: Proper assessment • Come up with original, challenging problems. • Mark properly; only give marks if the solution is completely correct. • Allow the student to get computer feedback (compiler errors, testing); programming is a process. • Make students aware of the assessment!
Context: the course • Introductory Programming Course • one semester: 12 weeks of lectures • Students • 400 (300 pure Computing) • no prior programming experience • education ≡ passing exams • Exams • programming exams in weeks 6 and 12 (weighting 30%) • written exam in week 16 (weighting 40%)
RoboProf • Automated program marker • WWW interface • Runs a student's program with different inputs and checks that the program produces the correct output (sketched below) • Student is shown the result and (if not correct) may modify and resubmit their program
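A rough sketch of this kind of black-box marking, assuming a command-line submission that reads from standard input; the file name, test cases and feedback wording are invented for illustration and are not RoboProf's actual interface:

    # Illustrative RoboProf-style marker: run the submitted program on several
    # inputs and compare its output with the expected output.
    import subprocess

    TEST_CASES = [                  # (stdin fed to the program, expected stdout)
        ("3 4\n", "7\n"),
        ("10 -2\n", "8\n"),
        ("0 0\n", "0\n"),
    ]

    def mark(command, tests, timeout=5):
        """Run `command` once per test case; return (tests passed, feedback)."""
        passed, feedback = 0, []
        for i, (stdin, expected) in enumerate(tests, 1):
            try:
                result = subprocess.run(command, input=stdin, capture_output=True,
                                        text=True, timeout=timeout)
                actual = result.stdout
            except subprocess.TimeoutExpired:
                actual = "<program did not finish in time>"
            if actual == expected:
                passed += 1
                feedback.append("Test %d: correct" % i)
            else:
                # Show what went wrong without revealing a model solution.
                feedback.append("Test %d: for input %r expected %r but got %r"
                                % (i, stdin, expected, actual))
        return passed, feedback

    score, notes = mark(["python3", "student_submission.py"], TEST_CASES)
    print("%d/%d tests passed" % (score, len(TEST_CASES)))
    print("\n".join(notes))

Because the checker compares only observable behaviour, it can report failing inputs and actual output without ever showing a correct program, which is the kind of feedback described on the later slides.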
The Programming Exam • 3 questions (of increasing difficulty) • 2 hours (first half hour without a computer) • Not open book, but may use a 'cheatsheet' • During the exam the students: • Write a program • Submit it to RoboProf • View feedback • May resubmit without penalty
The Programming Exam • Students know their result at the end of the exam. • Fewer complaints • Greater insight into their ability • More likely to listen when subsequently shown a correct solution • General effect: formative assessment works.
RoboProf vs Manual Exam • Standard CAA advantages • Objective • Fast feedback • Avoids the problem of the manual marker interpreting the student's solution • Models the standard program development process (computer feedback) • But: huge resource requirements (~400 PCs)
RoboProf vs Manual Exam • Results: not much difference • Markers only awarded marks for programs that definitely looked like they might work • Students knew that the marking system was similar to the programming exams
Student Impressions "I liked the web-based test"
Student Impressions "The marking system was fair"
Comparison with previous years • There were too many simultaneous changes to the course to draw firm conclusions (programming language, number of students, lecturers, facilities) • Results of last three years • 1999: 30% • 2000: 20% • 2001: ?
Conclusions • CAA can evaluate problem-solving ability • using appropriate problems • Programming problems have added advantages • Models the program development process • Can show errors without showing the solution • Avoids the inherent problems that humans have when assessing programming
Future Work • Improve Feedback • Make it adaptable (can be used by other institutions)