Helping students find what they need: an intuitive ‘sliding scale’ design for formative CMA question sequences Ingrid Nix Lecturer in Teaching and Learning (FHSC) CETL COLMSCT Teaching Fellow (co-author of the LINA project with Ali Wyllie) EATING talk, 15 November 2007
Outline • Overview of LINA (Learning with Interactive Assessment) aims • Some existing CMA examples – the context • What is the LINA ‘sliding scale continuum’? • Selection of tentative early data analysis • Next steps
LINA aims – areas of interest • How do students want feedback displayed? • How can questions be made more motivating (e.g. using audio, animations)? • How can system information on the choices made be shared, and a tool to reflect be provided? • How can student choice be optimised (e.g. more effective use of study time)? • How can reflection and evidence-based practice be encouraged?
Today’s questions • Can using a continuum bring any benefits to students? • Can you see it being of relevance in your own subject area?
A typical journey from formative to summative using OpenMark • A K216 student is being assessed on ICT skills • They try out formative CMAs • https://students.open.ac.uk/openmark/k216.cmapractice2/ • When ready, they access summative CMAs • http://kestrel.open.ac.uk/om-tn/k216-07b.cma42/ • Examples of the standard OpenMark interface follow:
Current characteristics of ad hoc Qs • Surprise • Challenge • Unknown elements • Inability to gauge whether a question needs to be attempted
Inserting sequences • Can create links between questions to build narrative and aid students • Create impact and memorability • Scenario on which to hang learning experiences • Build pathway supporting learning acquisition
Standard OpenMark menu of options • If we’re going to use sequences, can we improve students’ knowledge or expectation of what lies behind a question?
What is the sliding scale continuum? • Description to students: • Scenario-based, linked questions • From left to right moves from ‘easier’ to ‘more difficult’ • Can travel along it in any direction, revisit, skip Qs. • Preparation for summative test
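As an illustration of the navigation the continuum allows, the sketch below models a sequence of linked questions ordered from ‘easier’ to ‘more difficult’ that a student can enter at any point, revisit or skip. The class names and structure are assumptions made for the sketch, not the LINA/OpenMark implementation.

```python
# Minimal sketch of a 'sliding scale' question sequence (illustrative only;
# names and structure are assumptions, not the LINA/OpenMark implementation).

from dataclasses import dataclass


@dataclass
class Question:
    number: int        # position on the continuum, 1 = easiest
    prompt: str
    attempted: bool = False


class SlidingScaleSequence:
    """Scenario-based questions ordered from 'easier' (left) to 'more difficult'
    (right). Students may move in any direction, revisit, or skip questions."""

    def __init__(self, questions):
        self.questions = sorted(questions, key=lambda q: q.number)

    def select(self, number):
        """Jump to any question on the continuum, in any order."""
        for q in self.questions:
            if q.number == number:
                q.attempted = True
                return q
        raise ValueError(f"No question {number} on this continuum")


# Example: a student starts in the middle, drops back, then skips ahead.
seq = SlidingScaleSequence([Question(n, f"Task {n}") for n in range(1, 7)])
for choice in (3, 1, 6):
    print(seq.select(choice).prompt)
```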
PPP (Presentation-Practice-Production) continuum approach: from theory to applying a skill (EFL language teaching)
Mapping PPP against other teaching approaches, e.g. Bloom’s revised taxonomy • Anderson and Krathwohl (2001) http://www.odu.edu/educ/llschult/blooms_taxonomy.htm
Demo of LINA project • http://learn.open.ac.uk/file.php/1811/lina_030907/LINAhomek113.htm • (from K113 course website)
Scenario introduced with choice of media • Audio/animation and transcripts • Ability to select based on preferred learning approach
Confidence indicator tool • Students set the tool before submitting an answer • They self-assess their confidence that the answer will be correct • Marks vary according to the degree of confidence and whether the answer is correct
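One way to picture how marks could vary with confidence is the sketch below; the mark values and confidence bands are illustrative assumptions, not the scheme actually used by the confidence indicator tool.

```python
# Illustrative confidence-weighted marking. The values below are assumed for
# the sketch; the actual LINA mark scheme is not specified in the talk.

def mark(correct: bool, confidence: str) -> int:
    """Return a mark that rewards well-calibrated confidence.

    A confident correct answer earns the most; a confident wrong answer
    is penalised more than a tentative one.
    """
    table = {
        ("high", True): 3, ("high", False): -1,
        ("medium", True): 2, ("medium", False): 0,
        ("low", True): 1, ("low", False): 0,
    }
    return table[(confidence, correct)]


print(mark(correct=True, confidence="high"))   # 3
print(mark(correct=False, confidence="high"))  # -1
```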
Learning log for reflections • Students can open the learning log at any point to refer to it • At the end of each question, after feedback, they are prompted to enter their current reflections
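A minimal sketch of how such a log might be held is given below, assuming simple timestamped entries for actions and typed reflections; the actual LINA data model is not described in the talk.

```python
# Hypothetical learning-log structure: timestamped entries for actions taken
# and for the reflections a student types in after feedback.

from datetime import datetime

log = []

def record(kind: str, detail: str) -> None:
    """Append an entry the student (and the project team) can review later."""
    log.append({"time": datetime.now().isoformat(timespec="seconds"),
                "kind": kind,          # e.g. "action" or "reflection"
                "detail": detail})

record("action", "Opened question 3")
record("reflection", "Got the cursor placement wrong; re-read the transcript")
for entry in log:
    print(entry)
```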
Preparing for summative • At the end of the formative sequence, question types encourage transferring knowledge into other contexts
Online feedback: additional information • At the end of the summative sequence, students are invited to give feedback on project areas of interest, e.g. use of media, the continuum, the confidence indicator tool, etc.
Feedback form responses recorded in Learning Log (for our analysis later)
The process of gathering data • LINA website made available online to K216 (and later K113) students currently using standard OpenMark iCMAs • Optional invitation to ‘have a go’ and give feedback online (5 responses) • If interested, invited to come and be videoed (IET userlab) (2 students responded from those above; 1 later cancelled) • To increase data collection, 2 colleagues (course managers) were invited to the IET userlab (no previous experience of iCMAs) • Total respondents providing data: 7 people
Feedback: quantitative & qualitative • 1. Learning Log: While working online the system records student pathway and actions; presents these in a log (for students to refer to (and for us to refer to afterwards) • Learning Log text input: students can type in additional text reflections during activity (saved in the log) • 2. Feedback form: After each person completes a topic online we ask their views in an attached online Feedback form (one per topic) • Responses are based on 7 people who completed one or both topics (1-2 repetitions of a topic each = 1 to 4 feedback opportunities)
Feedback: quantitative & qualitative • 3. IET Userlab: Collected data from 3 recorded sessions • Learning Log/report tracking pathway, selections made; + text input for reflections • Video of screen actions, + audio recording of commentary speaking aloud their actions; + additional reflections • Feedback forms completed after each topic • Audio interviews on completion of the 2 topics covering issues emerging from observations
Quantitative: feedback form analysis • The data collected comes from people who returned 1 Feedback form and people who returned 4 Feedback forms (sometimes identical responses, sometimes changed responses) • For the following analysis the last response to each question was taken as reflecting the respondent’s current view • The following results therefore show a current snapshot, not the change in respondents’ perceptions over time as they revisited the system
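The ‘last response wins’ step could be expressed as in the sketch below; the field names are hypothetical and the data shown is made up purely to illustrate the selection rule.

```python
# Sketch of keeping only the most recent answer per respondent and question
# when a respondent returned several Feedback forms (field names hypothetical).

responses = [
    {"respondent": "P1", "question": "Q1", "answer": "agree"},
    {"respondent": "P1", "question": "Q1", "answer": "strongly agree"},  # later form
    {"respondent": "P2", "question": "Q1", "answer": "neutral"},
]

latest = {}
for r in responses:                       # forms are processed in submission order
    latest[(r["respondent"], r["question"])] = r["answer"]

print(latest)   # only the last answer per respondent/question remains
```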
Various quotes • [11.45] Instead of ‘You want to insert..’ she says ‘I want to insert an extra row..’; reads the second sentence as it is on screen, including ‘do you need to place your cursor’. • [4.27] For me that’s just a good way of checking [SB:transc] • [7.54] Now [Recalls Qs from previous visit.] If I remember right, this one, when I did the original test I got wrong. • [17.55] Right, now I’ve only noted 7 days across the top and 3 events down the side. So making my column up I’m just going to look at the audio transcript because I’m more of a visual person rather than doing that in my head. So I can check it here.
Various quotes • [9.12] Because I never remember things I’ve heard I’m going to use the transcript as a crib sheet. • [7.26] So I’ll do L1 to get an idea what level everything’s at. • [15.10] I’m going to number 4 next because I still don’t feel confident that I know how the level of difficulty is increasing. What kind of scale it’s increasing at. • [15.53] Depending on how difficult 4 is, I might just go for 6. Or 5. • [17.10] OK I’m confident that I’ll be able to do 6 correctly now. … Because I think I know sort of how much difficult it’s getting all the time. • [21.00] I used the transcript to check answers, therefore high confidence.
Next steps • Analyse the userlab video data (6 hours’ work to transcribe per topic) • Transcribe and analyse the audio interviews • Explore other areas within the project: use of media, learning log, confidence indicator tool, feedback • Do we need to extend our data collection?
References • Anderson, L. W. & Krathwohl, D. R. (2001) A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. New York, Addison Wesley Longman.
Try out LINA • http://learn.open.ac.uk/file.php/1811/lina_030907/LINAhomek113.htm • Please complete both topics, plus the Feedback form at the end of each topic • Insert your name & faculty (to indicate you are not a student tester) • Many thanks, Ingrid and Ali • i.nix@open.ac.uk a.j.wyllie@open.ac.uk