From student to lecturer: 20 years of research on assessment as a timeline Lin Norton & Bill Norton Liverpool Hope University
Acknowledgments • Research studies funded by: • Liverpool Hope • Assessment Plus (FDTL4 consortium project) • Write Now (HEFCE-funded CETL)
The power of assessment… ‘Improving student learning implies improving the assessment system. Teachers often assume that it is their teaching that directs student learning. In practice, assessment directs student learning, because it is the assessment system that defines what is worth learning.’ (Havnes, 2004, p. 1) ‘Authentic assessment can be defined as assessment that is pedagogically appropriate: it frames students’ views of HE, it has a major influence on their learning and it directs their attention to what is important’ (Boud & Falchikov, 2007)
...and the problems Assessment and feedback are seen across the UK sector as problematic: • View that assessment in HE manifests many poor practices (Boud, 1995; Rust, 2007) • Need for more consistency in assessment practice, effective and open use of assessment criteria and better use of feedback to promote learning (QAA, 2006; 2008) • Students are least satisfied with assessment and feedback (NSS, 2005-2011) • ‘Students can, with difficulty, escape from the effects of poor teaching, they cannot, by definition if they want to graduate, escape the effects of poor assessment’ (Boud, 1995).
How the typical UK university student has changed Times Higher Education Supplement, 21 April 2006
Pressures on universities • Highly competitive global market (Bologna process; Leitch, 2006) • Widening participation (Dearing report, 1997) • Emphasis on employability (HEFCE, 2010) • Shrinking resources (Browne, 2010) • League tables and student satisfaction measures, e.g. the National Student Survey (NSS) • Teaching Quality Information (TQI), which gives students an informed choice
Why a timeline? • We will argue that assessment is a phenomenon continually influenced by external movements (e.g. AfL, NSS, QAA) which will go on developing. • We will show how the course of research isn’t always driven by external factors • We will suggest that the perspective of the key stakeholder (tutor or student) is an important focus and show how our own research has taken us from student to tutor (and back again?) - two worlds • But, in constructing our presentation, doubts set in about the usefulness of a timeline…
Reflecting on our assessment research • Looking back at our past studies, with hindsight and with knowledge of our later research, has enriched our interpretation but means we can group our results in many different ways… • This presentation will be largely thematic (and as chronological as possible) • The timeline concept is to indicate some forward progression and building on our growing understandings, but with the realisation that the context is always changing and there can be no end point. Unlike the sat nav, in researching assessment ‘We have NOT reached our destination’
How it all began… ‘Mismatch’ between students and staff (Norton, 1990)
Two worlds? The students’ perspective The lecturers’ view
Theme 1. How do we help students write better essays? • Students don’t seem to realise what lecturers are looking for when marking their essays • Is it because we aren’t making our assessment criteria clear enough? • Notion of core assessment criteria: • ‘Core criteria are those that appear very commonly in assessment criteria across disciplines and institutions, and that appear to have a central role in the shared perception of what is important in a good student essay’ (Elander et al., 2004) • They are: addressing the question; structuring the answer; demonstrating understanding; developing argument; using evidence; evaluating sources; use of language
Students’ understandings of the importance of assessment criteria ‘In the first year I didn’t really utilise the assessment criteria and therefore I didn’t know what the lecturers were looking for, it seems silly now not to look at the assessment criteria but I guess in the first year I just took it for granted that I would be told everything without having to actually do any independent thinking’ (‘David’, 3rd year)
Students’ views about how staff use assessment criteria “…I find that the lecturers and the way that we are taught and explained to us about assessment it’s sketchy and all over the place for the undergraduates, this is what I am finding at the moment…” (‘Lana’, M.Sc & undergraduate conversion diploma simultaneously, Institution A)
Research interventions • Essay Feedback Checklist (Norton & Norton, 2001; Norton et al., 2002) • Workshops (Harrington et al., 2006; Norton et al., 2005) • Book: Writing essays at university. A guide for students by students (Norton & Pitt et al., 2009) • Research findings: students wanted MORE guidance, MORE practice, MORE feedback
Do lecturers have shared understandings of core criteria? Q. What do you understand by ‘critical evaluation’? ‘at a “primitive” level: “showing some emotion for what you’re doing”, “may even mean taking sides when you have very little evidence to support it”, display of personal and emotional involvement in what’s being studied, attempt to give other sides of argument’ (#7) ‘…if research carried out appropriately’ (#2) ‘Tough to define… Perhaps a misnomer; evaluation includes potential for criticism, which “entails thinking about theory in relation to both evidence and other theories” (e.g., does theory stand up in light of empirical evidence, does another theory do a better job of explaining the evidence?)’ (#11)
Do lecturers attach same weightings to the core criteria when marking? • Q. Are these criteria equally important? • ‘…using appropriate language, I don’t think is so important, I’m quite happy if they can criticize and evaluate. That said, it’s unusual to find someone who has the ability to critically evaluate but can’t write properly.’ [#1] • ‘Addressing the question obviously. Use of evidence is really important in Psychology and that demonstrates their understanding. I’m less worried about structure in exam work because they’re under pressure. Developing an argument and critically evaluating are done very poorly by students.’ [#4] • ‘I think they’re all as important.’ [#6]
Making assessment criteria transparent: What are the advantages for students? • Students do not have the same understandings as their lecturers, but make active efforts to improve their essay writing (Higgins et al., 2002) • Helps make criteria more explicit and understandable to students (Elander & Hardman, 2002; O’Donovan, Price & Rust, 2001; Price, Rust & O’Donovan, 2003) • Makes the demands of the task clear • Enables meaningful feedback & opportunity to improve (Nicol & Macfarlane-Dick, 2006) • Meets Quality Assurance Agency (QAA) principles of equity, fairness and accountability
Making assessment criteria transparent: What are the advantages for lecturers? • Helps improve inter-rater reliability of marks (Newstead & Dennis, 1994). • Mitigates order and practice effects, fatigue, & personal bias in marking. • Sounder overall judgment than intuitive ‘mental model’ (Elander, 2002) • Helpful for novice assessors. • Useful in defending judgments (double marking; external examiners).
And what are the disadvantages? • Paradoxical effect of encouraging a ‘mechanistic’ (Marton & Saljo, 1997) rather than independent & meaningful approach to learning (Norton, 2004). • The strategic student uses assessment criteria in an almost formulaic way to get the best mark possible, like Miller & Parlett’s (1974) cue-seeker • Lecturers often cannot agree on meanings and values attached to criteria • Defining criteria in relation to marks results in vague statements like ‘excellent’, ‘good’, ‘adequate’, ‘poor’ • Lecturers have mental models of marking which are resistant to new guidelines on applying assessment criteria (Wolf, 1995) • Transparency itself is a contested notion (Orr, 2004)
Theme 2. Pressures and perceptions of ‘fairness’ • Student as ‘customer’: strategic and marks-orientated and heavily dependent on lecturers • Assessment can be perceived as unfair (Brunas-Wagstaff & Norton, 1998) • Assessment often not authentic (Gulikers et al., 2004) • Our research shows some of the effects of inauthentic assessment: ‘rules of the game’, plagiarism and cheating all ‘commonly’ used (Norton et al., 1996a, b; Norton et al., 2001)
Theme 3. Bringing the two worlds together • If assessment is perceived by students as unfair or inauthentic, how are ‘new’ lecturers introduced to what is currently accepted as desirable assessment practice? • Widely held view that assessment should be for rather than just of learning (Black, 2006) • but… is this what lecturers, particularly ‘new’ lecturers on a university teaching programme, think and, if so, are they able to put their beliefs about assessment design into practice? • and… what part do these university teaching programmes have to play?
Interview study with 10 lecturers on PGCert in L&T • Q. Do you feel you have the freedom to change your assessment techniques easily? • Six said No, four said Yes (but with reservations). • Of those who said No: • ‘Not really because it is set in stone in the module proposal. You have to jump through many hoops if you are going to change the assessment techniques.’ (A) • ‘No! I get the impression that they are set in stone…. I think that hurdles of going to various panels to have your module changed puts people off…I get the impression from talking to colleagues that the process is long-winded and bureaucratic.’ (C)
Conclusions • Our findings suggest that what new academics learn on PGCert courses about assessment may be constrained when they attempt to put their new-found pedagogical knowledge into practice (Norton et al., 2010) • This may be because of a complex interaction of institutional, departmental and individual factors (Becher & Trowler, 2001; Fanghanel, 2007). • This led to our next studies looking at desirable assessment practice and possible constraints (questionnaire with 586 ‘new’ lecturers and interviews with 30 ‘experienced’ lecturers; Norton et al., 2011; Shannon et al., 2009).
Theme 4. Assessment design: the lecturers’ perspective Main research questions: • What do new lecturers (recently qualified or qualifying) think about current assessment design practices as a result of having undergone a university teaching programme? • Would they think they were able to put into practice ‘desirable’ assessment features, or would they feel there were constraints that would make this difficult?
The Assessment Design Inventory Desirable practice (N=586) • I design my assessments to help students take responsibility for their own learning progress. 86% • In my practice I emphasise assessment for learning rather than assessment of learning. 75% • Involving students in the assessment design would encourage them to engage in the assessment task. 73% Constraints (N=586) • Changes to my assessment design are sometimes hindered by external factors (e.g. cost, high student numbers, time). 75% • There is little incentive for lecturers to innovate in their assessment practice. 61% • It is possible for students to ‘go through the motions’ to satisfy assessment requirements without learning anything. 57%
Assessment design: further findings • Desirable assessment design practice was more likely to be reported by lecturers who had obtained their HE teaching qualification, had over 8 years’ teaching experience, were female, taught in soft applied disciplines and worked in polytechnic or modern universities. • Potential constraints to desirable assessment design practice were more likely to be reported by lecturers who had less than 8 years’ teaching experience, were male, taught in hard pure disciplines and worked in traditional universities.
Experienced lecturers’ learning and teaching orientations • From our interview study we have found two ‘orientations’ which we have tentatively identified as: • Professional: emphasis on skills and knowledge (N=6): ‘Learning is acquiring knowledge and hopefully an understanding of the context into which that knowledge can be put - so it isn’t purely an acquisition of knowledge process, you’ve got to be able to use that knowledge for it to be learning’ • Developmental: emphasis on skills, knowledge AND personal development (N=11): ‘An acquisition of knowledge, skills, techniques, an evolution, a maturing, a partnership, a process.’ (Shannon et al., 2009)
Do lecturers’ orientations affect how they view assessment design? • Lecturers with a professional orientation defined assessment in terms of criteria, learning objectives and learning outcomes: • Assessment characterised as both a standardised and standardising event • The focus is on ensuring parity: all students awarded a given grade should meet certain objective standards • Lecturers with a developmental orientation take a more student-centred approach by focussing on the student’s perspective of the assessment regime: • Assessment is seen as a range of tasks to suit a range of different learners
Do lecturers’ orientations affect how they view feedback? • Lecturers with a professional orientation saw feedback primarily as a post-assessment activity, a means of propelling the student to better grades in the next assignment, and a process in which students are recipients of, rather than active participants in, the feedback cycle: ‘When a student has read a bit of feedback they should be able to work out from that feedback what they need to do differently, what they need to do to improve…’ • Lecturers with a developmental orientation conceptualised feedback less as a post-assessment activity and more in terms of relationship building, a continual activity: ‘…there needs to be a chance for students to actually feedback on the feedback, to come back and say actually I didn’t agree with that or what did you mean by that but, again, that’s part of the whole process’.
Theme 5. Disciplinary views on assessment methods, marking and feedback Assessment, Marking & Feedback Inventory (AMFI) study with 45 lecturers (Hope): • Some indications that hard applied lecturers were ‘more traditional’ in their approach to marking and feedback: they were less keen to spend time on it and used fewer assessment methods than lecturers from the other two disciplines • This lends some support to our earlier finding that hard applied lecturers were more likely to be constrained in their assessment design (Norton et al., 2011)
From discipline to revisiting orientation • A new, revised and developed version of the AMFI, together with the ADI, piloted with 30 lecturers at Hope • Two statements devised to identify lecturers’ ‘orientations’: • Professional: assessment is primarily about upholding standards to feed into the professional/vocational role that students may fill. Feedback is mainly post-assessment and initiated to help students improve grades • Developmental: assessment is mainly about encouraging students’ personal growth and development. Feedback is ongoing and involves a two-way relationship involving dialogue over time
Preliminary AMFI findings Some small but significant differences: • Lecturers with a professional orientation were more likely to: • see QAA requirements as a restriction • Lecturers with a developmental orientation were more likely to agree that: • new assessment methods are needed • there is little incentive to innovate • students focusing on grades more than on learning is a failure of the educational system
More complex than a timeline? Is it time to go back to the students?