
Making sense of assessment complexity

Dive into the complex world of assessment with Dr. Lydia Arnold as she shares challenges, offers practical advice, and identifies areas for action. Explore considerations in forming assessments including alignment, fairness, inclusivity, and more. Discover solutions for setting the right level, addressing tacit assumptions, enhancing student feedback use, and promoting academic integrity. Join the discussion on assessment practices and explore convergent solutions for bridging theory and practice effectively.


Presentation Transcript


1. Making sense of assessment complexity
Dr Lydia Arnold | @HarperEdDev | lydiaarnold.net | larnold@harper-adams.ac.uk

2. About me
Educational Developer, Head of eLearning and Principal Lecturer, Harper Adams University.
Key roles: curriculum, eLearning, PgC, internal assessment and feedback consultant, staff development, action research supervision.
EdD, PFHEA, NTF. Realist (or so I hope!)
Read more at lydiaarnold.net | Follow @HarperEdDev

3. This session:
• Share challenges around assessment
• Offer some convergent, pragmatic advice (a tour of theory and practice)
• Identify areas for action*
A ‘panoramic’ lecture

4. Considerations when forming assessments (all provide challenges):
• Learning outcomes alignment
• Fairness
• Level
• Reliability
• Inclusivity
• National and institutional requirements
• Opportunities for feedback
• Authenticity
• Academic integrity
• Validity
• Quality assurance factors
• Manageability
• Verification/accountability
• Cultural relevance
• Discipline traditions
• Variety
• Preparatory support
• Others?

5. Context and the need for pragmatism
• The staff view is underrepresented (Arnold, 2017; Evans, 2013; Norton, Floyd and Norton, 2019; Bearman et al., 2016)
• Context and agency matter (Arnold, 2017)
• Contention is growing around some of the beliefs that have been almost ubiquitously central to the Anglo assessment landscape (see Carson, 2019; Worthen, 2018)
• Others share the concerns we hold: see Brown (2014) on levelness, and Chan and Ho (2019) on what makes good assessment rubrics
• More research is STILL needed

  6. Focus on specific challenges

7. Level
“I never know if the assessments I am setting are at the right level. How do you tell?”
“I looked at the standards document and thought, my goodness, we are nowhere near the right level. Or is it just the way I am reading it?”
“I spoke to Mark and he said what I was doing is fine. I am so confused.”

8. Tacit assumptions
“First person is OK, and they really should discuss how they collated their data. I don’t just want results. And those who are critical of the literature will get the best grades.”
“I want my students to include at least ten references, and they should write in the third person … oh, and they must include a summary in their report.”
• Polanyi highlighted the need for articulated understanding back in 1958!
• O’Donovan, Price and Rust identified tacit knowledge as a problem in 2004
• It is still an issue today … see, for example, Carless and Boud (2018)

9. Student use of feedback
“They just don’t use it!”
“We as a department do spend a lot of time writing feedback. But what’s the point if they’re not going to use it? They moan that they don’t get it.”
“He’s not even bothered reading his feedback, and it’s probably the most important thing as lecturers we can give … So why aren’t they using that feedback? Why don’t they see it as being very valuable?”
Maybe consider:
• Legibility
• Volume
• Tone
• Lack of curriculum links
• Focus
• Alignment to criteria
• Accessibility
• Consistency
• Opportunity to clarify
• Timeliness
• What is seen as feedback (product or process)

  10. Academic integrity (plagiarism, essay mills etc.)

11. Universality (inclusivity, accessibility)
• Othering (BTEC/A-level; disability; culture; preference)
• Reasonable adjustment
• Morals and metrics
• Intersectionality and complexity
• Context-specific othering

  12. A few convergent [partial] solutions, perhaps?

  13. Level – tools

14. To make the theory work …
• Standardisation
• Teams for assessment
• Cross-team working
• Engage new staff
• Original source frameworks
• Iterative development
• Take a longitudinal view
• Analysis of grades and questions
• Explore surrounding levels
• Specific CPD
• Work with rubrics
• Get active with exemplars

15. A change in feedback paradigm
Winstone and Carless (2019):
• Timed to enable application to new tasks (it’s not just about turnaround times; consider timeliness)
• Actionable, implementable comments (not comment overload)
• Moving from an approach of delivering feedback to students seeking guidance (feedback as process and dialogue, not product)
• Interconnected within a programme of study
Reimann, Sadler and Sambell (2019):
• Interconnectivity needed
• Disconnect between theory and practice
• Mismatch between dialogic and traditional approaches

16. Exemplars
Examples of assessment used to facilitate the development of student performance.
Common staff concerns (quotes paraphrased from Pittson, 2018):
• I worry students will just copy
• Demotivating for weaker students
• Students will have their creativity stifled
• I’m unsure about what’s OK to share and what consent is needed
• I don’t have an example of ‘this’ assignment
• Don’t work for ‘science’ subjects

17. Benefits
Exemplars can:
• Clarify assessment requirements (Orsmond, Merry and Reiling, 2002; Kean, 2012)
• Surface tacit assumptions about assessment (Chong, 2018; Scoles, Huxham and McArthur, 2014)
• Increase students’ self-direction (Grainger, Heck and Carey, 2018)
• Prompt activities that generate feedback (Orsmond, Merry and Reiling, 2002)
• Bring about greater confidence in standards (Bamber, 2014)
• Pro-actively provide feedback in an efficient and timely manner (Scoles, Huxham and McArthur, 2014)
• Improve dialogue around assessment (Carless and Chan, 2017)
They also reduce anxiety, reduce housekeeping, help students understand assessment and get started, let students visualise different levels, and encourage and inspire.

18. Rubrics
Who uses a rubric? ‘Rubric’ has many meanings (“my kind of rubric”), so take care in discussions.
According to Popham (1997, cited in Dawson, 2017), rubrics must include:
• Evaluative criteria (the stuff that you measure)
• Quality definitions (what you need to do to achieve a certain outcome)
• A scoring strategy

  19. Is that the house rubric or would Sir like something more bespoke? (for a substantial review of rubrics see Dawson, 2017)

  20. Bringing rubrics to life

21. Authentic assessment
Providing assessment opportunities which are like tasks in the ‘real world’.

22. Examples
• Video commissioned project
• Wildlife surveys
• Client consultation role play
• Social media strategy for a local charity or start-up business
Case study:
• Rationale
• Real clients
• Manageability
• Tech help
• Pedagogy of uncertainty
• Criteria with headroom
• https://www.youtube.com/watch?v=fcFJ19LyMVI&list=PLRKWnZs4l1bDuAvAE0UwmzeOL6EdZ06l8&index=4
• https://www.youtube.com/watch?v=JgrRCiW_OgU&t=4s

  23. A spectrum of opportunities for industry or community engagement

24. Benefits and challenges
Benefits:
• Assessment FOR learning
• Motivating for students
• Develops a wide range of skills
• Interesting and rewarding for staff
• Helps the defence of HE against essay mills
Challenges:
• Risk of a utilitarian narrative
• Time in preparation
• Not all students align with the career area
• Support needs to be aligned
• More challenging in non-vocational courses

25. A note on universality
Good design works for all students, not just subgroups.

26. Your takeaway?

27. Sources of images under Creative Commons
• Man juggling: https://www.publicdomainpictures.net/en/view-image.php?image=232490&picture=man-juggling-balls
• Elephant: http://lh5.ggpht.com/_jhfurX4cT_4/Sjwrot_ia0I/AAAAAAAABNQ/f4cpwWJ7z4c/s1600-h/How-to-eat-elephant%5B3%5D
• Essay mill headlines: https://www.theguardian.com/education/2019/mar/20/essay-mills-prey-on-vulnerable-students-lets-stamp-them-out
Acknowledgements: Helen Pittson, Jane Headley, and the HAU Exemplars Community of Practice

28. References
Arnold, L. (2014) ‘Using technology for student feedback: Lecturer perspectives’. Thesis submitted to the University of Liverpool, October, pp. 1–210.
Bearman, M. et al. (2016) ‘Support for assessment practice: developing the Assessment Design Decisions Framework’, Teaching in Higher Education, 21(5), pp. 545–556. doi: 10.1080/13562517.2016.1160217.
Brown, S. (2014) ‘What are the perceived differences between assessing at Masters level and undergraduate level assessment? Some findings from an NTFS-funded project’, Innovations in Education and Teaching International, 51(3), pp. 265–276. doi: 10.1080/14703297.2013.796713.
Carless, D. and Boud, D. (2018) ‘The development of student feedback literacy: enabling uptake of feedback’, Assessment & Evaluation in Higher Education, 43(8), pp. 1315–1325. doi: 10.1080/02602938.2018.1463354.
Carson, J. T. (2019) ‘Blueprints of distress?: Why quality assurance frameworks and disciplinary education cannot sustain a 21st-century education’, Teaching in Higher Education, pp. 1–10. doi: 10.1080/13562517.2019.1602762.
Chan, Z. and Ho, S. (2019) ‘Good and bad practices in rubrics: the perspectives of students and educators’, Assessment & Evaluation in Higher Education, 44(4), pp. 533–545. doi: 10.1080/02602938.2018.1522528.
Dawson, P. (2017) ‘Assessment rubrics: towards clearer and more replicable design, research and practice’, Assessment & Evaluation in Higher Education, 42(3), pp. 347–360. doi: 10.1080/02602938.2015.1111294.
Evans, C. (2013) ‘Making sense of assessment feedback in higher education’, Review of Educational Research, 83(1), pp. 70–120. doi: 10.3102/0034654312474350.
Norton, L., Floyd, S. and Norton, B. (2019) ‘Lecturers’ views of assessment design, marking and feedback in higher education: a case for professionalisation?’, Assessment & Evaluation in Higher Education, pp. 1–13. doi: 10.1080/02602938.2019.1592110.
O’Donovan, B., Price, M. and Rust, C. (2004) ‘Know what I mean? Enhancing student understanding of assessment standards and criteria’, Teaching in Higher Education, 9(3), pp. 325–335. doi: 10.1080/1356251042000216642.
Reimann, N., Sadler, I. and Sambell, K. (2019) ‘What’s in a word? Practices associated with “feedforward” in higher education’, Assessment & Evaluation in Higher Education. doi: 10.1080/02602938.2019.1600655.
