
Some lessons from researching practical science summative assessment Ros Roberts


Presentation Transcript


1. Some lessons from researching practical science summative assessment
• Ros Roberts
• STEM 2010: Beyond the Classroom
• Cambridge Botanical Gardens, 17th May 2010

2. Starting points: 1
• Summative assessment structure should encourage fieldwork in teaching, including:
• Observations – ‘seeing the world through a biologist’s / scientist’s eyes’
• Solving problems / investigations
• Assessment structures should not distort practice in teaching and learning (cf. the ‘routinisation’ of a very limited range of Sc1 coursework)

3. Starting points: 2
• Assessment in a high-stakes summative assessment system must be reliable
• There are tensions between the reliability and the validity of performance assessment

4. Assessing performance
• Assessing ‘the ability to do an investigation out of the classroom’
• Via written accounts
• Up to 10 investigations are needed to get a reliable aggregate mark: valid and reliable, but impractical (see the worked example below)
• Reliability is therefore increased, at the cost of validity, by using fewer assessments and:
• restricting the type of investigation (which can then be practised/routinised)
• not really assessing ‘doing’, but giving credit mainly for the substantive ideas (predictions, hypothesising, background, etc.) which are also given credit in other parts of the exam system
• [Research suggests that the ‘ability to investigate’ is not directly correlated with other traditional measures of ability.]
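The figure of roughly 10 investigations follows from how the reliability of an aggregate mark grows with the number of tasks. As an illustrative sketch only (the formula is standard psychometrics, but the single-task reliability of 0.3 is an assumed value, not one from the talk), the Spearman-Brown prophecy formula gives:

    % Spearman-Brown prophecy formula: reliability of an aggregate of k
    % comparable tasks, given single-task reliability rho_1.
    \[ \rho_k = \frac{k\,\rho_1}{1 + (k - 1)\,\rho_1} \]
    % With an assumed single-task reliability rho_1 = 0.3 and k = 10 tasks:
    \[ \rho_{10} = \frac{10 \times 0.3}{1 + 9 \times 0.3} = \frac{3.0}{3.7} \approx 0.81 \]

On those assumptions, around ten independent investigations are needed before the aggregate mark reaches the reliability of roughly 0.8 conventionally expected of high-stakes assessment.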

5. Alternatives?
• Assessing ‘the ability to do an investigation out of the classroom’
• This can’t be done in a mass assessment system
• So, what’s the next best thing?
• We could focus on the ideas employed when investigating

6. Ideas in investigations, e.g. factors affecting shrimp distribution

7. What is to be assessed?
• Substantive ecological / biological ideas
• Concepts of evidence (the thinking behind the doing)
• Observation
• Skills – both written and practical

8. Domain specification
• i.e. what to teach and what to assess
• Clear articulation is essential for curriculum and assessment
• e.g. the Domain Specification for Procedural Understanding
• Concepts of Evidence – ‘the thinking behind the doing’
• These are the ideas that are needed to develop a procedural understanding
• Validated against ‘pure’ and ‘applied’ workplace science
• A sub-set is required for GCSE, to enable pupils to solve practical problems and evaluate others’ evidence
• Validated by teachers attending our development sessions
• http://www.dur.ac.uk/rosalyn.roberts/Evidence/cofev.htm

9. Starting points: 1 (revisited)
• Summative assessment structure should encourage fieldwork in teaching, including:
• Observations – ‘seeing the world through a biologist’s / scientist’s eyes’
• Solving problems / investigations
• Assessment structures should not distort practice in teaching and learning (cf. the ‘routinisation’ of a very limited range of Sc1 coursework)

10. Bring your own data:
• What did you investigate?
• Pick one example from your investigation where you were careful to make your design valid. Describe what you did, and explain how and why it helped to make the design valid.
• Pick one example from your investigation where you made a series of repeat measurements in order to take an average. Copy the series down. Explain how you decided that you had done enough measurements to believe that the average was reliable. (One possible stopping rule is sketched after this list.)
• How did you present your data so that patterns could be easily identified? Explain why you chose this method.
• What pattern did you observe in the data you generated? Did all the data fit the pattern? If not, do you have any idea why?
• How did the evidence you obtained from the investigation relate to the original questions you were attempting to answer?
• How much do you trust the data you obtained? Explain your answer.
• Justify your choice of instruments in the investigation.
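One common heuristic for deciding that enough repeat measurements have been taken is to keep measuring until the standard error of the mean is small relative to the mean itself. The sketch below is illustrative only, not a method from the talk: the function name enough_repeats, the 5% tolerance, and the shrimp-count data are all assumptions.

    import statistics

    def enough_repeats(readings, rel_tol=0.05, min_n=3):
        """Return True once the standard error of the mean is within
        rel_tol of the mean itself (an assumed 5% tolerance here).
        One illustrative stopping rule, not the talk's method."""
        n = len(readings)
        if n < min_n:
            return False  # too few readings to judge the spread at all
        mean = statistics.mean(readings)
        sem = statistics.stdev(readings) / n ** 0.5  # standard error of the mean
        return sem <= rel_tol * abs(mean)

    # Hypothetical shrimp counts from repeat kick-samples at one site:
    counts = []
    for reading in [12, 15, 11, 14, 13, 12, 14]:
        counts.append(reading)
        if enough_repeats(counts):
            print(f"Stopped after {len(counts)} repeats; "
                  f"mean = {statistics.mean(counts):.1f}")
            break

Under these assumptions the loop stops once the running average is stable enough to report, which is exactly the judgement the question above asks pupils to articulate.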

  11. e.g. Questions

12. Possible response

13. Summary
• Valid and reliable summative assessment in a mass system is not possible
• Assessment should encourage fieldwork teaching and not distort practice
• Assessing ‘ideas’ that have been taught (including with fieldwork) will be more reliable and, if they are specified well in the curriculum and assessment, will also have validity
