
Integrated Assessment Strategies for Effective Education

An overview of the six degrees of integration in educational assessment: function (formative versus summative), quality (validity and reliability), format (multiple-choice versus constructed response), scope (continuous versus one-off), authority (teacher-produced versus expert-produced), and locus (school-based versus externally marked). The presentation also covers classification consistency, modern conceptions of validity, and why reliability is best understood as one component of validity.


Presentation Transcript


  1. Six degrees of integration: an agenda for joined-up assessment • Dylan Wiliam • www.dylanwiliam.net • Annual Conference of the Chartered Institute of Educational Assessors, London: 23 April 2008

  2. Overview • Six degrees of integration • Function • Formative versus summative • Quality • Validity versus reliability • Format • Multiple-choice versus constructed response • Scope • Continuous versus one-off • Authority • Teacher-produced versus expert-produced • Locus • School-based versus externally marked

  3. Function • Quality • Format • Scope • Authority • Locus

  4. A statement of the blindingly obvious • You can’t work out how good something is until you know what it’s intended to do… • Function, then quality

  5. Formative and summative • Descriptions of • Instruments • Purposes • Functions • An assessment functions formatively when evidence about student achievement elicited by the assessment is interpreted and used to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions that would have been made in the absence of that evidence.

  6. Gresham’s law and assessment • Usually (incorrectly) stated as “Bad money drives out good” • “The essential condition for Gresham's Law to operate is that there must be two (or more) kinds of money which are of equivalent value for some purposes and of different value for others” (Mundell, 1998) • The parallel for assessment: Summative drives out formative • The most that summative assessment (more properly, assessment designed to serve a summative function) can do is keep out of the way

  7. Function • Quality • Format • Scope • Authority • Locus

  8. Reliability • Reliability is a measure of the stability of assessment outcomes under changes in things that (we think) shouldn’t make a difference, such as • marker/rater • occasion • item selection
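
Classical reliability coefficients formalise this stability. As a concrete illustration (not part of the talk), here is a minimal sketch of Cronbach's alpha, the most common internal-consistency estimate; the toy data are invented:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability for a students-by-items score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy data: 5 students x 4 dichotomously scored items
data = np.array([[1, 1, 1, 0],
                 [1, 0, 1, 1],
                 [0, 0, 1, 0],
                 [1, 1, 1, 1],
                 [0, 0, 0, 0]])
print(round(cronbach_alpha(data), 2))  # ~0.79 for this toy matrix
```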

  9. Test length and reliability • Just about the only way to increase the reliability of a test is to make it longer, or narrower (which amounts to the same thing). [Diagram of the assessed domain narrowing not reproduced in the transcript]
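
The standard way to quantify the effect of test length is the Spearman-Brown prophecy formula, which predicts reliability when a test is lengthened by a factor n: rho' = n*rho / (1 + (n - 1)*rho). A quick sketch with illustrative numbers:

```python
def spearman_brown(reliability: float, n: float) -> float:
    """Predicted reliability of a test lengthened by a factor of n."""
    return n * reliability / (1 + (n - 1) * reliability)

# Doubling a test with reliability 0.75 (illustrative values):
print(round(spearman_brown(0.75, 2), 2))    # 0.86
# Halving it instead:
print(round(spearman_brown(0.75, 0.5), 2))  # 0.60
```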

  10. Reliability is not what we really want • Take a test which is known to have a reliability of around 0.90 for a particular group of students. • Administer the test to the group of students and score it • Give each student a random script rather than their own • Record the scores assigned to each student • What is the reliability of the scores assigned in this way? • 0.10 • 0.30 • 0.50 • 0.70 • 0.90
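
On the classical definition, the keyed answer is presumably 0.90: coefficients such as Cronbach's alpha are computed from the score matrix alone and do not depend on which student each script belongs to. A simulation sketch (invented data) makes the point:

```python
import numpy as np

def alpha(scores):
    """Cronbach's alpha for a students-by-items score matrix."""
    k = scores.shape[1]
    return (k / (k - 1)) * (1 - scores.var(axis=0, ddof=1).sum()
                            / scores.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
ability = rng.normal(size=(300, 1))
items = (ability + rng.normal(scale=1.4, size=(300, 40)) > 0).astype(float)

before = alpha(items)
# Hand every student a random script: permute whole rows
after = alpha(items[rng.permutation(len(items))])
print(round(before, 2), round(after, 2))  # identical: alpha ignores ownership
```

The coefficient is unchanged even though every individual result is now meaningless, which is exactly why reliability is not what we really want.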

  11. Reliability v consistency • Classical measures of reliability • are meaningful only for groups • are designed for continuous measures • Marks versus grades • Scores suffer from spurious precision • Grades suffer from spurious accuracy • Classification consistency • A more technically appropriate measure of the reliability of assessment • Closer to the intuitive meaning of reliability
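
Classification consistency can be estimated directly: give simulated students two parallel forms and count how often they land in the same grade band. A sketch with invented figures, assuming a score scale with sd 10 and grade boundaries every 10 marks:

```python
import numpy as np

rng = np.random.default_rng(2)
true_scores = rng.normal(50, 10, size=100_000)

def observed(true, reliability, true_sd=10):
    """One administration of a test with the given score reliability."""
    error_sd = true_sd * np.sqrt((1 - reliability) / reliability)
    return true + rng.normal(0, error_sd, size=true.shape)

def grade(scores, boundaries=(40, 50, 60, 70)):
    """Assign each score to a grade band."""
    return np.digitize(scores, boundaries)

form_a = grade(observed(true_scores, 0.90))
form_b = grade(observed(true_scores, 0.90))
print(round((form_a == form_b).mean(), 2))  # noticeably below 0.90
```

Even with a score reliability of 0.90, a substantial minority of students are graded differently on the two forms, because students near a boundary flip bands easily.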

  12. Reliability & classification consistency • Classification consistency of National Curriculum Assessment in England

  13. Validity • Traditional definition: a property of assessments • A test is valid to the extent that it assesses what it purports to assess • Key properties (content validity) • Relevance • Representativeness • Fallacies • Two tests with the same name assess the same thing • Two tests with different names assess different things • A test valid for one group is valid for all groups

  14. Trinitarian doctrines of validity • Content validity • Criterion-related validity • Concurrent validity • Predictive validity • Construct validity

  15. Validity • Validity is a property of inferences, not of assessments • “One validates, not a test, but an interpretation of data arising from a specified procedure” (Cronbach, 1971; emphasis in original) • The phrase “A valid test” is therefore a category error (like “A happy rock”) • No such thing as a valid (or indeed invalid) assessment • No such thing as a biased assessment • Reliability is a pre-requisite for validity • Talking about “reliability and validity” is like talking about “swallows and birds” • Validity includes reliability

  16. Modern conceptions of validity “Validity is an integrative evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment” (Messick, 1989, p. 13) • Validity subsumes all aspects of assessment quality • Reliability • Representativeness (content coverage) • Relevance • Predictiveness • But not impact (Popham: right concern, wrong concept)

  17. Consequential validity? No such thing! • As has been stressed several times already, it is not that adverse social consequences of test use render the use invalid, but, rather, that adverse social consequences should not be attributable to any source of test invalidity such as construct-irrelevant variance. If the adverse social consequences are empirically traceable to sources of test invalidity, then the validity of the test use is jeopardized. If the social consequences cannot be so traced—or if the validation process can discount sources of test invalidity as the likely determinants, or at least render them less plausible—then the validity of the test use is not overturned. Adverse social consequences associated with valid test interpretation and use may implicate the attributes validly assessed, to be sure, as they function under the existing social conditions of the applied setting, but they are not in themselves indicative of invalidity. (Messick, 1989, pp. 88-89)

  18. Threats to validity • Inadequate reliability • Construct-irrelevant variance • Differences in scores are caused, in part, by differences not relevant to the construct of interest • The assessment assesses things it shouldn’t • The assessment is “too big” • Construct under-representation • Differences in the construct are not reflected in scores • The assessment doesn’t assess things it should • The assessment is “too small” • With clear construct definition all of these are technical—not value—issues • But they interact strongly…
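
Both threats can be made concrete with a toy simulation (all numbers invented): model mathematics as two strands, then build one test that is "too big" (it also measures reading) and one that is "too small" (it samples only one strand).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
algebra, geometry = rng.normal(size=n), rng.normal(size=n)
reading = rng.normal(size=n)               # irrelevant to the construct

maths = (algebra + geometry) / np.sqrt(2)  # the construct of interest

wordy = maths + 0.5 * reading  # construct-irrelevant variance ("too big")
narrow = algebra               # construct under-representation ("too small")

for name, test in [("wordy", wordy), ("narrow", narrow)]:
    print(name, round(float(np.corrcoef(test, maths)[0, 1]), 2))
# wordy  ~0.89: reading ability leaks into the scores
# narrow ~0.71: geometry never affects the scores at all
```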

  19. School effectiveness • Do differences in exam results support inferences about school quality? • Key issues: • Value-added • Sensitivity to instruction • Learning is slower than generally assumed • Tests’ sensitivity to instruction is further reduced by standard test-construction procedures • Result: invalid attributions about the effects of schooling

  20. Learning is hard and slow… • Example item: 860 + 570 = ? • [Accompanying data chart not reproduced in the transcript] • Source: Leverhulme Numeracy Research Programme

  21. Why does this matter? • In England, school-level effects account for only 7% of the variability in GCSE scores. • In terms of value-added, there is no statistically significant difference between the middle 80% of English secondary schools • The correlation between teacher quality and student progress is low: • Average cohort progress: 0.3 sd per year • Good teachers (+1 sd) produce 0.4 sd per year • Poor teachers (-1 sd) produce 0.2 sd per year
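
The arithmetic behind these bullets is worth making explicit (figures from the slide, nothing new):

```python
import math

school_share = 0.07  # school-level share of GCSE score variance
print(f"between-school sd: {math.sqrt(school_share):.2f} of the total sd")  # 0.26

avg, good, poor = 0.3, 0.4, 0.2  # sd of progress per year of teaching
print(f"teacher-quality gap (+1 sd vs -1 sd): {good - poor:.1f} sd per year")
# Even several consecutive years of this gap stay small next to the
# accumulated spread of achievement across a whole school career.
```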

  22. So… • Although teacher quality is the single most important determinant of student progress… • …the effect is small compared to the accumulated achievement over the course of a learner’s education… • …so that inferences that school outcomes are indications of the contributions made by the school are unlikely to be valid.

  23. Function • Quality • Format • Scope • Authority • Locus

  24. Item formats • “No assessment technique has been rubbished quite like multiple choice, unless it be graphology” (Wood, 1991, p. 32) • Myths about multiple-choice items • They are biased against females • They assess only candidates’ ability to spot or guess • They test only lower-order skills

  25. Comparing like with like… • Constructed-response items • Can be improved through guidance to markers • Can be developed cheaply, but are expensive to score • For a one-hour year-cohort assessment in England • Development: £5 000 • Scoring: £1 000 000 • Multiple-choice items • Cannot be improved through guidance to markers • Are expensive to develop, but cheap to score • For a one-hour year-cohort assessment in England • Development: £1 000 000? • Scoring: £5 000

  26. Mathematics 1 • What is the median for the following data set? • 38 74 22 44 96 22 19 53 • 22 • 38 and 44 • 41 • 46 • 77 • This data set has no median

  27. Mathematics 2 • What can you say about the means of the following two data sets? • Set 1: 10 12 13 15 • Set 2: 10 12 13 15 0 • The two sets have the same mean. • The two sets have different means. • It depends on whether you choose to count the zero.
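
A quick check of the keyed answers to the two items above, assuming the standard definitions of median and mean:

```python
from statistics import mean, median

data = [38, 74, 22, 44, 96, 22, 19, 53]
print(median(data))  # 41.0: the mean of the two middle values, 38 and 44

set_1 = [10, 12, 13, 15]
set_2 = [10, 12, 13, 15, 0]
print(mean(set_1), mean(set_2))  # 12.5 vs 10: the zero counts, so the means differ
```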

  28. Mathematics 3 • Which of the shapes below contains a dotted line that is also a diagonal? [Answer-option shapes not reproduced in the transcript]

  29. Science • The ball sitting on the table is not moving. It is not moving because: • no forces are pushing or pulling on the ball. • gravity is pulling down, but the table is in the way. • the table pushes up with the same force that gravity pulls down • gravity is holding it onto the table. • there is a force inside the ball keeping it from rolling off the table (Wilson & Draney, 2004)

  30. Science 2 • You look outside and notice a very gentle rain. Suddenly, it starts raining harder. What happened? • A cloud bumped into the cloud that was only making a little rain. • A bigger hole opened in the cloud, releasing more rain. • A different cloud, with more rain, moved into the area. • The wind started to push more water out of the clouds.

  31. Science 3 • Jenna put a glass of cold water outside on a warm day. After a while, she could see small droplets on the outside of the glass. Why was this? • The air molecules around the glass condensed to form droplets of liquid • The water vapor in the air near the cold glass condensed to form droplets of liquid water • Water soaked through invisible holes in the glass to form droplets of water on the outside of the glass • The cold glass causes oxygen in the air to become water

  32. Science 4 • How could you increase the temperature of boiling water? • Add more heat. • Stir it constantly. • Add more water. • You can’t increase the temperature of boiling water.

  33. Science 5 • What can we do to preserve the ozone layer? • Reduce the amount of carbon dioxide produced by cars and factories • Reduce the greenhouse effect • Stop cutting down the rainforests • Limit the numbers of cars that can be used when the level of ozone is high • Properly dispose of air-conditioners and fridges

  34. English • Where would be the best place to begin a new paragraph? No rules are carved in stone dictating how long a paragraph should be. However, for argumentative essays, a good rule of thumb is that, if your paragraph is shorter than five or six good, substantial sentences, then you should reexamine it to make sure that you've developed the ideas fully. A Do not look at that rule of thumb, however, as hard and fast. It is simply a general guideline that may not fit some paragraphs. B A paragraph should be long enough to do justice to the main idea of the paragraph. Sometimes a paragraph may be short; sometimes it will be long.  C On the other hand, if your paragraph runs on to a page or longer, you should probably reexamine its coherence to make sure that you are sticking to only one main topic. Perhaps you can find subtopics that merit their own paragraphs. D Think more about the unity, coherence, and development of a paragraph than the basic length. E If you are worried that a paragraph is too short, then it probably lacks sufficient development. If you are worried that a paragraph is too long, then you may have rambled on to topics other than the one stated in your topic sentence.

  35. English 2 • In a piece of persuasive writing, which of these would be the best thesis statement? • The typical TV show has 9 violent incidents • There is a lot of violence on TV • The amount of violence on TV should be reduced • Some programs are more violent than others • Violence is included in programs to boost ratings • Violence on TV is interesting • I don’t like the violence on TV • The essay I am going to write is about violence on TV

  36. History • Why are historians concerned with bias when analyzing sources? • People can never be trusted to tell the truth • People deliberately leave out important details • People are only able to provide meaningful information if they experienced an event firsthand • People interpret the same event in different ways, according to their experience • People are unaware of the motivations for their actions • People get confused about sequences of events

  37. Function • Quality • Format • Scope • Authority • Locus

  38. The Lake Wobegon effect revisited • “All the women are strong, all the men are good-looking, and all the children are above average.” Garrison Keillor

  39. Effects of narrow assessment • Incentives to teach to the test • Focus on some subjects at the expense of others • Focus on some aspects of a subject at the expense of others • Focus on some students at the expense of others (“bubble” students) • Consequences • Learning that is • Narrow • Shallow • Transient

  40. Function • Quality • Format • Scope • Authority • Locus

  41. Authority • Reliability requires random sampling from the domain of interest • Increasing reliability requires increasing the size of the sample • Using teacher assessment in certification is attractive: • Increases reliability (increased test time) • Increases validity (addresses aspects of construct under-representation) • But problematic • Lack of trust (“Fox guarding the hen house”) • Problems of biased inferences (construct-irrelevant variance) • Can introduce new kinds of construct under-representation

  42. Function • Quality • Format • Scope • Authority • Locus

  43. Locus • Using external markers to mark student assessments involves spending more money in order to deny teachers professional learning opportunities • Getting teachers involved in “common assessment” • Is not assessment for learning, nor formative assessment • But it is valuable, perhaps even essential, professional development

  44. Final reflections

  45. The challenge • To design an assessment system that is: • Distributed • So that evidence collection is not undertaken entirely at the end • Synoptic • So that learning has to accumulate • Extensive • So that all important aspects are covered (breadth and depth) • Manageable • So that costs are proportionate to benefits • Trusted • So that stakeholders have faith in the outcomes

  46. Constraints and affordances • Beliefs about what constitutes learning; • Beliefs in the reliability and validity of the results of various tools; • A preference for and trust in numerical data, with bias towards a single number; • Trust in the judgments and integrity of the teaching profession; • Belief in the value of competition between students; • Belief in the value of competition between schools; • Belief that test results measure school effectiveness; • Fear of national economic decline and education’s role in this; • Belief that the key to schools’ effectiveness is strong top-down management;

  47. The minimal take-aways… • No such thing as a summative assessment • No such thing as a reliable test • No such thing as a valid test • No such thing as a biased test • “Validity including reliability”
