Overview of three policy-focused procedures for determining an accountability test’s instructional sensitivity, presented at AERA. Preliminary evidence on the spread of achievement within age cohorts and the role of curriculum exposure, followed by recommendations for standardized tests and accountability measures.
Three practical, policy-focused procedures for determining an accountability test’s instructional sensitivity
III: An index of sensitivity to instruction
Symposium at the 2007 meeting of the American Educational Research Association, Chicago, IL
Dylan Wiliam, Institute of Education, University of London
Preliminary evidence
• 6099 + 1 = ? (Foxman et al., 1980)
  • Correctly answered by some 7-year-olds
  • Incorrectly answered by some 14-year-olds
• The “seven year gap” (Cockcroft, 1981)
• Progression in measuring (Simon et al., 1995)
• Spread of achievement in an age cohort apparently much greater than generally assumed
860 + 570 = ?
Source: Leverhulme Numeracy Research Programme
Examples: Sequential Tests of Educational Progress (ETS, 1957), TIMSS, NAEP
Insensitivity to instruction
• Artifact or reality?
• Influenced by test construction procedures
• Influenced by approaches to curriculum
• Dimensions of progression
  • Reasoning power
  • Curriculum exposure
  • Maturity
A very simple model
• Achievement age is normally distributed about chronological age, with a standard deviation proportional to the chronological age
• Constant of proportionality varies from around one-sixth to one-half, depending on the kind of curriculum and assessment
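Written out formally (the notation here is my own gloss on the model as stated above, not taken from the slides), the model says that for a student of chronological age c, achievement age A is

\[
A \sim \mathcal{N}\!\left(c,\;(kc)^{2}\right), \qquad \tfrac{1}{6} \lesssim k \lesssim \tfrac{1}{2}.
\]

For example, at age 11 with k = 1/6 the standard deviation is about 1.8 years, so roughly 95% of the cohort spans about seven years of achievement age, consistent with the “seven year gap”.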
Standardized tests: SD = 1/4 of age, SD = 1/5 of age, SD = 1/10 of age
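As a rough illustration of what these constants of proportionality imply (a minimal sketch added here, assuming the normal model above; the age-11 cohort is an arbitrary example), the following Python snippet computes the range of achievement ages covering the middle 90% of a year group:

# Illustrative sketch only (not from the slides): implied spread of achievement
# ages when the standard deviation is a fixed fraction k of chronological age.
from scipy.stats import norm

chronological_age = 11  # years; an arbitrary example cohort
for k in (1/10, 1/5, 1/4):  # the constants shown on the slide
    sd = k * chronological_age
    # achievement ages covering the middle 90% of the cohort (5th to 95th percentile)
    low, high = norm.ppf([0.05, 0.95], loc=chronological_age, scale=sd)
    print(f"k = {k:.2f}: SD = {sd:.2f} years, middle 90% spans {low:.1f} to {high:.1f} years")

Even the narrowest spread shown (SD = 1/10 of age) puts the middle 90% of an age-11 cohort across more than three years of achievement age; at SD = 1/4 of age the span is roughly nine years.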
Two modest proposals
• For accountability tests, design tests that maximize the difference between students who have, and have not, received adequate instruction (one simple way of quantifying this is sketched below)
• Mount a public education campaign to communicate the idea that learning is relatively insensitive to the quality of instruction
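For illustration only (this is a generic item-sensitivity measure of my own, not necessarily the index proposed in this symposium), one way to quantify how well an item separates instructed from uninstructed students is the difference in proportion correct between the two groups:

# Hypothetical illustration: an item-level measure of sensitivity to instruction,
# computed as p(correct | adequately instructed) - p(correct | not instructed).
# A generic index for illustration, not necessarily the one proposed in the talk.
def item_sensitivity(correct_instructed, n_instructed,
                     correct_uninstructed, n_uninstructed):
    """Return the difference in proportion correct between instructed and
    uninstructed students; values near 1 mean the item is highly sensitive to
    instruction, values near 0 mean it barely responds to instruction."""
    return (correct_instructed / n_instructed) - (correct_uninstructed / n_uninstructed)

# Example: 80 of 100 instructed students correct vs 30 of 100 uninstructed
print(item_sensitivity(80, 100, 30, 100))  # 0.5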