What NAPLAN and MySchool don’t address (but could, and should) A presentation to the AEU, AGPPA, ASPA National Symposium, 23 July 2010, Sydney – James Ladwig
Intro: just one point • If we are serious about improving student academic learning outcomes, we must start addressing within-school curriculum differentiation (understanding pedagogy and assessment as enacted curriculum)
What we know about the sources of student outcomes: what schools can affect • Multi-level analyses have shown us that the variance in student outcomes does not match the ‘school difference’ focus of current policy directions. If we use only three levels, for example: • Differences between schools account for roughly 10-15% of student achievement variance • Differences within schools account for roughly 40-45% of student achievement variance • Differences between individual students account for roughly 40-50% of student achievement variance • Please note estimates vary from study to study and model to model – and just because variance appears at one level doesn’t mean that’s where it originates (e.g. individual differences between kids at the start of school are not JUST about the individual kid). A simulation sketch of this decomposition follows.
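Below is a minimal simulation sketch (not from the talk) of this three-level decomposition in Python; the sample sizes and variance shares are illustrative assumptions, not SIPA or NAPLAN figures.

```python
import numpy as np

# Toy generative model: student score = school effect + class effect
# + individual effect, with true variance shares of ~15% / 40% / 45%.
rng = np.random.default_rng(0)
n_schools, n_classes, n_students = 200, 20, 30
school_sd, class_sd, student_sd = np.sqrt([0.15, 0.40, 0.45])

school_eff = rng.normal(0.0, school_sd, n_schools)
class_eff = rng.normal(0.0, class_sd, (n_schools, n_classes))
student_eff = rng.normal(0.0, student_sd, (n_schools, n_classes, n_students))
scores = school_eff[:, None, None] + class_eff[:, :, None] + student_eff

# Law-of-total-variance split on the sample (exact for equal-sized cells).
# Sampling noise in the group means bleeds upward, so the higher-level
# shares print slightly above their true values.
total = scores.var()
between_schools = scores.mean(axis=(1, 2)).var()
between_classes = scores.mean(axis=2).var() - between_schools
between_students = total - between_schools - between_classes

for label, v in [("between schools", between_schools),
                 ("within schools (between classes)", between_classes),
                 ("between students", between_students)]:
    print(f"{label}: {v / total:.0%}")
```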
What we know about the sources of student outcomes: what schools can affect (continued) • In simple terms, the things schools can most directly leverage are those sources of student achievement most closely connected to students’ actual learning: the ‘enacted curriculum’ (the learning experiences of kids in the classrooms, halls, playgrounds and excursions of schooling). • As an aside: we also know that any serious, systematic attempt to improve these things has cascading effects beyond schooling – if we ever get serious about this, we will have to dramatically change current and future teaching, teachers’ career structures and working conditions, university budgets, and government priorities relating to social support, job creation, health, science and research funding, etc.
Current conditions of significant consequence for the problem • With the ‘devolution’ policies of the 80s and 90s, most (all?) systems of schooling in Australia ditched their main means of monitoring the quality of classroom practice: the inspectorate • Since the 90s (since Metherell in NSW) there has been no systemic alternative for monitoring and improving classroom practice • In most of the state and territory systems we have only recently developed the testing and data systems needed to do full-population monitoring of achievement at all
Current policy landscape • We have inherited a very limited interpretation of the knowledge about within-school variance: a near-singular focus on teachers as the ostensible source of that variance • And we have a recent policy interest in doing something about the social-exclusion effects of economic inequities and the history of racial disparity – which, in education, has been addressed at a school level (between-school difference)
PISA 2006 School Residuals – means and 95% confidence intervals – science
To underline what this tells us • Differences between schools (in terms of student outcomes) are small compared to differences within schools • It is very sketchy to make comparisons except at ‘the extremes’ of the range • There is a LOT of variance within schools. For this one unweighted PISA outcome (a computation sketch follows): • Between schools: 19% of the overall variance • Within schools: 81% of the overall variance
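As a hedged illustration of where such a split comes from, here is a one-way (ANOVA-style) decomposition sketch; the DataFrame and column names are hypothetical stand-ins for the unweighted PISA 2006 unit-record file, with synthetic data in place of the real scores.

```python
import numpy as np
import pandas as pd

def variance_shares(df: pd.DataFrame, group: str, outcome: str) -> dict:
    """Raw one-way split of total variance into between- and within-group parts."""
    group_means = df.groupby(group)[outcome].transform("mean")
    grand_mean = df[outcome].mean()
    ss_between = ((group_means - grand_mean) ** 2).sum()
    ss_within = ((df[outcome] - group_means) ** 2).sum()
    total = ss_between + ss_within
    return {"between": ss_between / total, "within": ss_within / total}

# Synthetic stand-in: 300 schools x 100 students, true split 19% / 81%.
rng = np.random.default_rng(1)
school_eff = rng.normal(0, np.sqrt(0.19), 300)
pisa = pd.DataFrame({
    "school": np.repeat(np.arange(300), 100),
    "science": school_eff.repeat(100) + rng.normal(0, np.sqrt(0.81), 30000),
})

# Prints roughly {'between': 0.20, 'within': 0.80}; the raw split slightly
# inflates 'between' because school means carry some student-level noise.
print(variance_shares(pisa, "school", "science"))
```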
Within-school variance of the enacted curriculum: in-class pedagogy – SIPA observational data (means, 95% CIs)
Variance of Pedagogy • From SIPA observations • Using all three dimensions of the QT pedagogy model combined (QT Total) • 322 teachers in 35 schools • Variance between schools = 14% • Variance between teachers = 86% (a model-fitting sketch follows below) • Of the school-level variance: • ≈ 1% is due to socio-economic composition • ≈ 4% is due to the percentage of ATSI students in the student population
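One standard way to estimate components like these is a random-intercept model (teachers nested in schools). The sketch below uses statsmodels on a synthetic stand-in for the SIPA file; the data, column names, and modelling approach are assumptions for illustration, not necessarily what the SIPA analyses used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the SIPA file: 322 teachers in 35 schools, with
# QT Total scores built to put ~14% of the variance between schools.
rng = np.random.default_rng(2)
school = rng.integers(0, 35, 322)
qt_total = rng.normal(0, np.sqrt(0.14), 35)[school] + rng.normal(0, np.sqrt(0.86), 322)
sipa = pd.DataFrame({"school": school, "qt_total": qt_total})

# Random-intercept model: teachers nested in schools.
result = smf.mixedlm("qt_total ~ 1", data=sipa, groups=sipa["school"]).fit()

school_var = float(result.cov_re.iloc[0, 0])  # between-school variance
teacher_var = result.scale                    # between-teacher (residual) variance
icc = school_var / (school_var + teacher_var)
print(f"between schools ≈ {icc:.0%}, between teachers ≈ {1 - icc:.0%}")
```

Adding school-level covariates (e.g. a socio-economic index, or percentage of ATSI students) to the formula and comparing the between-school variance before and after is one common way to arrive at figures like the ≈1% and ≈4% above.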
But is this just ‘teacher’ difference? SIPA – 113 class averages
What this scatterplot shows • The quality teaching measure here is a coding of the quality of assessment tasks students completed • There is a very big range of average class prior achievement • Note the empty space in the ‘low prior achievement’ – ‘high quality pedagogy’ quadrant • The link between quality of the enacted curriculum and average class prior achievement is very strong (a sketch of quantifying it follows)
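One hypothetical way to quantify the strength of that link is a simple correlation over the 113 class averages; the data below are synthetic, constructed only to mirror the pattern the slide describes.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-in for the 113 class averages: class prior achievement
# and coded assessment-task quality, built to correlate strongly.
rng = np.random.default_rng(3)
prior = rng.normal(0, 1, 113)
task_quality = 0.7 * prior + rng.normal(0, 0.5, 113)

r, p = pearsonr(prior, task_quality)
print(f"r = {r:.2f}, p = {p:.4f}")
```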
Very Open Question • Clearly there is a large amount of pedagogical variance within schools; however, • Not all of this variance is simply ‘between teachers’ – much of it is between classes, and • We also know there is strong ‘collinearity’ between social background and ‘prior achievement’ (often misrecognised as ‘ability’) • This raises the question: what are the practices of class grouping based on prior achievement? • What is the social distribution within ‘prior achievement’ groups? • How mutable are the groupings? • For how long do students remain in the same groups? • How much do these groups vary from subject to subject?
We have some good indication of how common it is – PISA 2006 school survey results
Very big, very open question • What are the effects of this ‘ability’ grouping (streaming/tracking) in Australia?
Conclusion • Much of the current educational debate and policy focus is not really addressing the main issue: our main levers for improving student outcomes are within-school curriculum and pedagogy differentiation • While a huge amount is known internationally, we know very little about how this differentiation plays out in Australia… lots of opinion and anecdote, very, very little rigorous research • And that is only the beginning – if international experience holds here, changing this differentiation is a much bigger challenge than measuring students and schools • So… while we debate current policies, let us not lose sight of the main game.