Evidence-based policy and practice: A popular myth?
Martyn Hammersley, The Open University, UK
Course on ‘The controversial discourse of evidence-based policy and practice in education’, University of Münster, July 2015
My position
What is good about the notion of research-based practice?
• Its emphasis on the need for reliable knowledge, and its challenge to the assumption that we can immediately see what should be done.
What is wrong with it?
• It exaggerates the reliability of research knowledge, and its role in policy and practice;
• It exaggerates the closeness of the link between the availability of reliable evidence and beneficial practical or policy outcomes.
A theme with a long history
• A caution against extreme positions, and against the tendency to counterpose them to one another.
• My views are not new: you will find much of what I have to say already present in the huge literature on the relationship between research and policy/practice, a literature prompted by recurrent crises in this relationship.
• See, for example, Nisbet and Broadfoot 1980 and Anderson and Biddle 1991.
Evidence-based Medicine
The evidence-based medicine movement began in the 1980s, championed particularly by clinical epidemiologists (Pope 2003). In its most radical and newsworthy form it treated randomised controlled trials (RCTs), or the synthesis of their findings in systematic reviews, as the only effective way of determining ‘what works’. Clinicians were required to access research evidence of this kind, and to use only those treatments that had been scientifically validated.
The spread of ‘evidence-based practice’
• The idea of evidence-based practice came to be supported by health service managers and by government policymakers, and was extended to new areas, including education.
• One reason for this was that it fitted with the ‘new public management’ that became influential in the 1990s, and that continues to shape government today, with its concern to make public sector professionals more ‘transparently’ accountable (Pollitt 1990; Lynn 2006; Head 2008).
The classical model of evidence-based practice
• The task of research is to produce evidence that shows which policies and practices are effective (‘what works’), and which are not.
• Policies and practices should be based on this research evidence.
• Practitioners should follow what the evidence recommends, and not use unproven methods.
Rubbishing the rivals
• ‘Tradition, Prejudice, Dogma, Ideology’ (Caroline Cox, as cited by David Hargreaves 1996)
• ‘Fad’ and ‘Fashion’ (Robert Slavin 2002)
• ‘Theory’ (Iain Chalmers 2005)
Crisis in education research (UK)
• Recurrent attacks on academic educational research, and moves to transform it into policy- or practice-focused inquiry.
• Charles Clarke, then Parliamentary Under-Secretary of State, declared that the aim was to ‘resurrect educational research in order to raise standards’ (Clarke 1998, my emphasis).
• Chris Woodhead (1998), then Chief Inspector of Schools, announced that ‘considerable sums of public money are being pumped into research of dubious quality and little value’.
Myth of research-based practice
The myth: that research can tell us the best way to teach, or to do anything else. Three reasons why this is misleading:
a) Research cannot validate value conclusions: the ambiguity of ‘what works’.
b) It can only provide limited and fallible evidence about the effects of particular policies or practices.
c) Research evidence must always be combined with local knowledge in professional judgments about what is best in particular contexts.
Liberalisation
• The arguments for evidence-based practice are sometimes presented in more qualified terms, as regards both what is to count as evidence and what role research is to play in policymaking and practice.
• Some of this liberalisation has been reflected in a switch in terminology from ‘evidence-based’ to ‘evidence-informed’.
• However, while abandoning the classical model is sensible, doing so robs the position of much of its distinctiveness; and little attention has been given to the complexities this introduces.
Two sides of the issue
1. Assumptions about, and implications for, research. Classical model: only RCTs and systematic reviews are worthwhile, or at least they are the gold standard.
2. Assumptions about, and implications for, policymaking and practice. Classical model: a rational process of decision-making in which the prime considerations are technical ones about effectiveness.
The classical model: what counts as evidence?
Systematic reviews of randomised controlled trials (RCTs). So, what is ruled out is:
• Evidence from a single RCT.
• Evidence from other sorts of research.
• Evidence from practical experience, including ‘trial and error’.
• Personal judgment (as against applying standardised procedures that have been tested).
The weaknesses of RCTs
There are issues about both internal validity (Worrall 2007) and external validity (Cartwright 2007). In the drug field, RCTs are used as a complement to laboratory work, which will have produced a considerable body of knowledge about the drug. In education, however, they are usually treated as providing the whole basis for causal claims. While they may be able to demonstrate an empirical pattern, in themselves RCTs cannot tell us about the causal mechanisms involved, or the conditions under which these operate.
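To make the mechanism point concrete, here is a minimal, hypothetical simulation (Python with NumPy assumed; all figures invented, not drawn from any real trial). A randomised trial recovers an unbiased average effect, but that average says nothing about for whom, or under what conditions, the intervention works:

```python
# Illustrative sketch only: an RCT estimates the *average* treatment effect,
# which can conceal that the intervention benefits only one subgroup.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 10_000

# Hypothetical moderator: the intervention only 'works' for half the pupils
# (e.g. those whose teachers implement it as intended).
responsive = rng.random(n) < 0.5

# Random assignment to treatment or control.
treated = rng.random(n) < 0.5

# Outcome: baseline noise, plus a +5 point gain only for responsive pupils.
true_effect = np.where(responsive, 5.0, 0.0)
outcome = rng.normal(50, 10, n) + treated * true_effect

# The RCT estimate: difference in mean outcomes between arms.
ate = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated average effect: {ate:.2f}")  # ~2.5, the true average

# The average conceals the heterogeneity:
for label, mask in [("responsive", responsive), ("unresponsive", ~responsive)]:
    sub = outcome[mask & treated].mean() - outcome[mask & ~treated].mean()
    print(f"  Effect for {label} pupils: {sub:.2f}")
```

The arithmetic is trivial, but it mirrors the external-validity worry the slide cites from Cartwright (2007): that an intervention ‘worked there’, on average, does not establish that it will work here.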
Further weaknesses of RCTs
• There are major practical problems in enforcing control over variables (Gueron 2002), not least as regards blinding.
• In the context of educational research, there are severe measurement problems; treatments can rarely be standardised, either in the research itself or beyond; outcomes may be short-term or delayed in onset; and ‘in the wild’ there will be considerable contingency in the outcomes produced (Hammersley 2002, 2005, and 2013).
The weaknesses of systematic reviews
• The ideal of exhaustiveness can lead to excessive time being given to searching for material, as against reading it carefully and deciding which contributions are more and less important.
• The commitment to synthesis tends to be interpreted in an aggregative rather than an integrative fashion (see the sketch below).
• There is an ambivalence about whether systematic reviews are literature reviews or a form of meta-analysis.
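As an illustration of what an ‘aggregative’ reading of synthesis amounts to, the toy sketch below uses hypothetical effect sizes and textbook fixed-effect, inverse-variance pooling (not any particular review's method). It reduces three studies to a single summary number, discarding exactly the contextual differences an integrative reading would try to preserve:

```python
# Toy fixed-effect meta-analysis: pool effect sizes by inverse-variance
# weighting. The result is a single, precise-looking summary statistic.
import math

# (effect size, standard error) for three imaginary studies
studies = [(0.40, 0.10), (0.10, 0.05), (0.60, 0.20)]

weights = [1 / se ** 2 for _, se in studies]      # inverse-variance weights
pooled = sum(d * w for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
# The pooling step has discarded everything about *why* the three estimates
# differ (populations, settings, implementations) -- the integrative question.
```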
A technical view of policymaking and practice
• The image is of policymakers and practitioners being faced with problems, and needing to know what options there are for dealing with them and which work best.
• But educational policymakers and practitioners are not usually faced with standard, well-defined problems, nor can what they do be standardised even to the same extent as, for example, much medical treatment.
Back to Aristotle! Phronesis
What is required is wise judgment, both in assessing evidence and in weighing it against other considerations, in order to make decisions. Also essential is attention to the implementation process, with a willingness to reconsider and redesign a policy or a line of practice to take account of problems encountered. Both policymaking and practice are processes, not one-shot decisions.
The ‘ugly’ face of politics
• Politicians often give superficial attention to evidence, and misrepresent and misuse it.
• But, in part, this simply reflects the nature of their task: in a democratic system, some policies, however good, cannot win support because of the likely reaction of the media, of opposition parties, and of powerful interest groups. There are also practical considerations relevant to policymaking that research evidence does not cover.
• Over and above this, we should not assume that policymaking is always rational or benign.
The three-way character of the relationship
• Research as a tool that policymakers use to control practitioners?
• The connection between the notion of evidence-based practice and the ‘new public management’.
• The attack on professionalism.
The ‘classical model’ and the distortions of debate
• Classical and liberalised versions of EBPP, and oscillation between the two.
• There is a danger of stereotyping and exaggeration.
• This has occurred on both sides: the critics of EBPP are quite varied, but there is a tendency to dismiss them as naïve, Luddite, or postmodernist.
Summary
What is wrong with the notion of research-based policymaking and practice?
• It exaggerates the reliability of research knowledge, and its role in policy and practice;
• It exaggerates the closeness of the link between the availability of reliable evidence and beneficial practical or policy outcomes.
What is good about it?
• At the very least it prompts thought about important issues concerning the validity and practical relevance of research findings.
Final thoughts
• If we are interested in promoting good policy and practice, we need to consider what this involves, rather than simply assuming that what is required is that it be research-based.
• If we want educational research to flourish, we should not begin from the assumption that its primary aim is telling policymakers and practitioners ‘what works’. It can and does have other aims.
• One effect of the crisis in educational research caused by the rise of the EBPP movement has been to distort the focus of funded research.
References
Anderson, D. and Biddle, B. (eds) (1991) Knowledge for Policy: Improving education through research, London, Falmer Press.
Cartwright, N. (2007) ‘Are RCTs the gold standard?’, BioSocieties, 2, 1, pp. 11-20. (Available at: http://www.lse.ac.uk/CPNSS/research/concludedResearchProjects/ContingencyDissentInScience/DP/Cartwright.pdf)
Cartwright, N. and Hardie, J. (2012) Evidence-Based Policy, Oxford, Oxford University Press.
Chalmers, I. (2003) ‘Trying to do more good than harm in policy and practice’, Annals of the American Academy of Political and Social Science, 589, pp. 22-40.
Clarke, C. (1998) ‘Resurrecting research to raise standards’, Social Sciences: news from the ESRC, Issue 40, October, p. 2.
Dunne, J. (1997) Back to the Rough Ground: practical judgment and the lure of technique, Notre Dame IN, University of Notre Dame Press.
Gueron, J. (2002) ‘The politics of random assignment’, in Mosteller, F. and Boruch, R. F. (eds) Evidence Matters: Randomized trials in education research, Washington DC, Brookings Institution Press.
Hammersley, M. (2002) Educational Research, Policymaking and Practice, London, Paul Chapman/Sage.
Hammersley, M. (2005) ‘Is the evidence-based practice movement doing more good than harm? Reflections on Iain Chalmers’ case for research-based policymaking and practice’, Evidence and Policy, 1, 1, pp. 1-16.
Hammersley, M. (ed.) (2007) Educational Research and Evidence-based Practice, London, Sage.
Hammersley, M. (2013) The Myth of Research-Based Policy and Practice, London, Sage.
References contd.
Hargreaves, D. H. (1996) Teaching as a Research-Based Profession: Possibilities and prospects (Annual Lecture), London, Teacher Training Agency. [Reprinted, along with some responses, in Hammersley ed. 2007.]
Haynes, L., Service, O., Goldacre, B. and Torgerson, D. (2012) Test, Learn, Adapt: Developing public policy with randomised controlled trials, London, Behavioural Insights Team, Cabinet Office, UK Government.
Head, B. (2008) ‘Three lenses of evidence-based policy’, Australian Journal of Public Administration, 67, 1, pp. 1-11.
Lynn, L. (2006) Public Management: Old and New, London, Routledge.
Nisbet, J. and Broadfoot, P. (1980) The Impact of Research on Policy and Practice in Education, Aberdeen, Aberdeen University Press.
Oakley, A. (2000) Experiments in Knowing, Bristol, Polity.
Pollitt, C. (1990) Managerialism and the Public Services, Oxford, Blackwell.
Pope, C. (2003) ‘Resisting evidence: evidence-based medicine as a contemporary social movement’, Health: An Interdisciplinary Journal, 7, 3, pp. 267-282.
Torgerson, C. (2014) ‘What works…and who listens? Encouraging the experimental evidence base in education and the social sciences’, Inaugural lecture, Durham University.
Torgerson, D. and Torgerson, C. (2008) Designing Randomised Trials in Health, Education, and the Social Sciences, Basingstoke, Palgrave Macmillan.
Woodhead, C. (1998) Preface to Tooley, J., Educational Research: A critique, London, Ofsted.
Worrall, J. (2002) ‘What evidence in evidence-based medicine?’, Philosophy of Science, 69, pp. S316-S330.
Worrall, J. (2007) ‘Why there’s no cause to randomize’, British Journal for the Philosophy of Science, 58, 3, pp. 451-488. (Earlier version available at: http://www.lse.ac.uk/CPNSS/pdf/DP_withCover_Causality/CTR%2024-04-C.pdf)