Making Social Work Count Lecture 6 • An ESRC Curriculum Innovation and Researcher Development Initiative
Evaluative Quantitative Research Exploring what works using numbers
Overview • Quantitative research is strongly associated with research into whether services or interventions work • This is because it lends itself to comparison • Before and after • Service A vs Service B etc
Learning outcomes Lecture considers three designs for evaluating what works: • Before and after studies • Quasi-experimental designs • Experimental designs By the end students should have an understanding of: • The rationale • Basics of design and • Potential contribution and limitations of each
Key Question: What Works? Requires specification: • "What" – e.g. the service • "Works" – e.g. the outcome/s • And has inherent claims about causality: A causes B – the "Service" "causes" the "Outcome"
Key issues • Internal validity (validity of causal inferences within the study) • Can we be sure the "what" is causing the "outcome"? • Can we rule out other explanations? • External validity (validity of causal inferences beyond the study) • The degree to which findings from one study are generalisable • In practice there is often a trade-off... • A study that tightly specifies the exact nature of the intervention and focuses on a narrow, specified issue and one outcome is likely to have high internal validity • But will its findings be generalisable to other settings? • And the converse is true…
Before and after studies • Identify a research question – such as whether a particular service works • Measure key outcome/s • Measure them again at a later point
Before and after studies • As for any study, it is worth asking: • What is the sample? How might this affect results? • What are the measures? Are they valid and reliable? • More specifically, a key problem is whether change (or lack of change) can be ascribed to the intervention • This may be a particular issue for social work – often we are not aiming to make something better, but just to prevent things getting worse. No change may be a success!
Example 1: NQSWs • Carpenter et al (2010) looked at newly qualified social workers (NQSWs) on a programme aimed at supporting them as they entered children's services • The scheme included 1,100 NQSWs, though 253 dropped out (mostly because they changed job) • Looking at self-efficacy ratings (n=127): • T1 – 81 (of 110) • T2 – 93 (a statistically significant difference) • Interestingly, at T2 they were asked to review their T1 scores and revised them down considerably. If this revised measure were used, the change would be even greater • What are the strengths and limitations of this finding?
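To make the idea of testing a before-and-after change concrete, here is a minimal sketch in Python (assuming scipy is available) of a paired t-test on repeated measurements. The scores are invented for illustration and are not the Carpenter et al (2010) data; the actual study's measure and analysis may have differed.

```python
# Minimal sketch of a before-and-after analysis.
# The self-efficacy scores below are invented for illustration only;
# they are NOT the Carpenter et al (2010) data.
from scipy import stats

t1_scores = [72, 80, 85, 78, 90, 76, 83, 88, 79, 84]  # hypothetical T1 ratings
t2_scores = [85, 91, 88, 86, 95, 84, 90, 97, 87, 92]  # hypothetical T2 ratings

# Paired (repeated-measures) t-test: is the mean change different from zero?
mean_change = sum(b - a for a, b in zip(t1_scores, t2_scores)) / len(t1_scores)
t_stat, p_value = stats.ttest_rel(t2_scores, t1_scores)
print(f"mean change = {mean_change:.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```

Note that even a statistically significant change in a test like this would not, by itself, show that the programme caused the improvement – which is exactly the limitation the following slides turn to.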
Example 2: Homebuilders • 1970s "Homebuilders" model in the USA: • Crisis intervention model to prevent entry to care • Very intensive input over 4 to 6 weeks for children at "high risk" of care • Range of "interventions" during this time • Early evaluations: 80–97% of children did not enter care following the intervention • Fantastic! We can reduce care, keep families together and … save money! • The Homebuilders model and adaptations were widely replicated – and recommended by the UK govt
Example 2: Homebuilders • But what could be problems with these findings?
Key question • What would have happened without the service being offered? • Families, NQSWs, everyone are actively involved in seeking solutions – they might sort things out anyway • In particular, when things are bad they tend to get better – because that is sometimes the only place to go… • How can we address this in a research design…?
Quasi-experimental methods • Outcomes are measured for the study group and a reasonable/valid comparison group • For instance: • people with a similar issue who did not receive a service • (often) people in a different area without a specific service
Quasi-experimental methods • Advantages: • Allows some insight into what might have happened without a service (i.e. tends to be stronger than before/after studies) • May be more practically or ethically achievable • Disadvantages: • The differences between the groups may explain some or all of the differences in outcomes
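As a hedged illustration of the logic, the sketch below compares change scores for a study group and a non-randomized comparison group using an independent-samples t-test (Python, scipy assumed). All values are invented; a real analysis would also need to examine how comparable the two groups were at baseline.

```python
# Minimal sketch of a quasi-experimental comparison, with invented data.
# Change scores (T2 minus T1) for a group that received the service and a
# comparison group that did not.
from scipy import stats

study_change = [12, 9, 15, 7, 11, 14, 8, 10]  # hypothetical change scores
comparison_change = [5, 3, 8, 2, 6, 4, 7, 5]  # hypothetical change scores

# Did the study group improve more than the comparison group?
t_stat, p_value = stats.ttest_ind(study_change, comparison_change)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Caveat: because allocation was not random, pre-existing differences between
# the groups (motivation, workload, local context) may explain the gap.
```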
Example: NQSWs • Carpenter et al (2010) realised that one of the main challenges for their study was that one would expect a lot of change in the first year of practice anyway • Randomization was not possible for policy reasons • They wished to identify similar comparison LAs not running the NQSW programme, but this was not possible • Eventually they recruited 47 volunteer NQSWs from other LAs
Example: NQSWs • The 47 NQSWs in the contrast group were similar in key respects (e.g. gender, ethnicity) to the study group • They did significantly less well on the outcome measures at the end of year 1 • This strengthens the findings – it suggests the improvement may not just be the natural progress of the first year in practice • BUT – beyond age and gender, what were the motivations of this group of volunteer NQSWs? How might this have influenced the findings?
Experimental approaches • What would be ideal is a method that allowed outcomes for people receiving a service (or taking part in a programme etc) to be compared with outcomes for similar people who did not • Then – if the only difference between the groups was whether they received the service – differences in outcomes would be due to the service • Such an approach is a Randomized Controlled Trial (RCT)
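The core mechanism of an RCT is random allocation. A minimal sketch is shown below (Python, with hypothetical participant IDs): shuffle the sample and split it, so that any baseline differences between the groups arise only by chance. Real trials use more careful procedures (e.g. concealed or stratified allocation); this is only the bare idea.

```python
# Minimal sketch of random allocation to intervention and control groups.
import random

participants = [f"family_{i:02d}" for i in range(1, 21)]  # hypothetical IDs

random.seed(42)  # fixed seed so the example is reproducible
random.shuffle(participants)

half = len(participants) // 2
intervention_group = participants[:half]
control_group = participants[half:]

# With random allocation, the groups should differ only by chance at baseline,
# so later differences in outcomes can more credibly be attributed to the service.
print("Intervention:", intervention_group)
print("Control:", control_group)
```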
Key social work RCTs Cambridge-Somerville Youth Study • USA, 1930s • Intensive, long-term befriending from a social worker for boys considered at risk of offending • Based on the psychodynamic thinking of the time (i.e. a "re-parenting" role for the worker) • Much appreciated by the young men • But it increased offending and reduced employment From Oakley, 2000
Social work and RCTs • The dominant social work approach in the 1960s was open-ended psychodynamic casework • Reid and Shyne carried out an RCT that simply compared open-ended work with couples against work with an 8-week time limit: • Long-term work was LESS effective than short-term work • Parents preferred the short-term work • So did workers • It was better on every outcome • Task-centred approaches were developed on the basis of this work
Homebuilders • The US government carried out the largest ever social work trial – of Intensive Family Preservation Services (based loosely on Homebuilders) • Families were randomized to IFP (756) or Normal Service (535) • Follow-up at 3 and 12 months • Child welfare, family functioning and care entry were compared
Homebuilders • Families liked the service • Other outcomes: • Placement rates of children into care – no differences • Functioning of children and families – no differences • Length and intensity of intervention – no differences
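As an illustration of how a placement-rate comparison between two randomized groups might be tested, the sketch below runs a chi-square test on care-entry counts (Python, scipy assumed). The counts are invented placeholders and are not the trial's results, which, as above, showed no significant differences.

```python
# Minimal sketch of comparing care-entry rates between randomized groups.
# The counts are invented for illustration; they are NOT the trial's results,
# which showed no significant differences between the groups.
from scipy import stats

# Rows: IFP group (n=756), normal-service group (n=535)
# Columns: entered care, did not enter care
table = [[150, 606],
         [106, 429]]

chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```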
Questions • What are the strengths of RCTs? • What are their limitations?
Strengths of RCTs • They provide strong and credible evidence that something has made a difference • They also provide evidence that some things do NOT work – progress is also made by abandoning things that do not work • They rule out other explanations (i.e. they reduce bias in research) • They make the complex real world simple… this is both their strength AND their weakness. It is a strength because if you want to study one thing they allow a focus on that. It is a weakness because they do not capture the complexity of the factors affecting what happens
Weaknesses of RCTs • Just because something works in one place does not mean it will work in other places • It is in fact difficult to be sure that the service being evaluated is what it seems to be • Issue of implementation fidelity (is the service being delivered as it is meant to be? e.g. many of the IFPs ran waiting lists – not part of a crisis intervention model!) • Often many other features of the intervention are adapted, e.g. the choice of workers • Again, their simplicity is thus also a weakness
Debate about RCTs • RCTs are the Marmite of social research • Some see them as the only really rigorous test of effectiveness • Lists of interventions "proven" in RCTs are compiled • Funders and policymakers use these to decide what services to invest in
Debate about RCTs • Others argue that this is overly simplistic • That RCTs do not take enough account of context • That they struggle to produce generalisable knowledge about what works • That they often suggest things do not work
You decide… Ultimately you should understand what an RCT is – and then YOU decide But perhaps it is time we stopped thinking of it like Marmite… perhaps we can have a critical appreciation of the strengths and weaknesses of RCTs as we do with most other methods
Learning outcomes Lecture considers three designs for evaluating what works: • Before and after studies • Quasi-experimental designs • Experimental designs Students should have an understanding of: • The rationale • Basics of design and • Potential contribution and limitations of each
Activity - Part A • Get the students into small groups and ask them to work on an evaluative research question. It is probably best to give them a question; these might be appropriate: • Does training in Solution Focussed practice improve outcomes in working with women experiencing domestic violence? Or • Does a task-centred approach lead to better outcomes for people with dementia? • Each group then brainstorms an outline of a research design. This should include: • Study sample • Outcome measure • Any particular challenges or problems that come up • Each group can then (a) present back their general approach, highlighting any questions that arose, or (b) just present the questions that arose during this process.
Activity - Part B • If more time is available, fashion a question based on one of the articles that accompanies this lecture. Give the students the research question and follow the process as above. • Students can then either (a) be given the research paper reporting the actual study, which they can discuss and critique, or (b) be given access to the paper (either hard copy or online) so that they can read the actual approach taken.