Evidencing Outcomes
Ruth Mann / George Box
Commissioning Strategies Group, NOMS
February 2014
UNCLASSIFIED
Have you made an impact? • What does your project/service/intervention set out to achieve? • How can you demonstrate that it has achieved this outcome? • In order to demonstrate that your outcomes are valid and that your project/service/intervention has had an impact, you will need to collect evidence
Example • What sort of evidence would help establish whether a drug treatment service had achieved its aim of reducing offending related to drug misuse?
Intermediate outcomes • Fewer positive mandatory drug test results • Greater uptake of methadone replacement treatment • Greater attendance at Narcotics Anonymous meetings • Fewer assaults motivated by drug use • Results from a survey of drug users’ views of their likelihood of misusing drugs before and after use of the service
Longer-term outcomes • Reduction in rates of drug-fuelled offending
What is evidence? • In order to demonstrate impact you can measure baseline indicators at the start of your service/intervention and then compare the results to a re-measurement of the baseline at least once (usually post-intervention) • It is important to identify objective indicators and to have a good understanding, in the planning stages of your intervention, of how you are going to collect the evidence • Evidence can take a range of forms depending on your service and outcome (surveys, observed behavioural change, interview data, numeric data, etc.)
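As a minimal sketch of the baseline/re-measurement idea above (all figures hypothetical), assuming a simple survey score collected before and after the intervention:

```python
# Hypothetical pre/post survey scores for six participants
# (e.g. self-rated confidence in avoiding drug misuse, 1-10 scale).
pre = [4, 5, 3, 6, 4, 5]    # baseline, measured at the start
post = [6, 6, 5, 7, 5, 6]   # re-measurement after the intervention

# Mean change per participant: a simple indicator of movement
# against the baseline (not, on its own, proof of impact).
changes = [after - before for before, after in zip(pre, post)]
mean_change = sum(changes) / len(changes)
print(f"Mean change: {mean_change:.2f}")
```

Note that without a comparison group this only demonstrates change, not impact: the shift could be due to factors other than the intervention.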
How to collect evidence • Evidence gathering needs to be as simple and cost-effective as possible • Ensure that the data you are collecting relates to the outcome you are measuring • Pilot questionnaires and data collection methods. • Keep detailed records of the recording processes as you go along and make changes if needed
Understanding and presenting evidence • The way in which you assess your outcomes will depend on your method of data collection and the types of evidence you are collecting • Judging the level of impact of an intervention can take multiple forms: • Narrative accounts of a service/intervention • Simple numeric data about, for example, the people who accessed the service (descriptives) • Case studies • Evaluations
What is evaluation? • A type of research which aims to identify how well a project/service/intervention achieves its aims • In criminal justice (CJ) evaluations, the aim we are most often interested in is reducing reoffending • Reconviction is the most usual outcome measure • Intermediate outcomes are being established for use as proxy measures for reconviction, as well as legitimate outcomes in their own right
For this grant, we are looking at evidence of impact • Impact Evaluation • An impact evaluation tries to identify whether or not a policy achieved the outcome it set out to achieve. • This requires identifying firstly whether the outcome was actually achieved, and secondly whether the policy in question, rather than one or more other factors, was responsible.
NOMS standards for robust evidence • Randomised controlled trials • Statistical (e.g. meta-analysis) or narrative combination of two or more high-quality individual studies (i.e. studies that involved matched comparison groups – see below) • Evaluation studies that compare the group receiving the intervention or service with a matched comparison group (using propensity score matching or individual matching techniques) of offenders who did not receive the service • Evaluation studies that demonstrate the value of the intervention or service by comparing the actual reconviction rate against the predicted rate produced from a high-quality predictor such as the Offender Group Reconviction Scale (OGRS)
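The predicted-vs-actual design in the last standard above can be sketched as a one-sample test: compare the number of offenders actually reconvicted against the number expected under the predicted rate. A minimal sketch using a normal approximation to the binomial, with all figures hypothetical (a real analysis would use each offender's individual OGRS-predicted probability):

```python
import math

def actual_vs_predicted_z(n, reconvicted, predicted_rate):
    """z-statistic comparing the observed number of reconvictions with
    the number expected under the predicted rate (normal approximation)."""
    expected = n * predicted_rate
    sd = math.sqrt(n * predicted_rate * (1 - predicted_rate))
    return (reconvicted - expected) / sd

# Hypothetical cohort: 200 offenders with a predicted reconviction
# rate of 45%, of whom 70 were actually reconvicted in the follow-up.
z = actual_vs_predicted_z(200, 70, 0.45)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a difference at the 5% level
```

Here the observed rate (35%) falls well below the predicted rate, which is the kind of gap this design is intended to detect.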
What are the ingredients necessary for a robust outcome evaluation? • A matched COMPARISON or CONTROL group • A large enough sample to detect a difference between groups • A well-defined target group • An appropriate follow-up period • Statistical analysis • Peer review
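The "large enough sample" ingredient can be checked at the planning stage with a standard two-proportion power calculation. A minimal sketch, assuming a two-sided 5% significance level and 80% power (the z-values for those conventions are hard-coded as defaults; all rates hypothetical):

```python
import math

def sample_size_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Offenders required per group to detect a change in reconviction
    rate from p1 to p2 (standard two-proportion formula)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Detecting a fall in reconviction from 50% to 40% needs roughly
# 388 offenders in each of the treatment and comparison groups.
print(sample_size_per_group(0.50, 0.40))
```

Smaller expected differences require sharply larger samples, which is why a well-defined target group and realistic expected effect matter before data collection begins.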
For this project we want to see • Systems put in place to gather relevant data systematically • Randomised controlled trials would be ideal but are unlikely to be feasible: you would need to fund them beyond the life of the project (for a 1-year reconviction rate you would need at least 18 months of data after the intervention) • Other methods for collecting impact data, such as using the Ministry of Justice Data Lab • Predicted vs. actual reconviction rate designs may also be possible • Impact on intermediate outcomes related to reoffending
Get some advice • You might want to involve experts in evaluation in your project when bidding, to design and carry out an evaluation • Consider using the MoJ Data Lab • Advice on evaluations can be found on the MoJ website and gov.uk, and includes: the Cabinet Office Behavioural Insights Team’s publication “Test, Learn, Adapt”; the HM Treasury Magenta Book; the New Philanthropy Capital (NPC) and Clinks joint project ‘Improving Your Evidence’; and the Project Oracle website • Plan your evaluation before you start the project so you know you are collecting the right data from the start • Make sure you have a well-thought-through theory of change
Example Intermediate Outcomes
Final hints • The most robust evaluations of impact require the collection of data for statistical analysis • Case studies and process evaluations can provide interesting and useful results, but may be limited in how widely their results can be generalised • Just knowing how many in your cohort reoffended does not tell you what the impact of your service was (i.e. the overall reoffending rate for “all prisons” or “all offenders”, or even for the prison/Trust concerned, is not an appropriate comparison)