Using randomised control trials to evaluate public policy – Presentation to DIISRTE/DEEWR workshop, January 31
Jeff Borland
Department of Economics, University of Melbourne
1. Outline
• Why do RCTs?
• Case studies of RCTs I am involved with (jointly with Yi-Ping Tseng at the Melbourne Institute)
• Criteria for determining feasibility and value
• Designing and conducting an RCT
2. Why do RCTs?
• A sure way to solve the 'evaluation problem': randomisation creates a control group that can be regarded as identical to the treatment group except for being affected by the policy intervention (see the sketch below).
• Flexibility: Can test exactly the policy you want to evaluate. Can test a 'whole' policy, or its component parts. Can test the effect of a policy, or the causal mechanism that is believed to underlie that effect.
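The 'evaluation problem' point can be made concrete with a small simulation. This is a minimal sketch, not part of the presentation: the population, outcome variable and effect size are hypothetical, chosen only to show that under random assignment a simple difference in mean outcomes recovers the average treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of eligible jobseekers (values are illustrative only):
# each person has a baseline probability of employment, and the intervention is
# assumed to raise that probability by 5 percentage points.
n = 10_000
baseline = rng.uniform(0.2, 0.5, n)
true_effect = 0.05

# Random assignment: treatment and control are identical in expectation,
# so any systematic difference in outcomes can be attributed to the policy.
treated = rng.random(n) < 0.5
employed = rng.random(n) < baseline + treated * true_effect

# A simple difference in mean outcomes estimates the average treatment effect.
estimate = employed[treated].mean() - employed[~treated].mean()
print(f"Estimated effect: {estimate:.3f} (true effect: {true_effect})")
```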
3. Case study 1: YP4 – Case management for young homeless jobseekers
a. Main features:
• Intervention: Assignment of a case manager to help tailor and coordinate available services to reflect the specific circumstances of young homeless jobseekers – for 18 to 30 months.
• Partners: Project undertaken at the initiative of and managed by Hanover Welfare and 3 other not-for-profit partners, each responsible for a geographic location (Cheltenham, Frankston, Bendigo and Inner Melbourne).
• Eligibility: Required to be aged 18 to 35 years, in receipt of Newstart Allowance or Youth Allowance (other), homeless or with a history of homelessness, and 'disadvantaged', as evidenced by eligibility for the Personal Support Program (PSP), the Job Placement, Employment and Training (JPET) program or Intensive Support-Customised Assistance (ISCA).
• Timing: Recruitment took place over the period January to December 2005, all case management services ceased in June 2007, and final data collection was completed in early 2009.
• Size of trial: Target of 240 treatment and 280 control participants – ultimately 189 treatment and 166 control participants.
• Outcomes: Income support recipiency; DEEWR program expenditure; employment status; housing status; self-rated health and well-being; participation in community activities. [Uses both administrative data and own-survey data.] Measured 1, 2 and 3 years after commencement.
b. Main findings
• Little evidence of an effect of YP4 on outcomes (even when we seek to assess the effect of length of treatment) => 'You get what you pay for'.
c. Lessons we learned:
• Need to ask: Is the intervention worth studying?
• The importance of 'pre-testing' eligibility criteria
• The difficulty of ensuring randomisation happens
• The importance of collecting data along the way
4. Case study 2: EYEP – The Early Years Education Program
a. Main features:
• Intervention: Children receive 5 days per week of high-quality education and care totalling at least 25 hours – for 3 years. Key features: high staff/child ratios, qualified staff, a rigorously developed curriculum, use of relationship-based pedagogy, and a focus on alliance with parents.
• Partner: Project undertaken at the initiative of and managed by the Children's Protection Society.
• Eligibility: Children must be aged from 0 to 3 years and assessed as having two or more risk factors in the Department of Human Services (DHS) Best Interest Practice Guidelines (e.g., having teenage parents, parental substance abuse, and the presence of family violence).
• Timing: Recruitment commenced in 2011 and is to be completed by the end of 2013. Data collection will be complete by the end of 2016.
• Size of trial: Target of 45 treatment and 45 control participants.
• Outcomes: Data collected on children include measures of physical and mental health, child development, language development and service usage, via standardized assessments, parent and childcare educator questionnaires, and observation and interviews. Measured 1, 2 and 3 years after commencement. Use of data from LSAC provides an extra control group.
b. Lessons we have learned:
• Need for 'champion(s)' within the organisation who have authority
• Role of a research committee
• The importance of a pilot phase
• One model for ensuring randomisation happens
• Implementing the trial via a dedicated high-quality researcher who is independent of provision of the program
• Scope for partner selection bias
5. Criteria for determining feasibility and value
• Is the intervention worth studying? (Cost-benefit of doing the trial versus the gain to society from better policy-making. Some factors to consider: size and scope of the intervention; what is known already; what can be learned using alternative approaches.)
• Is it ethical?
• Is it possible to implement an RCT? (Can think creatively: early partial roll-out; differences in dosage between regions/population groups.)
6. Designing and conducting an RCT
• A big message: Need to think about the right approach to evaluation on a case-by-case basis
• Another big message: Worry about design and implementation. Get the management right.
(i) Starting off:
• Put together a research committee
• Define the policy you are interested in testing and its expected benefits
• Understand the theory and relevant existing research
(ii) Getting into the details of design:
• Define the intervention(s) – What happens to the treatment group? What happens to the control group? Ways of dealing with substitution bias?
• Defining outcome measures
• Defining eligibility (What will be the external validity?)
• Efficacy versus effectiveness (e.g., partner selection bias)
• Choosing a process for randomisation
• Deciding on the size of the trial (both this and the previous point are illustrated in the sketch after this list)
• How will data on outcomes be collected?
• How to minimise drop-out?
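As an illustration of the last two design choices above, the sketch below pairs a simple shuffled assignment with a textbook two-proportion sample-size formula. It is not drawn from either case study: the participant list, outcome rates, power and significance level are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# --- A simple randomisation process: shuffle eligible participants, split in half ---
# 'participants' is a stand-in for whatever ID list a real trial would use.
participants = [f"P{i:03d}" for i in range(100)]
shuffled = rng.permutation(participants)
treatment_group, control_group = shuffled[:50], shuffled[50:]

# --- Rough sample-size calculation (two-sided test, normal approximation) ---
# Suppose we want to detect a rise in an employment rate from 30% to 40%
# with 80% power at a 5% significance level.
p0, p1, alpha, power = 0.30, 0.40, 0.05, 0.80
p_bar = (p0 + p1) / 2
z_alpha, z_beta = norm.ppf(1 - alpha / 2), norm.ppf(power)
n_per_arm = (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p0) ** 2
print(f"Participants needed per arm: {int(np.ceil(n_per_arm))}")  # ~358 per arm
```

In practice a blocked or stratified randomisation (for example, within each partner site) is often used instead of a single shuffle, but the logic is the same; the power calculation simply makes explicit how large a trial must be to detect an effect of a given size.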
(iii) Implementation:
• Doing a pilot
• Create a culture of 'doing it right' (e.g., commitment of partner organisations; getting the researcher(s) who will implement the trial)
• Monitoring implementation of the intervention
(iv) Reporting on the trial:
• Protocol for reporting on the trial