
Impact monitoring



Presentation Transcript


  1. Impact monitoring Evaluation of the effectiveness of Work and Income employment assistance

  2. Introduction Internal evaluation role Impact evaluation • the evaluation question • impact evaluation: the counterfactual approach Impact monitoring • advantages over impact evaluation • why is it possible today: falling cost of data Example: Work and Income employment assistance • propensity matching: workhorse impact method • application of impact monitoring: Training Opportunities Summary Questions

  3. Internal evaluation • Organisations usually have a range of functions and interventions • Internal evaluation is about building an evidence base on how best an organisation can deliver effective services • Emphasis is not on one intervention, but on all interventions delivered by the organisation • Additional challenges: • continuous process of reform and adjustment • operating within a changing policy and social context • often with very short development cycles • Goal of alignment: need to accelerate evidence generation and slow down the development cycle

  4. Counterfactual approach to Impact Evaluation

  5. Evaluation is a question What works, for whom and why? • How does it work in practice? What is the intervention logic? Does practice vary? • Who participates and who is affected? • What are the causal links? Are the causal links supported by the evidence? • What impact does it have on outcomes (immediate, long term and unintended)? Do impacts vary across participants? • Revise the intervention logic to be consistent with the evidence: does the new logic achieve the original intervention goal?

  6. Impact evaluation framework Pawson and Tilley (1997) Context-Mechanism-Outcome • Mechanisms are the actions of organisations designed to change outcomes for the better • regulations and taxation • programmes and services • social marketing • Outcomes tell you about the state of the world and how it changes over time • in employment or not • educational achievement • sense of wellbeing • Impact is about understanding how the mechanism influences outcomes within its context • theory of change • testing the theory against evidence • Context is everything else that exists in the space that the mechanism operates within • social, cultural, community • physical space • legislative/policy settings • delivery organisation

  7. Counterfactual designs • Counterfactual: the outcomes that would have occurred in the absence of the intervention • Impact: the intervention's contribution to the outcome, i.e. the gap between the observed outcome and the counterfactual outcome [Chart: observed and counterfactual outcome paths diverging after the intervention; the gap is the impact]
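
To pin the definition down, a minimal numeric sketch (figures invented for illustration, not taken from the evaluation):

```python
# Impact is the observed outcome minus the counterfactual outcome.
observed = 0.46        # e.g. share of participants off benefit at month 24
counterfactual = 0.41  # estimated share had they not participated
impact = observed - counterfactual
print(round(impact, 2))  # 0.05 -> the intervention's contribution
```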

  8. Counterfactual designs • Counterfactual designs contrast two CMO scenarios: one with the mechanism being evaluated and one without • The reality is that either the counterfactual involves an alternative intervention or the absence of the intervention changes the context • For this reason, understanding what happens in the counterfactual CMO is as important as understanding the intervention CMO [Diagram: intervention mechanism vs counterfactual alternative mechanism, each leading to outcomes]

  9. The counterfactual black box • Counterfactual designs give evidence for causal links • But they do not explain them • The counterfactual establishes that the intervention increased the outcome: we know it works • But it cannot distinguish between equivalent causal explanations: we do not know why it works (the black box)

  10. Unpacking the black box • There are many ways to unpack the black box • The counterfactual approach is to examine intermediate outcomes: the counterfactual establishes causal relationships between the intervention and each intermediate outcome • This reduces the range of alternative causal explanations, although causal black boxes remain

  11. Context matters • Context is often forgotten when using impact evidence • but causal mechanisms are contextually based • The trick is: • knowing if context has changed • working out how this changes the causal mechanism • Counterfactual evidence is most often presented without context • Why: • practitioners of counterfactual methods are often themselves removed from the context • counterfactual evidence can easily be abstracted from its context

  12. Comparable results • Independent evaluations are difficult to use when comparing the impact of interventions • Impact evidence will depend on the design, the outcome measures used, and what the counterfactual represents • Cannot be certain whether differences in intervention impacts stem from: • real differences in causal mechanisms • differences in impact method

  13. The development of Impact monitoring

  14. Impact monitoring • Impact evaluation needs to be: • Robust: decision makers have confidence about the difference interventions make • Impact monitoring has the additional features of being: • Consistent: enable direct comparison of intervention impacts • Comprehensive: cover the bulk of interventions • Up to date: relevant to current decisions • These additional features also require impact monitoring to be: • Efficient: at a low per intervention cost

  15. Impact monitoring as a solution • Impact monitoring has the potential to address many of the challenges of impact evaluation

  16. Making impact monitoring possible • Impact monitoring is becoming feasible because of: • machine readable administrative information • increasing computing power • falling cost of data storage • administrative data linking • The main implications of these changes are: • a lower cost of measuring outcomes • outcomes measured in the same way across populations • an increased range of outcomes • rich profile information on individuals

  17. Linked data • Linked data means we can look at many dimensions of a person’s life. • Cross agency linked data is of particular value. [Diagram: linked client records across life stages (0-5, 6-17, 18-24, 25-40), spanning child protection (findings, care), education (ECE, school, polytechnic), justice (youth justice, prison), welfare (UB, OCB, child, S2W) and tax (PAYE)]
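
Since the slide only gestures at how linking works, here is a minimal sketch of a cross-agency join, assuming pandas and hypothetical welfare and education extracts keyed on a shared client id (all names illustrative):

```python
import pandas as pd

# Hypothetical agency extracts sharing a common client id.
welfare = pd.DataFrame({
    "client_id": [1, 2, 3],
    "months_on_benefit": [14, 3, 27],
})
education = pd.DataFrame({
    "client_id": [1, 2, 4],
    "highest_qualification": ["NQF 1", "NQF 3", "NQF 2"],
})

# A left join keeps every welfare client and adds education outcomes
# where a linked record exists; client 3 has no education match.
linked = welfare.merge(education, on="client_id", how="left")
print(linked)
```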

  18. Impact monitoring of Employment Assistance

  19. MSD impact monitoring • MSD has been impact monitoring Work and Income assistance for the last 10 years • Work and Income employment assistance: • training, job search, wage subsidy programmes • Outcomes: • off benefit, tertiary study, part time work, subsequent assistance, Work and Income expenditure • Next (IDI outcomes): earnings and employment, educational achievement, justice, migration • Duration: • current longest outcome period is 13 years

  20. Propensity matching • Workhorse method for impact monitoring: • highly automated method • efficient • reduces influence of analyst bias • easy to maintain and store results • independent of post participation outcomes • easy to explain to decision makers • Propensity matching works well with administrative data: • information on large numbers of non-participants • rich profile information (especially prior outcomes) • consistent measurement between participants and non-participants

  21. Propensity matching: Short version • Based on participants’ observed profile (demographics, skills/education, labour market, previous outcomes), propensity matching selects a comparison group with the same average profile • Assumes the profile of unobserved characteristics (motivation, attitude, networks; i.e. those uncorrelated with observed characteristics) will also be the same

  22. Propensity matching: Long version • Propensity score matching was proposed by Rosenbaum & Rubin (1983) • Propensity (P) is the likelihood of participating in an intervention given an observed profile (X): P = f(X), where P lies in (0, 1) • Both participants and non-participants have a propensity to participate • A comparison group matched on propensity score will have the same average profile as the participants • Enforces ‘common support’: • matching only works if there are non-participants with a similar propensity to participate as participants
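
As a concrete illustration of P = f(X), a minimal sketch of estimating propensity scores with logistic regression, assuming scikit-learn and synthetic data (the profile matrix X and participation flag d are placeholders, not MSD's actual covariates):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))        # observed profile: one row per person
d = rng.binomial(1, 0.2, size=1000)   # 1 = participant, 0 = non-participant

# f(X) modelled as a logistic regression of participation on the profile.
model = LogisticRegression(max_iter=1000).fit(X, d)
propensity = model.predict_proba(X)[:, 1]   # P in (0, 1) for every person
```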

  23. Propensity matching: Long version • Differences in the propensity score distributions reflect differences in the observed average profiles of participants and non-participants • Common support problem: can only match where participant and non-participant scores overlap • Matching ensures we compare like groups • preferred over multivariate regression estimates • can be combined with difference-in-differences to further reduce bias in the impact estimate [Chart: propensity score distributions for participants, non-participants and the matched comparison group]
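
To make the mechanics concrete, a minimal sketch of nearest-neighbour matching on the propensity score with a caliper (enforcing common support), combined with a difference-in-differences estimate; it matches with replacement and all names are illustrative, not MSD's implementation:

```python
import numpy as np

def matched_did(propensity, d, y_pre, y_post, caliper=0.05):
    """Impact estimate: change in outcome for participants minus the change
    for their nearest propensity-score neighbours (matched with replacement)."""
    participants = np.where(d == 1)[0]
    pool = np.where(d == 0)[0]
    p_idx, c_idx = [], []
    for i in participants:
        gaps = np.abs(propensity[pool] - propensity[i])
        j = int(np.argmin(gaps))
        if gaps[j] <= caliper:   # off common support -> left unmatched, dropped
            p_idx.append(i)
            c_idx.append(pool[j])
    p_idx, c_idx = np.array(p_idx), np.array(c_idx)
    # Difference-in-differences removes level differences matching missed.
    return (y_post[p_idx] - y_pre[p_idx]).mean() - (y_post[c_idx] - y_pre[c_idx]).mean()
```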

  24. Impact monitoring of Training Opportunities

  25. Training Opportunities • Training Opportunities (TOPs) was a programme to help clients without basic school qualifications gain basic skills and qualifications • qualifications were at NQF 2 and below • free to participate • up to two years in duration, usually around six months • targeted at clients at risk of long term benefit receipt • contracted out to external providers • In 2011, TOPs funding was around $80 million a year • Nominally an MSD programme, but administered by the Tertiary Education Commission (TEC)

  26. TOPs impact • Participants starting between 2000 and 2002, matched at participation start • Outcome: % off main benefit each month from start date • The impact profile is the difference in outcomes between the participant and comparison groups • Large lock-in effect; modest post-participation effect • Impact after 7.5 years: 12.5 days • Advice: modest impact possible in the long term (10+ years) [Chart: % off main benefit by month from start date, participants vs matched comparison]
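
The impact profile on this slide is just a pointwise difference between two outcome curves. A minimal sketch, assuming an off-benefit indicator array aligned to the participation start month (names illustrative):

```python
import numpy as np

def impact_profile(off_benefit, is_participant):
    """off_benefit: (people x months) 0/1 array, column 0 = start month.
    Returns participant rate minus matched-comparison rate per month."""
    part = off_benefit[is_participant].mean(axis=0)    # share off benefit, participants
    comp = off_benefit[~is_participant].mean(axis=0)   # share off benefit, comparison
    return part - comp   # negative early = lock-in; positive later = impact
```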

  27. TOPs impact for 2000 to 2007 participants • Over the 2000s TOPs impact decreased: increased lock-in and no post-participation effect • Why? CMO: a changing context • strong labour demand, especially for unskilled labour • a falling eligible population, increasing the number of low risk participants • The changing context alters impact by: • increasing the opportunity cost of participating • lowering the labour market value of skills gained • Comparing impact profiles for participant cohorts helps provide early predictions of long term impacts [Chart: impact profiles for the 2000-2002, 2004, 2006 and 2007 cohorts]

  28. TOPs ends • In response to the above findings, Ministers decided to split TOPs into two programmes • Foundation Focused Training Opportunities (FFTO) • restricted to high risk clients (based on a statistical risk profiling tool) • no more than six months in duration • foundation skills (literacy and numeracy) • Training for Work (TfW) • no more than three months in duration • medium risk clients • work focused training • Both programmes were introduced in 2011

  29. FFTO and TfW impact in 2012 • Reported on the early impact of TfW and FFTO in 2012 • TfW showed a shorter lock-in effect and a positive post-participation impact • The FFTO impact profile was similar to TOPs • In response to this evidence, Ministers decided to end FFTO in 2013 • funding transferred to the Ministry of Education to fund free education to NQF 2 • MSD no longer monitors the impact of this funding [Chart: impact profiles for TfW, FFTO and TOPs]

  30. Training for Work cohorts • Training for Work impacts continue to improve: a smaller lock-in effect and a positive post-participation effect across the 2011, 2012 and 2013 cohorts • Likely explanations (read: conjecture): • better targeting → fewer low risk participants → lower lock-in • tighter contract performance → higher post-participation impact [Chart: impact profiles for the 2011, 2012 and 2013 cohorts]

  31. Summary • Impact monitoring increases the utility of impact information for decision makers • more likely that we see evidence based decisions • Combined with measures of diverse outcomes, impact monitoring enables more precise testing of intervention logic • can better target qualitative research • This is only possible through investment in data: • developing good electronic administrative systems • avoiding isolated systems (eg common client ids) • proper agency data warehousing • linking and sharing of agency data (eg SNZ IDI)

  32. Any Questions?
