What Works? Evaluating the Impact of Active Labor Market Policies
May 2010, Budapest, Hungary
Joost de Laat (PhD), Economist, Human Development
Outline
• Why Evidence-Based Decision Making?
• Active Labor Market Policies: Summary of Findings
• Where Is the Evidence?
• The Challenge of Evaluating Program Impact
• Ex Ante and Ex Post Evaluation
Why Evidence-Based Decision Making?
• Limited resources to address needs
• Multiple policy options to address needs
• Rigorous evidence often lacking to prioritize policy options and program elements
Active Labor Market Policies: Getting the Unemployed into Jobs
• Improve matching of workers and jobs: assist in job search
• Improve quality of labor supply: business training, vocational training
• Provide direct labor incentives: job creation schemes such as public works
International Evidence on the Effectiveness of ALMPs
• "Active Labor Market Policy Evaluations: A Meta-Analysis" by David Card, Jochen Kluve, and Andrea Weber (2009): review of 97 studies between 1995-2007
• "The Effectiveness of European Active Labor Market Policy" by Jochen Kluve (2006): review of 73 studies between 2002-2005
Do ALMPs Help the Unemployed Find Work? (Card et al. 2009; Kluve 2006)
• Subsidized public sector employment: relatively ineffective
• Job search assistance (often the least expensive): generally favorable, especially in the short run; combined with sanctions (e.g. the UK "New Deal") it looks promising
• Classroom and on-the-job training: not especially favorable in the short run; more positive impacts after 2 years
Do ALMPs Help the Unemployed Find Work? (Card et al. 2009; Kluve 2006)
• ALMPs targeted at youth: findings are mixed
The Impact Evaluation Challenge
• Impact is the difference in outcomes with and without the program for those beneficiaries who participate in the program
• Problem: beneficiaries have only one existence; they either participate in the program or they do not.
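The counterfactual problem on this slide is often written in potential-outcomes notation; the formula below is an added illustration and does not appear in the original slides:

\[
\text{Impact on participants} \;=\; E\big[\, Y_i(1) - Y_i(0) \,\big|\, D_i = 1 \,\big]
\]

where \(Y_i(1)\) is beneficiary \(i\)'s outcome with the program, \(Y_i(0)\) the outcome without it, and \(D_i = 1\) indicates participation. For participants only \(Y_i(1)\) is ever observed, so the counterfactual term \(E[\,Y_i(0) \mid D_i = 1\,]\) must be supplied by a comparison group.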
Impact Evaluation Challenge: Is a Before-After Comparison OK?
Skills training example: income for a beneficiary increases from $1,000 before training to $2,000 after training. Is the program impact $1,000 of extra income?
No! A Before-After Comparison Is Often Incorrect
Without the skills training, income for the same person would have increased from $1,000 to $1,500 because of an improving economy. The program impact is therefore $2,000 - $1,500 = $500.
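A minimal numerical sketch of this point, using the illustrative dollar amounts from the slides (the variable names are mine, not from the presentation):

# Before-after comparison vs. the counterfactual, using the slides' illustrative numbers.
income_before = 1000             # beneficiary's income before the skills training
income_after = 2000              # beneficiary's income after the skills training
income_without_training = 1500   # what income would have been without training (improving economy)

naive_before_after = income_after - income_before        # 1000: overstates the impact
true_impact = income_after - income_without_training     # 500: impact relative to the counterfactual
bias = naive_before_after - true_impact                   # 500: driven by the improving economy
print(naive_before_after, true_impact, bias)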
Impact Evaluation Challenge
• Solution: a proper comparison group
• The comparison group's outcomes must be identical to what the treatment group's outcomes would have been had it not participated in the program.
Impact Evaluation Approaches
Ex ante:
1. Randomized evaluations
2. Double-difference (DD) methods
Ex post:
3. Propensity score matching (PSM)
4. Regression discontinuity (RD) design
5. Instrumental variable (IV) methods
Random Assignment: Skills Training Example
After training, income in the treatment group is $2,000 and income in the (randomly assigned) comparison group is $1,500.
Program impact = $2,000 - $1,500 = $500
Randomized Assignment Ensures a Proper Comparison Group
• Ensures the treatment and comparison groups are the same at the start of the program (backgrounds and outcomes)
• Any differences that arise after the program must be due to the program and not to selection bias
• The "gold standard" for evaluations, but not always feasible
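A minimal simulation sketch of how random assignment lets a simple difference in group means recover the program impact. All numbers, names, and the assumed true effect are illustrative and not from the presentation:

import random

random.seed(0)
TRUE_IMPACT = 500  # assumed true effect of the skills training, for illustration only

# Simulate a pool of unemployed people with different baseline earning potential.
people = [1000 + random.gauss(0, 200) for _ in range(10_000)]

# Randomly assign half to the training (treatment) and half to the comparison group.
random.shuffle(people)
treatment, comparison = people[:5_000], people[5_000:]

# Everyone's income rises by 500 because of the improving economy;
# the treatment group additionally gains the training impact.
treatment_outcomes = [y + 500 + TRUE_IMPACT for y in treatment]
comparison_outcomes = [y + 500 for y in comparison]

# With randomization, the difference in mean outcomes estimates the impact.
estimate = (sum(treatment_outcomes) / len(treatment_outcomes)
            - sum(comparison_outcomes) / len(comparison_outcomes))
print(f"Estimated impact: {estimate:.0f} (true impact: {TRUE_IMPACT})")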
Examples of Randomized ALMP Evaluations
• Improve matching of workers and jobs: counseling the unemployed in France
• Improve quality of labor supply: providing vocationally focused training for disadvantaged youth in the USA (Job Corps)
• Provide direct labor demand / supply incentives: the Canadian Self-Sufficiency Project
Challenges to Randomized Designs
• Cost
• Ethical concerns: withholding a potentially beneficial program may be unethical
• Ethical concerns must be balanced against:
  • programs cannot reach all beneficiaries (and randomization may be fairest)
  • knowing the program impact may have large potential benefits for society …
Societal Benefits
• Rigorous findings lead to scale-up:
  • Various US ALMP programs – funding by the US Congress contingent on positive impact evaluation findings
  • Oportunidades (PROGRESA) – Mexico
  • Primary school deworming – Kenya
  • Balsakhi remedial education – India
Ongoing (Randomized) Impact Evaluations: From MIT Poverty Action Lab Website (2009)
World Bank’s Development Impact Evaluation Initiative (DIME)
12 Impact Evaluation Clusters:
• Conditional Cash Transfers
• Early Childhood Development
• Education Service Delivery
• HIV/AIDS Treatment and Prevention
• Local Development
• Malaria Control
• Pay-for-Performance in Health
• Rural Roads
• Rural Electrification
• Urban Upgrading
• ALMP and Youth Employment
Other Evaluation Approaches
Ex ante:
1. Randomized evaluations
2. Double-difference (DD) methods
Ex post:
3. Propensity score matching (PSM)
4. Regression discontinuity (RD) design
5. Instrumental variable (IV) methods
Non-Randomized Impact Evaluations ("Quasi-Experimental Methods")
• The comparison group is constructed by the evaluator
• Challenge: the evaluator can never be sure whether the behaviour of the comparison group mimics what the treatment group would have done without the program: selection bias
Example: Suppose Only the Very Motivated Under-Employed Seek Extra Skills Training
• Data on (very motivated) under-employed individuals who participated in the skills training
• Construct a comparison group from (less motivated) under-employed individuals who did not participate in the skills training
• DD method: the evaluator compares the increase in average incomes between the two groups
Double-Difference (DD) Method
[Figure: income over time for the treatment and comparison groups; without randomization the estimated program impact includes a positive bias.]
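A minimal sketch of the DD calculation described above, with illustrative before/after average incomes (the figures and variable names are assumptions, not taken from the presentation):

# Double-difference (DD) estimate: change in the treatment group minus change in the comparison group.
# All income figures below are illustrative assumptions.
treat_before, treat_after = 1000, 2000   # average income of (very motivated) participants
comp_before,  comp_after  = 1000, 1400   # average income of (less motivated) non-participants

dd_estimate = (treat_after - treat_before) - (comp_after - comp_before)  # 1000 - 400 = 600
print(f"DD estimate of program impact: {dd_estimate}")

# If the motivated participants' incomes would have grown faster anyway
# (say to 1500 rather than 1400 without training), this DD estimate of 600
# overstates the true impact of 500: a positive selection bias.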
Non-Experimental Designs
• May provide an unbiased estimate of impact
• Rely on assumptions about the comparison group
• These assumptions are usually impossible to verify
• Bias is generally smaller when the evaluator has detailed background variables (covariates)
Assessing the Validity of Non-Randomized Impact Evaluations
• Verify that pre-program characteristics are the same in the treatment and comparison groups
• Test the 'impact' of the program on an outcome variable that should not be affected by the program (a placebo test)
• Note: both checks will hold by design in properly designed randomized evaluations
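A minimal sketch of the first check above, a balance test on a pre-program characteristic, using a simple two-sample comparison; the data and variable names are illustrative assumptions:

import math
import statistics

# Illustrative pre-program ages in the treatment and comparison groups (assumed data).
treat_age = [24, 27, 31, 29, 35, 22, 30, 28, 26, 33]
comp_age  = [25, 26, 30, 28, 36, 23, 31, 27, 25, 34]

# Balance check: difference in means relative to its standard error.
diff = statistics.mean(treat_age) - statistics.mean(comp_age)
se = math.sqrt(statistics.variance(treat_age) / len(treat_age)
               + statistics.variance(comp_age) / len(comp_age))
t_stat = diff / se

# A |t| well below ~2 suggests the groups look similar on this characteristic;
# a large |t| signals imbalance and possible selection bias.
print(f"Difference in mean age: {diff:.2f}, t-statistic: {t_stat:.2f}")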
Conclusion
• Everything else equal, experimental designs are preferred; assess case by case
• Most appropriate when:
  • the program is new and in its pilot phase
  • the program is past the pilot phase but receives large amounts of resources and its impact is questioned
• Non-experimental evaluations are often cheaper, but interpreting their results requires more scrutiny
THANK YOU!
Impact Evaluation Resources
• World Bank (2010), "Handbook on Impact Evaluation" by Khandker et al.
• www.worldbank.org/sief
• www.worldbank.org/dime
• www.worldbank.org/impactevaluation
• www.worldbank.org/eca/impactevaluation (last site coming soon)
• http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evaluation_en.htm
• www.povertyactionlab.org
• http://evidencebasedprograms.org/