
Impact Evaluation Methods



Presentation Transcript


  1. Impact Evaluation Methods July 9, 2011 Dhaka Sharon Barnhardt, Assistant Professor Institute for Financial Management and Research (IFMR)

  2. Impact evaluation methods Non- or Quasi-Experimental Methods: • Pre-Post • Simple Difference • Differences-in-Differences • Multivariate Regression • Statistical Matching • Interrupted Time Series • Instrumental Variables • Regression Discontinuity

  3. Methods to estimate impacts • Let’s look at different ways of estimating the impacts using the data from the schools that got a balsakhi • Pre – Post (Before vs. After) • Simple difference • Difference-in-difference • Simple or Multivariate regression • Randomized Experiment

  4. 1 - Pre-post (Before vs. After) Average change in the outcome of interest from before to after the programme • Example: • measuring the impact of a state-wide, govt.-run literacy campaign, where establishing a causal link is not feasible because it is difficult to construct a comparison group – everyone in the state is mandated to receive the “treatment” at the same time (Sakshar Bharat Campaign in India) • Issues: • does not take into account the time trend • “response-shift bias” – a change in the participant’s metric for answering questions from the pre-test to the post-test

  5. 1 - Pre-post (Before vs. After) • QUESTION: Under what conditions can this difference (26.42) be interpreted as the impact of the balsakhi program?

  6. Pre-post Method 1: Before vs. After [bar chart: average test scores in 2002 and 2003] Impact = 26.42 points?
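A minimal sketch of the pre-post calculation in Python. The individual test scores below are hypothetical placeholders, chosen only so that the average change reproduces the 26.42-point difference quoted on the slide; only that difference comes from the deck.

```python
# Pre-post (before vs. after) estimator: compare the average outcome
# after the programme with the average outcome before it.
# Scores are invented for illustration; only the resulting 26.42-point
# change matches the slide.

pre_scores = [20.00, 24.80, 29.60]    # hypothetical 2002 scores, balsakhi schools
post_scores = [46.42, 51.22, 56.02]   # hypothetical 2003 scores, same schools

pre_mean = sum(pre_scores) / len(pre_scores)      # 24.80
post_mean = sum(post_scores) / len(post_scores)   # 51.22

# The entire change over time is attributed to the programme,
# which is why ignoring the time trend is a problem for this method.
impact_pre_post = post_mean - pre_mean
print(f"Pre-post estimate: {impact_pre_post:.2f} points")  # 26.42
```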

  7. 2 - Simple difference A post-programme comparison of outcomes between the group that received the programme and a “comparison” group that did not • Example: • the programme is rolled out in phases, leaving a cohort for comparison, even though assignment of treatment is not random • e.g. if Sakshar Bharat were rolled out in only a few districts at a time • Issues: • does not take into account differences that existed before the treatment (selection bias)

  8. 2 - Simple difference Compare test scores of children who got balsakhi with test scores of children who did not get balsakhi

  9. 2 - Simple difference • QUESTION: Under what conditions can this difference (-5.05) be interpreted as the impact of the balsakhi program?

  10. What would have happened without balsakhi? Method 2: Simple Comparison [bar chart: average test scores, 2002 and 2003, balsakhi vs. comparison children] Impact = -5.05 points?
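A minimal sketch of the simple-difference calculation under the same caveat: the scores are hypothetical and are chosen only so that the gap reproduces the -5.05 points shown above.

```python
from statistics import mean

# Simple difference: compare post-programme outcomes of the treated group
# with a comparison group that did not receive the programme.
# Scores are invented; only the -5.05-point gap matches the slide.

treatment_post = [48.20, 51.20, 54.26]   # hypothetical 2003 scores, balsakhi children
comparison_post = [53.30, 56.30, 59.21]  # hypothetical 2003 scores, non-balsakhi children

# Any gap that already existed between the groups before the programme
# (selection bias) is folded into this single number.
impact_simple_diff = mean(treatment_post) - mean(comparison_post)
print(f"Simple-difference estimate: {impact_simple_diff:.2f} points")  # -5.05
```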

  11. 3 – Difference-in-Differences or Double Difference Comparison of outcomes between a treatment and a comparison group (1st difference), before and after the programme (2nd difference) • Suitability: • the programme is rolled out in phases, leaving a cohort for comparison, even though assignment of treatment is not random • e.g. if Sakshar Bharat were rolled out in only a few districts at a time • Issues: • failure of the “parallel trends” assumption, i.e. the impact of time on the two groups is not similar

  12. 3 – Difference-in-Differences Compare gains in test scores of children who got balsakhi with gains in test scores of children who did not get balsakhi

  13. 3 - Difference-in-differences

  14. What would have happened without balsakhi? Method 3: Difference-in-differences [bar chart: average test scores, 2002 and 2003; gain in balsakhi schools = 26.42 points]

  15. 3 - Difference-in-differences

  16. What would have happened without balsakhi? Method 3: Difference-in-differences [bar chart: average test scores, 2002 and 2003; gain of 26.42 points in balsakhi schools vs. 19.60 points in comparison schools] Impact = 6.82 points?
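The double-difference arithmetic behind the 6.82-point figure can be written out directly; the two gains (26.42 and 19.60) are taken from the chart above, and everything else is commentary.

```python
# Difference-in-differences: subtract the comparison group's change over time
# from the treatment group's change over time.

treatment_gain = 26.42   # 2003 minus 2002 average score, balsakhi schools (from the slide)
comparison_gain = 19.60  # 2003 minus 2002 average score, comparison schools (from the slide)

# The comparison group's gain stands in for the time trend the balsakhi schools
# would have experienced anyway -- the "parallel trends" assumption.
impact_did = treatment_gain - comparison_gain
print(f"Difference-in-differences estimate: {impact_did:.2f} points")  # 6.82
```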

  17. 3 - Difference-in-differences • QUESTION: Under what conditions can 6.82 be interpreted as the impact of the balsakhi program?

  18. 4 - Simple/Multivariate Regression Change in outcome in the treatment group controlling for observable characteristics • requires theorizing about which observable characteristics may affect the outcome of interest besides the programme • Issues: • how many characteristics can be accounted for? (omitted variable bias) • requires a large sample if many factors are to be controlled for

  19. 4 - Regression: controlling for pretest
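A minimal sketch of the regression approach, assuming pandas and statsmodels are available: regress the post-test score on a treatment dummy while controlling for the pre-test score. The data are simulated; none of the coefficients correspond to the balsakhi results in the deck.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a small dataset: a time trend of ~20 points plus a 7-point
# treatment effect (both values are arbitrary, for illustration only).
rng = np.random.default_rng(0)
n = 200
pretest = rng.normal(25, 10, n)
treated = rng.integers(0, 2, n)
posttest = pretest + 20 + 7 * treated + rng.normal(0, 5, n)

df = pd.DataFrame({"posttest": posttest, "pretest": pretest, "treated": treated})

# OLS of the post-test score on the treatment dummy, controlling for the
# pre-test score. The coefficient on `treated` is the estimated programme
# impact holding pre-test performance fixed; omitted confounders still bias it.
model = smf.ols("posttest ~ treated + pretest", data=df).fit()
print(model.params["treated"], model.bse["treated"])
```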

  20. Impact of Balsakhi - Summary * Statistically significant at the 5% level

  21. Impact of Balsakhi - Summary * Statistically significant at the 5% level

  22. Impact of Balsakhi - Summary *Statistically significant at the 5% level Bottom Line: Which method we use matters!

  23. Another Example: Jaunpur Study • 280 villages in Jaunpur, UP • Intervention: • Provided information to communities on the status of education and the responsibilities of VECs (village education committees) • Encouraged communities to test children and create community report cards to assess the status of education • Trained volunteers in villages to conduct after-school reading classes

  24. Jaunpur Study • Different impact estimates on reading level scores

  25. Jaunpur Study • Different impact estimates on reading level scores
