
Quasi-Experimental Methods

Explore various quasi-experimental methods to evaluate the impact of an HIV/AIDS program on teenage pregnancy rates, analyzing assumptions and counterfactuals for robust causal inference.


Presentation Transcript


  1. Quasi-Experimental Methods Jean-Louis Arcand The Graduate Institute | Geneva jean-louis.arcand@graduateinstitute.ch

  2. Objective: Reality check • Find a plausible counterfactual • Every method is associated with an assumption • The stronger the assumption, the more we need to worry about the causal effect • Question your assumptions

  3. Program to evaluate • Hopetown HIV/AIDS Program (2008-2012) • Objectives • Reduce HIV transmission • Intervention: Peer education • Target group: Youth 15-24 • Indicator: Pregnancy rate (proxy for unprotected sex)

  4. I. Before-after identification strategy (aka reflexive comparison) • Counterfactual: rate of pregnancy observed before the program started • EFFECT = After minus Before

  5. Counterfactual assumption: no change over time • Effect = +3.47 • Question: what else might have happened in 2008-2012 to affect teen pregnancy?
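The before-after estimator is a single subtraction; a minimal Python sketch, using the participant rates that appear on the difference-in-difference slides (62.90 before, 66.37 after):

```python
# Before-after (reflexive comparison): the counterfactual is the
# pre-program rate, so the estimated effect is simply After - Before.
before = 62.90  # participant pregnancy rate before the program
after = 66.37   # participant pregnancy rate after the program

effect = after - before
print(round(effect, 2))  # 3.47
```

The simplicity is the point: everything that changed between 2008 and 2012 other than the program is silently attributed to the program.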

  6. Examine assumption with prior data • Assumption of no change over time looks a bit shaky

  7. II. Non-participant identification strategy Counterfactual: Rate of pregnancy among non-participants

  8. Counterfactual assumption: without intervention, participants have the same pregnancy rate as non-participants • Effect = +8.87 • Question: how might participants differ from non-participants?

  9. Test assumption with pre-program data • REJECT the counterfactual hypothesis of same pregnancy rates

  10. III. Difference-in-Difference identification strategy • Counterfactual: • Nonparticipant rate of pregnancy, purging pre-program differences in participants/nonparticipants • “Before” rate of pregnancy, purging before-after change for nonparticipants • 1 and 2 are equivalent

  11. Effect = 3.47 – 11.13 = -7.66 • Participants: 66.37 – 62.90 = 3.47 • Non-participants: 57.50 – 46.37 = 11.13

  12. Effect = 8.87 – 16.53 = -7.66 • After: 66.37 – 57.50 = 8.87 • Before: 62.90 – 46.37 = 16.53
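The two routes to the double difference can be checked directly with the four rates from the slides above; a minimal sketch:

```python
# Difference-in-differences: both orderings of the double difference
# give the same estimate. Rates are the four figures from the slides.
p_before, p_after = 62.90, 66.37  # participants
n_before, n_after = 46.37, 57.50  # non-participants

# Route 1: difference of before-after changes
did_changes = (p_after - p_before) - (n_after - n_before)  # 3.47 - 11.13

# Route 2: difference of participant/non-participant gaps
did_gaps = (p_after - n_after) - (p_before - n_before)     # 8.87 - 16.53

print(round(did_changes, 2), round(did_gaps, 2))  # -7.66 -7.66
```

The algebra is identical either way, which is why the two slides land on the same -7.66.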

  13. Counterfactual assumption: without intervention, participants’ and nonparticipants’ pregnancy rates follow the same trend

  14. [Chart: under the same-trend assumption, the participant counterfactual is 74.0; gap to non-participants of 16.5]

  15. [Chart: observed 66.37 vs counterfactual 74.0 gives an effect of -7.6]

  16. Questioning the assumption • Why might participants’ trends differ from those of nonparticipants?

  17. Examine assumption with pre-program data • The counterfactual hypothesis of same trends doesn’t look so believable

  18. IV. Matching with difference-in-difference identification strategy • Counterfactual: a comparison group is constructed by pairing each program participant with a “similar” non-participant from a larger dataset, creating a control group that resembles participants in observable ways

  19. Counterfactual assumption: unobserved characteristics do not affect the outcomes of interest • Unobserved = things we cannot measure (e.g. ability) or things left out of the dataset • Question: how might participants differ from matched nonparticipants?

  20. Matched non-participant: 73.36 • Participant: 66.37 • Effect = -7.01
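A minimal sketch of the nearest-neighbour pairing step. The covariates (age, years of schooling), sample sizes, and outcome levels here are invented for illustration; they are not the Hopetown data:

```python
# Nearest-neighbour matching: pair each participant with the
# non-participant closest on observable covariates, then average
# the outcome gaps across pairs.
import numpy as np

rng = np.random.default_rng(0)

part_X = rng.normal([19.0, 8.0], 1.0, size=(50, 2))   # participants' covariates
nonp_X = rng.normal([20.0, 9.0], 1.0, size=(200, 2))  # non-participants' covariates
part_y = rng.normal(66.0, 5.0, size=50)               # participant outcomes
nonp_y = rng.normal(73.0, 5.0, size=200)              # non-participant outcomes

# Euclidean distance from every participant to every non-participant
dists = np.linalg.norm(part_X[:, None, :] - nonp_X[None, :, :], axis=2)
match_idx = dists.argmin(axis=1)  # closest non-participant for each

# Matching estimate: mean gap between participants and their matches
effect = (part_y - nonp_y[match_idx]).mean()
```

Matching only balances what we can see: nothing in this construction makes the matched non-participants resemble participants on unobservables, which is exactly the worry on the next slide.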

  21. Can only test the assumption with experimental data • Studies that compare both methods (because they have experimental data) find that: • unobservables often matter! • direction of bias is unpredictable! • Apply with care – think very hard about unobservables

  22. V. Regression discontinuity identification strategy Applicability: When strict quantitative criteria determine eligibility Counterfactual: Nonparticipants just below the eligibility cutoff are the comparison for participants just above the eligibility cutoff

  23. Counterfactual assumption: nonparticipants just below the eligibility cutoff are the same (in observable and unobservable ways) as participants just above it • Question: is the distribution around the cutoff smooth? Then the assumption might be reasonable • Question: are unobservables likely to be important (e.g. correlated with the cutoff criteria)? Then the assumption might not be reasonable • However, can only estimate impact around the cutoff, not for the whole program

  24. Example: Effect of school inputs on test scores • Target transfer to poorest schools • Construct poverty index from 1 to 100 • Schools with a score <= 50 are in • Schools with a score > 50 are out • Inputs transferred to poor schools • Measure outcomes (e.g. test scores) before and after the transfer
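A sketch of the discontinuity estimate for this setup. The cutoff of 50 comes from the slide; the sample size, the linear trend, the noise level, and the true jump of 5 test-score points are all invented assumptions:

```python
# Regression discontinuity: fit a separate linear trend on each side
# of the eligibility cutoff and read off the gap between the two
# predictions at the cutoff itself.
import numpy as np

rng = np.random.default_rng(1)

score = rng.uniform(1, 100, 2000)  # poverty index, 1-100
treated = score <= 50              # schools at or below 50 receive inputs
test_score = 40 + 0.3 * score + 5.0 * treated + rng.normal(0, 2, 2000)

cutoff = 50
below = score <= cutoff
fit_below = np.polyfit(score[below], test_score[below], 1)
fit_above = np.polyfit(score[~below], test_score[~below], 1)

# The discontinuity estimate is the jump at the cutoff
rd_effect = np.polyval(fit_below, cutoff) - np.polyval(fit_above, cutoff)
```

On this simulated data the estimate recovers a jump close to the assumed 5 points, but only for schools near the cutoff; the caveat from the previous slide about whole-program impact applies.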

  25. [Chart: poverty index distribution around the cutoff, non-poor vs poor]

  26. [Chart: discontinuity at the cutoff identifies the treatment effect]

  27. Applying RDD in practice: Lessons from an HIV-nutrition program • Lesson 1: criteria not applied well • Multiple criteria: household size, income level, months on ART • The nutritionist helps her friends fill out the form with the “right” answers • Now unobservables separate treatment from control… • Lesson 2: watch out for criteria that can be altered (e.g. landholding size)

  28. Summary • Gold standard is randomization – minimal assumptions needed, intuitive estimates • Nonexperimental methods require assumptions – can you defend them?

  29. Different assumptions will give you different results • The program: ART treatment for adult patients • Impact of interest: effect of ART on children of patients (are there spillover & intergenerational effects of treatment?) • Child education (attendance) • Child nutrition • Data: 250 patient HHs, 500 random-sample HHs • Before & after treatment • Can’t randomize ART – so what is the counterfactual?

  30. Possible counterfactual candidates • Random sample difference in difference • Are they on the same trajectory? • Orphans (parents died – what would have happened in absence of treatment) • But when did they die, which orphans do you observe, which do you not observe? • Parents self report moderate to high risk of HIV • Self report! • Propensity score matching • Unobservables (so why do people get HIV?)

  31. Estimates of treatment effects using alternative comparison groups • Compare to around 6.4 if we use the simple difference in difference using the random sample Standard errors clustered at the household level in each round. Includes child fixed effects, round 2 indicator and month-of-interview indicators.

  32. Estimating ATT using propensity score matching • Allows us to define comparison group using more than one characteristic of children and their households • Propensity scores defined at household level, with most significant variables being single-headed household and HIV risk
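A sketch of the ATT-by-matching step described on this slide. The household covariates, their coefficients, and the assumed true ATT of -3 are all invented; a logit fitted by gradient ascent stands in for the probit reported in the deck:

```python
# Propensity-score matching: estimate each household's probability of
# treatment from covariates, then match treated households to the
# control household with the nearest score.
import numpy as np

rng = np.random.default_rng(2)
n = 1000

single_head = rng.binomial(1, 0.3, n).astype(float)  # single-headed hh
hiv_risk = rng.uniform(0.0, 1.0, n)                  # self-reported risk
X = np.column_stack([np.ones(n), single_head, hiv_risk])

# Treatment (household has an adult ARV recipient) depends on covariates
p_true = 1.0 / (1.0 + np.exp(-(-1.5 + 1.0 * single_head + 2.0 * hiv_risk)))
treat = rng.random(n) < p_true

# Outcome (e.g. attendance) with an assumed true ATT of -3
y = 50 - 5 * single_head - 4 * hiv_risk - 3 * treat + rng.normal(0, 2, n)

# Fit a logit by gradient ascent to obtain propensity scores
beta = np.zeros(3)
for _ in range(2000):
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (treat - p_hat) / n
pscore = 1.0 / (1.0 + np.exp(-X @ beta))

# ATT: mean gap between treated households and their nearest-score controls
gap = np.abs(pscore[treat][:, None] - pscore[~treat][None, :])
att = (y[treat] - y[~treat][gap.argmin(axis=1)]).mean()
```

Collapsing several covariates into one score is what makes matching on "more than one characteristic" tractable, but the estimate is only as good as the score model, and unobservables (why do people get HIV?) remain unaddressed.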

  33. Probit regression results • Dependent variable: household has adult ARV recipient

  34. ATT using propensity score matching

  35. Nutritional impacts of ARV treatment Includes child fixed effects, age controls, round 2 indicator, interviewer fixed effects, and month-of-interview indicators.

  36. Nutrition with alternative comparison groups Includes child fixed effects, age controls, round 2 indicator, interviewer fixed effects, and month-of-interview indicators.

  37. Summary: choosing among non-experimental methods • At the end of the day, they can give us quite different estimates (or not, in some rare cases) • Which assumption can we live with?

  38. Thank You
