Quasi-Experimental Methods I
Mattea Stein
What we know so far
Aim: We want to isolate the causal effect of our interventions on our outcomes of interest
• Use rigorous evaluation methods to answer our operational questions
• Randomizing the assignment to treatment is the “gold standard” methodology (simple, precise, cheap)
• What if we really, really (really??) cannot use it?!
>> Where it makes sense, resort to non-experimental methods
Non-experimental methods
• Can we find a plausible counterfactual? A natural experiment?
• Every non-experimental method is associated with a set of assumptions
• The stronger the assumptions, the more doubtful our measure of the causal effect
• Question our assumptions: reality check, resort to common sense!
Example: Matching Grants Program
• Principal objective: increase firm productivity and sales
• Intervention: matching grants distribution, non-random assignment
• Target group: SMEs with 1-10 employees
• Main result indicator: sales
Illustration: Matching Grants - Randomization
[Figure: sales over time for treatment and comparison groups, separating (+) the impact of the program from (+) the impact of external factors]
Illustration: Matching Grants – Difference-in-Differences
• “Before” difference between participants and non-participants
• “After” difference between participants and non-participants
>> What’s the impact of our intervention?
Difference-in-Differences Identification Strategy (1)
Counterfactual: two formulations that say the same thing
1. Non-participants’ sales after the intervention, accounting for the “before” difference between participants and non-participants (the initial gap between the groups)
2. Participants’ sales before the intervention, accounting for the “before/after” difference for non-participants (the influence of external factors)
>> 1 and 2 are equivalent
[Figure: participants’ (P) and non-participants’ (NP) sales in 2007 and 2008. “Before” difference: P07 - NP07 = 1.0; “after” difference: P08 - NP08 = 1.4; impact = 0.4]
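Written out, the difference-in-differences estimate behind this chart is simply the “after” gap minus the “before” gap:

```latex
\widehat{\mathrm{DiD}}
  = \underbrace{(P_{08} - NP_{08})}_{\text{``after'' difference}}
  - \underbrace{(P_{07} - NP_{07})}_{\text{``before'' difference}}
  = 1.4 - 1.0 = 0.4
```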
Difference-in-Differences Identification Strategy (2)
Underlying assumption: without the intervention, sales for participants and non-participants would have followed the same trend
>> Graphic intuition coming…
[Figure: the same chart, now showing the same-trend assumption graphically; “before” difference = 1.0, “after” difference = 1.4, estimated impact = 0.4]
[Figure: when the same-trend assumption fails, the true impact is -0.3 while difference-in-differences still estimates 0.4]
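The gap between the two numbers is exactly the violation of the same-trend assumption: if participants’ sales would have grown by 0.7 more than non-participants’ even without the program (a figure implied by the two impacts above, not stated on the slide), difference-in-differences attributes that extra growth to the intervention:

```latex
\underbrace{0.4}_{\text{estimated impact}}
  = \underbrace{-0.3}_{\text{true impact}}
  + \underbrace{0.7}_{\text{difference in counterfactual trends}}
```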
Summary
• The assumption of same trend is very strong
• The 2 groups were, in 2007, producing at very different levels
>> Question the underlying assumption of same trend!
>> When possible, test the assumption of same trend with data from previous years
Questioning the assumption of same trend: use pre-program data
>> Reject the counterfactual assumption of same trends!
Questioning the assumption of same trend: use pre-program data
>> Seems reasonable to accept the counterfactual assumption of same trend?!
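A minimal sketch of how such a pre-trend check might look with firm-level panel data (file and column names are hypothetical): on pre-program years only, regress sales on participant status interacted with year; large, significant interactions signal diverging pre-program trends.

```python
# Minimal sketch of a pre-trend check (file and column names are hypothetical).
# Idea: using pre-program years only, regress sales on participant status,
# year dummies, and their interaction; under the same-trend assumption the
# interaction terms should be close to zero.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_panel.csv")      # columns: firm_id, year, sales, participant (0/1)
pre = df[df["year"] <= 2007]            # keep only pre-program years

model = smf.ols("sales ~ participant * C(year)", data=pre).fit(
    cov_type="cluster", cov_kwds={"groups": pre["firm_id"]}
)
print(model.summary())
# Large, significant participant:year interactions suggest diverging
# pre-program trends, i.e. the same-trend assumption is doubtful.
```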
Caveats (1)
• Assuming the same trend is often problematic
• Often there are no data to test the assumption
• Even if trends are similar the previous year…
• Were they always similar (or did we just get lucky)?
• More importantly, will they remain similar?
• Example: another project intervenes in our non-participant firms…
Caveats (2)
What to do? >> Be descriptive!
• Check similarity in observable characteristics
• If the groups are not similar along observables, chances are their trends will differ in unpredictable ways
>> Still, we cannot check what we cannot see… and unobservable characteristics (ability, motivation, patience, etc.) might matter more than observable ones
Matching Method + Difference-in-Differences (1)
Match participants with non-participants on the basis of observable characteristics
Counterfactual: matched comparison group
• Each program participant is paired with one or more similar non-participant(s) based on observable characteristics
>> On average, matched participants and non-participants share the same observable characteristics (by construction)
• Estimate the effect of our intervention by using difference-in-differences
Matching Method (2)
Underlying counterfactual assumptions:
• After matching, there are no differences between participants and non-participants in terms of unobservable characteristics
AND/OR
• Unobservable characteristics do not affect assignment to the treatment or the outcomes of interest
How do we do it?
• Design a control group by establishing close matches in terms of observable characteristics
• Carefully select the variables along which to match participants to their control group
• So that we only retain:
• Treatment group: participants that could find a match
• Comparison group: non-participants similar enough to the participants
>> We trim out a portion of our treatment group!
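A minimal sketch of one common way to build such a matched comparison group and then apply difference-in-differences, assuming a firm-level dataset with pre/post sales (file name, column names, and the 0.05 caliper are all hypothetical choices, not the program’s actual procedure): estimate a propensity score on observables, pair each participant with its nearest non-participant, drop participants without a close match, and compare the before/after change across the matched groups.

```python
# Minimal sketch: propensity-score nearest-neighbour matching, then diff-in-diff.
# All file and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("firms.csv")  # participant (0/1), employees, firm_age, sector, sales_2007, sales_2008

# 1. Propensity score: probability of participating given observables.
X = pd.get_dummies(df[["employees", "firm_age", "sector"]], drop_first=True)
df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df["participant"]).predict_proba(X)[:, 1]

treated = df[df["participant"] == 1].reset_index(drop=True)
control = df[df["participant"] == 0].reset_index(drop=True)

# 2. Pair each participant with the nearest non-participant on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
dist, idx = nn.kneighbors(treated[["pscore"]])

# 3. Trim: drop participants whose best match is too far away (caliper is arbitrary here).
keep = dist[:, 0] <= 0.05
matched_treated = treated[keep]
matched_control = control.iloc[idx[keep, 0]]

# 4. Difference-in-differences on the matched sample.
did = ((matched_treated["sales_2008"].mean() - matched_control["sales_2008"].mean())
       - (matched_treated["sales_2007"].mean() - matched_control["sales_2007"].mean()))
print(f"Matched diff-in-diff estimate: {did:.2f}")
```

The caliper step is what “trims out a portion of our treatment group”: participants with no sufficiently close non-participant are simply left out of the estimate.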
Implications
• In most cases, we cannot match everyone
• Need to understand who is left out
[Figure: participants and non-participants plotted by score and wealth; matched individuals overlap, while a portion of the treatment group is trimmed out]
Conclusion (1) • Advantage of the matching method • Does not require randomization
Conclusion (2)
Disadvantages:
• The underlying counterfactual assumption is not plausible in all contexts and is hard to test
• Use common sense, be descriptive
• Requires very high-quality data: we need to control for all factors that influence program placement and the outcome of interest
• Requires a sufficiently large sample size to generate a comparison group
• Cannot always match everyone…
Summary
• Randomized controlled trials require minimal assumptions and produce intuitive estimates (sample means!)
• Non-experimental methods require assumptions that must be carefully tested
• More data-intensive
• Not always testable
• Get creative:
• Mix and match types of methods!
• Address relevant questions with relevant techniques
Thank you
Financial support from the Bank Netherlands Partnership Program (BNPP), Bovespa, CVM, the Gender Action Plan (GAP), the Belgium & Luxemburg Poverty Reduction Partnerships (BPRP/LPRP), the Knowledge for Change Program (KCP), the Russia Financial Literacy and Education Trust Fund (RTF), and the Trust Fund for Environmentally & Socially Sustainable Development (TFESSD) is gratefully acknowledged.