On the Bias in Estimating Program Effects Using Clinic Based Data
Ashu Handa, University of North Carolina at Chapel Hill
Mari-Carmen Huerta, London School of Economics
Objective • Can clinic-based data give good estimates of program impact? • Most large-scale nutrition interventions do not have an accompanying social experiment • Nature of the program; costs; political constraints • Clinic-based data on nutritional status are available virtually everywhere • Useful to know whether these data can be used for program evaluation
Approach • Compare non-experimental and experimental estimates of program impact • Intervention: Progresa • Experimental estimates of impact on child height already exist • Derive non-experimental estimate using data on beneficiaries from health clinics • Compare the two estimates to assess bias
Why Experiments? • Randomly selected comparison group allows for • Control of observed characteristics that might affect outcome • Control of unobserved characteristics that might affect outcome (selection bias) • Less of a problem in a mandatory program such as Progresa • Bias caused by using non-experimental comparison group may be negative or positive
Summary of Experimental Results • Gertler et al. and Behrman & Hoddinott (BH) • Use the same basic data set; two measurements taken 12-16 months apart • Sample is kids aged 12-36 months at baseline • Gertler: estimates growth in height in cm • BH: estimate growth in height measured in z-scores (specification sketched below) • Both include child, household, and community level control variables • Both report positive and significant estimates in the range of 15% of mean growth (about 1 cm per year) • Gertler: only impact is on kids 12-17 months of age
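To make the experimental specification concrete, here is a minimal sketch of a growth regression in the spirit of Gertler and BH: growth in height regressed on a treatment indicator plus controls. All data and variable names below are synthetic and illustrative, not from the Progresa evaluation files.

```python
# Minimal sketch of a growth regression in the spirit of the experimental
# studies: growth in height regressed on a treatment indicator plus
# child/household controls. All data and variable names are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
treated = rng.integers(0, 2, n)           # Progresa treatment indicator
age_months = rng.uniform(12, 36, n)       # child age at baseline
hh_size = rng.integers(2, 10, n)          # household-level control
# Simulated growth: treated kids grow ~1 cm/year faster, as in the papers
growth_cm = 6.5 + 1.0 * treated - 0.05 * age_months + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([treated, age_months, hh_size]))
fit = sm.OLS(growth_cm, X).fit()
print(fit.params[1])                      # estimated treatment effect (cm)
```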
Clinic Based Sample • Individual data from all Progresa clinics between end-1997 and end-1999 • Different dates of incorporation • Used to identify program impact • Select kids with two measures of height taken 6-18 months apart (median = 13 months) • Measure 1 in early 1998; measure 2 in early 1999 • Estimate growth in height measured by z-score, as in BH (sketched below) • Kids 0-48 months of age; no control variables
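For readers unfamiliar with the outcome measure, the toy computation below shows how a height-for-age z-score works and how growth is measured as the change in z-scores between two clinic visits. The reference medians and SDs are invented; the actual studies use standard international growth reference tables.

```python
# Toy computation of a height-for-age z-score (HAZ) and of growth measured
# as the change in HAZ between two clinic visits. Reference medians/SDs
# here are invented; real studies use standard growth reference tables.
def haz(height_cm: float, ref_median: float, ref_sd: float) -> float:
    """Height-for-age z-score: distance from the reference median in SDs."""
    return (height_cm - ref_median) / ref_sd

z1 = haz(78.0, ref_median=80.0, ref_sd=3.0)   # visit 1, early 1998
z2 = haz(87.5, ref_median=88.5, ref_sd=3.2)   # visit 2, ~13 months later
print(z2 - z1)   # growth in z-score, the outcome in the clinic-based study
```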
Identification Strategy • Use child’s incorporation date to define length of exposure to the program (nine groups; sketched below) • Selection problem: the ‘control’ group is initially healthier • Growth specification will eliminate some of this bias
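A minimal sketch of the identification idea: exposure length is derived from each child's incorporation date, binned into nine groups, and growth in the z-score is compared across groups. The data, exposure range, and effect sizes below are simulated for illustration only.

```python
# Sketch of the identification strategy: exposure to the program, defined
# from the child's incorporation date, is binned into 9 groups, and growth
# in the z-score is compared across them. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000
exposure_months = rng.integers(0, 25, n)          # months in the program
group = pd.cut(exposure_months, bins=9, labels=False)
# Simulated growth: longer exposure -> slightly faster z-score growth
growth_z = 0.02 * exposure_months + rng.normal(0, 0.5, n)

df = pd.DataFrame({"growth_z": growth_z, "group": group})
fit = smf.ols("growth_z ~ C(group)", data=df).fit()
print(fit.params)   # growth relative to the shortest-exposure group
```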
Non-Experimental Estimates Relative to Benchmark • 20% for the 12-36 month age group • 25% for the 12-48 month age group • 40% for the 12-36 month age group using listed treatment
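As a worked example of what "relative to benchmark" means here (with hypothetical magnitudes, since the slide reports only ratios): if the experimental benchmark were 0.20 z-scores and the clinic-based estimate 0.04, the clinic data would recover 20% of the benchmark impact.

```python
# Hypothetical magnitudes illustrating the "relative to benchmark" ratios;
# the actual point estimates are in the paper, not on this slide.
benchmark_impact = 0.20       # experimental estimate (z-score units)
clinic_impact = 0.04          # non-experimental, clinic-based estimate
print(f"{clinic_impact / benchmark_impact:.0%}")   # -> 20%
```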
Discussion • Clinic-based estimates significantly lower: 20% to 40% of the benchmark, depending on specification • Downward bias due to measurement error • Listed treatment not equal to actual treatment • Listed treatment measures the average impact of the total program (a different concept) • Omitted variable bias reduces the estimate • Omitted control variables (not used in the clinic-based study) are positively related to participation but negatively related to growth • Leads to downward bias in the non-experimental impact (simulated below)
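A small simulation of the omitted-variable story under the slide's assumptions: a characteristic that is positively related to participation but negatively related to growth is left out of the clinic-based specification, which biases the estimated impact downward. All parameters are invented.

```python
# Simulating omitted-variable bias: w raises participation but lowers
# growth, so leaving w out of the regression pushes the estimated program
# effect below its true value. All parameters are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5_000
w = rng.normal(size=n)                                    # omitted trait
participate = (w + rng.normal(size=n) > 0).astype(float)  # corr(w, D) > 0
growth_z = 0.15 * participate - 0.10 * w + rng.normal(scale=0.5, size=n)

naive = sm.OLS(growth_z, sm.add_constant(participate)).fit()
full = sm.OLS(growth_z,
              sm.add_constant(np.column_stack([participate, w]))).fit()
print(naive.params[1])   # biased well below the true effect of 0.15
print(full.params[1])    # close to 0.15 once w is controlled for
```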
So how reliable is the non-experimental estimate? • The glass is half empty • Estimates are positive, but significantly lower than the benchmark • Leads to the conclusion that the program is less effective than it actually is • The glass is half full • Gives positive and significant estimates • The cost of measuring impact is virtually zero • Understanding program operation allows assessment of the nature of the bias • How close do we really need to be for policy?