The Intersection of Performance Measurement and Program Evaluation: Searching for the Counterfactual
Moscow 2011
Douglas J. Besharov
School of Public Policy, University of Maryland
Performance Management
• Efficiency studies (“outputs”)
  • How much does the program cost?
  • Monetary, nonmonetary, and opportunity costs
  • Could it be delivered more efficiently?
• Effectiveness studies (“outcomes” and “impacts”)
  • Does the program achieve its goals?
  • Could it be more effective?
• Both require a comparison, or a “counterfactual”
Douglas J. Besharov, June 2011
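The efficiency question above ("how much does the program cost, and for what?") reduces to simple arithmetic once costs and outcomes are measured. A minimal sketch, with invented program names and figures (none of these numbers come from the slides):

```python
# Hypothetical cost-effectiveness comparison. All figures are invented
# for illustration, not taken from the programs discussed in this deck.

def cost_per_outcome(total_cost, outcome_units):
    """Dollars spent per unit of measured outcome (e.g., per job placement)."""
    return total_cost / outcome_units

# Program A: $500,000 spent, 250 job placements
# Program B: $300,000 spent, 100 job placements
print(f"Program A: ${cost_per_outcome(500_000, 250):,.0f} per placement")  # $2,000
print(f"Program B: ${cost_per_outcome(300_000, 100):,.0f} per placement")  # $3,000
```

Note that even this simple ratio depends on a counterfactual: the outcome units should be net of what would have happened without the program, which is the point the rest of the deck develops.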
It Matters How Children Are Raised
Ineffective Early Childhood Education Programs

IHDP (1985-1988)
• Low-birth-weight, pre-term infants and their parents
• Home visits, parenting education, and early childhood education
• $20,400 per child per year
• No significant impacts; initial IQ gains fade

CCDP (1990-1995)
• Poor children under age 1 and their parents
• Case management, parenting education, early childhood education, and referrals to community-based services
• $19,000 per family per year ($60 million annually)
• No significant impacts

Early Head Start (1996-2008)
• Poor children ages 0-2 and their parents
• Child development, parenting education, child care, and family support services
• $18,500 per child per year ($700 million annually)
• No significant impacts
Program Improvement, not Program Dismantling
“The closest thing to immortality on this Earth is a federal government program.” – Ronald Reagan
Douglas J. Besharov (October 21, 2008)
Performance Management
• Leadership,
• Management, and
• Measurement
Point #1
Counterfactuals are needed for accurate performance measurement.
Impact Evaluations Take Too Long to Manage Performance
• Head Start Impact Study (2010): 10 years and running
• Moving to Opportunity Study (1994): 17 years and running
• Employment Retention and Advancement evaluation (1998): 13 years and running
• Building Strong Families Project (2002): 9 years and running
• National Job Corps Study (1993): 15 years to complete
Logic Model for Job Training Programs

Problem: Some unemployed people do not have the skills necessary to obtain and keep well-paying jobs, leading to lower income, greater use of government benefits, and a weaker economy.

Theory: If government provides job training to the unemployed, then the unemployed will gain the job skills necessary for good jobs, increased earnings, and a stronger economy.

Design: (1) Job search/job readiness training, (2) skills training, (3) in a classroom.

• Inputs: Training facilities; staff; funding; client characteristics
• Activities: Job search/job readiness training; classroom instruction; job skills training
• Outputs: Hours of training instruction; hours of practice; staff administration; skill certificates
• Outcomes: Job search skills; technical job skills; interpersonal skills
• Proximal impacts: Earnings; employment; UI/welfare receipt; crime
• Distal impacts: Higher lifetime earnings/employment; lower poverty; stronger economy

External: Community and societal context
Point #2
Carefully applied, a measured outcome coupled with a logic model’s theory of change, often buttressed by other evidence, can serve as a more timely and more useful performance measure than a formal evaluation of long-term impacts.
When “Outputs” Imply “Outcomes”
• There is no output, so no positive outcome can reasonably be predicted.
• The output itself is sufficiently suggestive of a likely outcome.
• The output is produced at such a high cost that, regardless of its likely outcome, it does not meet cost-effectiveness or cost-benefit tests.
Feasible “Outcome” Evaluations
• Evaluations of ongoing programs
  • Rolling randomized experiments
  • Pre-post studies (with embedded counterfactual)
  • Regression-discontinuity designs
• Evaluations of specific program “improvements”
  • Randomized experiments
  • Pipeline studies (or rolling implementation)
  • Interrupted time series studies
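For the simplest design on the list above, a randomized experiment, the impact estimate is just the difference in mean outcomes between the treatment and control groups, with a standard error from the two sample variances. A minimal sketch on simulated data (the earnings figures and the $1,000 true effect are invented for illustration):

```python
import random
import statistics

random.seed(0)

# Simulated earnings for randomly assigned control and treatment groups.
# The true program effect is built in as a $1,000 shift in the mean.
control   = [random.gauss(20_000, 4_000) for _ in range(500)]
treatment = [random.gauss(21_000, 4_000) for _ in range(500)]

# Impact estimate: difference in group means
impact = statistics.mean(treatment) - statistics.mean(control)

# Standard error of the difference in means (independent samples)
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"Estimated impact: {impact:,.0f} +/- {1.96 * se:,.0f} (95% CI)")
```

Random assignment is what makes the control group a valid counterfactual here; the same subtraction applied to a non-randomized comparison group would confound program impact with selection.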
A Clear Interrupted Time Series (chart)
Circling the Wagons
Accountability Systems
• Top-down administrative and funding incentives, together with
• Bottom-up voucher-like programs
When “Outcomes” Imply “Impacts”
• When the desired impact is reasonably predicted to follow from the measured outcome
Ineffective Job Training Programs

Job Corps
• Low-income youth

JTPA (1987-1994)
• Low-income adults, dislocated workers, and out-of-school youth
• Classroom training, on-the-job training, job search assistance, adult basic education, and other services
• $2,400 per participant for 3-4 months ($60 million annually)
• Women: Small initial gains in earnings, employment, and GED receipt fade by 5 years
• Men: Small initial gains in earnings fade by 5 years; no other impacts
• Youth: No significant impacts

WIA, dislocated workers (2003-2005)