Learning through Impact Evaluation Seminar on M&E – June 6-7, 2005
Motivation
‘Since it is difficult to distinguish the good from the bad prophet, we must be suspicious of all prophets: it is better to avoid revealed truths, even if we feel exalted by their simplicity and splendor, even if we find them comfortable because they come at no cost. It is better to be content with more modest and less inspiring truths that are laboriously conquered, step by step, with no shortcuts, by studying, discussion and reasoning, and that can be verified and demonstrated.’ – Primo Levi
Argument
• Impact evaluations (IE) – comparisons of outcomes between beneficiaries of a program and a counterfactual group – are the primary way of verifying whether an approach works, and under what conditions
• A series of IE can provide performance benchmarks and cost-effectiveness assessments
• With that information, outcomes can be monitored with an acceptable level of comfort
• IE are the basis for what is known as ‘Evidence-Based Policy Design’ (EBPD)
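The counterfactual comparison at the heart of an IE can be illustrated numerically. The sketch below is not from the presentation: it uses hypothetical simulated data for a randomized design, where the program's impact is estimated as the difference in mean outcomes between beneficiaries and the comparison (counterfactual) group.

```python
import random
import statistics

random.seed(0)

# Hypothetical simulated program: beneficiaries' outcomes are shifted
# upward by a true effect of 2.0 relative to the counterfactual group.
TRUE_EFFECT = 2.0
control = [random.gauss(10.0, 3.0) for _ in range(5000)]
treated = [random.gauss(10.0 + TRUE_EFFECT, 3.0) for _ in range(5000)]

# Impact estimate: difference in mean outcomes vs. the counterfactual group.
impact = statistics.mean(treated) - statistics.mean(control)
print(f"estimated impact: {impact:.2f}")
```

With randomized assignment the comparison group is a valid counterfactual, so the difference in means recovers the program's average effect; with non-random assignment, the methods surveyed by Ravallion (matching, difference-in-differences, instrumental variables) are needed to construct the counterfactual.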
Objective of the presentation
• Give a bird’s-eye view of the status of IE work in the development world (developing countries and development agencies), and
• Explore the potential for a more systematic approach to development impact evaluation
There is more IE than many think:
• Latin America is ahead of other regions (a Progresa effect?)
• IE within international programs: some World Bank data
• HD LAC portfolio: 55% of projects, 80% of funds evaluated
• Database of IE (63 evaluations, 42 in LAC)
• Independent evaluation: still heavily northern, but with an increasing southern following
• Diversity of methods (see Ravallion, forthcoming)
• Broader range of programs
But still far from full potential:
• Shanghai as a case study: less than 20% of programs evaluated; twice as many could have been easily evaluated
• We know a lot about a few things (e.g. CCTs) and little about a lot of things (e.g. effective delivery of social and infrastructure services)
• We seldom use results from IE as benchmarks for performance in other programs
• We don’t use results from IE for cost-effectiveness assessments
• Few countries are using EBPD
Learning from experience at a global scale: what would it take?
• Alternative approaches to addressing specific challenges (e.g. contracting teachers and decentralized school management → more kids in school, learning more?) experimented with and evaluated across countries
• Meta-analyses of the cost-effectiveness of such approaches, preferably with sufficient country-context variation
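Pooling impact estimates across countries can be sketched as a fixed-effect, inverse-variance meta-analysis. The numbers below are hypothetical placeholders, not results cited in the presentation; they stand in for country-level effect estimates and their standard errors.

```python
import math

# Hypothetical (effect, standard error) pairs from three country studies
# of the same intervention, e.g. a teacher-contracting scheme.
studies = [(0.12, 0.05), (0.20, 0.08), (0.15, 0.04)]

# Fixed-effect meta-analysis: weight each estimate by its inverse variance,
# so more precise studies contribute more to the pooled effect.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```

A fixed-effect pooling assumes the approach has the same underlying effect everywhere; with the "sufficient country-context variation" the slide calls for, a random-effects model that lets the effect vary by country would be the more natural choice.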
A big challenge…
• Evaluations are an international public good:
• Individual evaluations are non-rival in consumption, and increasingly non-excludable
• Meta-analyses are cross-country by definition
• Stakeholder analysis: even well-intentioned individual policy-makers in developing countries face a tough cost-benefit equation
• And there are significant technical and implementation difficulties…
…requiring effective coordination to:
• Subsidize the cost of individual evaluations (data collection?)
• Build statistical and evaluation capacity
• Define priority themes, aligning donor and developing-country preferences
• Improve incentives (for donors and countries) for experimentation and learning
An important step from our side: Development IMpact Evaluation (DIME)
• The World Bank has initiated an effort to:
• Support a larger number of evaluations of the programs we finance
• Focus on areas/challenges for which we observe high demand from countries and a large presence in our work program. First batch:
  • Conditional cash transfers
  • School-based management
  • Information generation and dissemination to improve education outcomes
  • Alternative teacher-contracting schemes
  • Slum-upgrading programs
• Set the basis for eventual meta-evaluations that will serve the whole development community
Towards a global learning partnership
• Other development agencies are, or will gradually be, joining the effort, but…
• How do we ensure the voice of the south is heard?
• Who will define the evaluation agenda? (What experimentation and evaluation is worth doing, and under what conditions?)
• Who will conduct the evaluations and assess the results?
• A lot of unanswered questions…
• In answering them, let’s not forget Primo Levi:
‘It is better to be content with more modest and less inspiring truths that are laboriously conquered, step by step, with no shortcuts, by studying, discussion and reasoning, and that can be verified and demonstrated.’