Methodological issues in assessing the impact of collaborative agricultural R&D
Patricia Rogers, Royal Melbourne Institute of Technology
Jamie Watts, Institutional Learning and Change Initiative
International Workshop on Methodological Innovations in Impact Assessment of Agricultural Research
Brasilia, Brazil, November 12, 2008
Overview
• Rationale for collaborative agricultural research and development
• Different aspects of interventions – simple, complicated, complex
• Tasks in impact evaluation – deciding impacts, measuring/describing impacts, causal inference, using results
• Particular issues and options for impact evaluation of complicated and complex interventions
1. Rationale for collaborative agricultural research
Nature, June 2008: special issue on translational research
Nature, June 2008, special issue on translational research:
[Nobel laureate Sydney] Brenner is one of many scientists challenging the idea that translational research is just about carrying results from bench to bedside, arguing that the importance of reversing that polarity has been overlooked. “I’m advocating it go the other way,” Brenner said.
Visualising the connection between laboratory research and practice research
Tabak, 2005, National Institute of Dental and Craniofacial Research, National Institutes of Health
Collaborative agricultural research
Some reasons to engage intended end-users in agricultural research and development:
• Increase researchers’ understanding of local issues
• Improve the relevance of research to local conditions
• Incorporate local knowledge into research
• More effectively reach women and the poor
• Increase uptake and appropriate adaptation
2. Different aspects of interventions, which may need different impact evaluation methods
• Simple aspects that can be tightly specified and standardized, and that work the same in all places
• Complicated aspects that are part of a larger, multi-component impact pathway
• Complex aspects that are highly adaptive, responsive and emergent
Simple – Following a Recipe:
• The recipe is essential
• Recipes are tested to assure replicability of later efforts
• No particular expertise required; knowing how to cook increases success
• Recipes produce standard products
• Certainty of same results every time

Complicated – A Rocket to the Moon:
• Formulae are critical and necessary
• Sending one rocket increases assurance that the next will be OK
• High level of expertise in many specialized fields + coordination
• Rockets similar in critical ways
• High degree of certainty of outcome

Complex – Raising a Child:
• Formulae have only a limited application
• Raising one child gives no assurance of success with the next
• Expertise can help but is not sufficient; relationships are key
• Every child is unique
• Uncertainty of outcome remains

(Diagram from Zimmerman 2003)
3. Tasks in impact assessment
A) DECIDE impacts to be included in assessment
B) MEASURE or describe impacts
C) ANALYSE causal contribution of intervention and other factors
D) SUPPORT USE
Each of these tasks requires appropriate methods and involves values and evidence.
Examples of increasing attention to impact assessment/evaluation in international development
• Center for Global Development – producers of the ‘When Will We Ever Learn?’ report (WWWEL), which argued for more use of RCTs (randomised controlled trials)
• NONIE – Network of Networks on Impact Evaluation: all UN agencies, all multilateral development banks and all international aid agencies of OECD countries, supporting better-quality impact evaluation, including sharing information and producing Guidelines for Impact Evaluation
• 3ie – the International Initiative for Impact Evaluation: new organisation funding and promoting rigorous impact evaluation
• Poverty Action Lab – stated purpose is to advocate for the wider use of RCTs
• European Evaluation Society – formal statement cautioning against inappropriate use of RCTs
What is impact?
…the positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended. These effects can be economic, socio-cultural, institutional, environmental, technological or of other types. (DAC definition)
A. Decide impacts to include
Need to:
• Include different dimensions – eg not just income but livelihoods
• Include the sustainability of these impacts, including environmental sustainability
• Not only focus on stated objectives – also unintended outcomes (positive and negative)
• Recognise the values of different stakeholders in terms of:
 – Desirable and undesirable impacts
 – Desirable and undesirable processes to achieve these impacts
 – Desirable and undesirable distribution of benefits
• Identify the ways in which these impacts are understood to occur and what else needs to be included in the analysis
Deciding on impacts to include in impact assessment of collaborative R&D
• Collaborative R&D will likely create expectations of collaborative evaluation approaches (including deciding on impacts)
• Power or capacity imbalances among collaborators should be leveled out to encourage active participation
Decide impacts to include – some approaches:
• Program theory (impact pathway) – possibly developing multiple models of the program, eg Soft Systems; negotiating boundaries (eg Critical Systems Heuristics)
• Participatory approaches to values clarification – eg Most Significant Change
B. Gather evidence of impacts
Need to:
• Balance credibility (especially comprehensiveness) and feasibility (especially timeliness and cost)
• Prioritise which impacts (and other variables) will be studied empirically, and to what extent
• Deal with time lags before impacts are evident
• Avoid accidental or systematic distortion of the level of impacts
Gather evidence of impacts – some approaches:
• Program theory (impact pathway) – identify short-term results that can indicate longer-term impacts
• Participatory approaches – engaging the community in evidence gathering to increase reach and engagement
• Real world evaluation – mixed methods, triangulation, making maximum use of existing data, strategic sampling, rapid data collection methods
C. Analyse causal contribution or attribution
Need to:
• Avoid false negatives (erroneously concluding it doesn’t work) and false positives (erroneously concluding it does work)
• Systematically search for disconfirming evidence and analyse exceptions
• Distinguish between theory failure and implementation failure
• Understand the contribution of context: implementation environment, participant characteristics and other interventions
Intervention is both necessary and sufficient to produce the impact – ‘silver bullet’ simple impacts
[Diagram: intervention → impact; no intervention → no impact]
Non-linear effects
An intervention might (as in the toy sketch below):
• Have positive impacts at some levels and negative impacts at others (more is not always better)
• Have effects only at certain thresholds
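To make these two patterns concrete, here is a toy numerical sketch. The functions and numbers are invented purely for illustration and are not drawn from any study:

```python
# Toy illustration of two kinds of non-linear intervention effect
# (all numbers invented for illustration).

def inverted_u_effect(dose):
    """More input helps up to a point, then harms: peaks at dose = 2,
    turns negative past dose = 4."""
    return 4 * dose - dose ** 2

def threshold_effect(dose, threshold=3):
    """No effect at all until a minimum intensity is reached."""
    return 10 if dose >= threshold else 0

for dose in [0, 1, 2, 3, 4, 5]:
    print(dose, inverted_u_effect(dose), threshold_effect(dose))
```

In either pattern, an evaluation that samples only one dose level can badly mischaracterise the intervention.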
Causal packages
An intervention might be:
• Not necessary – other pathways might lead to the same outcome
• Not sufficient – other factors might need to be in place (including a favourable context – implementation environment or participant characteristics)
Therefore differential impacts must be examined not as an optional extra but as an integral part of the analysis.
Causal packages – ‘jigsaw’ complicated impacts
[Diagram: intervention + favourable context → impacts]
Intervention is necessary but not sufficient to produce the impact – ‘jigsaw’ complicated impacts
[Diagram: intervention + favourable context → impact; intervention + unfavourable context → no impact]
Example of a causal package
FINDING: If two potted plants are randomly assigned either to a treatment group that receives daily water or to a control that receives none, and both are placed in a dark cupboard, the treatment group does not have better outcomes than the control.
CONCLUSION: Watering plants is ineffective in making them grow.
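A minimal simulation of this ‘jigsaw’ logic, with invented effect sizes: growth here requires both water and light, so a randomised comparison run entirely in the dark estimates a zero watering effect, even though watering genuinely matters in a favourable context.

```python
import random
from statistics import mean

def growth(watered, in_light):
    # Growth requires BOTH parts of the causal package.
    return 1.0 if (watered and in_light) else 0.0

random.seed(1)
watered = [random.random() < 0.5 for _ in range(1000)]  # random assignment

def rct_effect(in_light):
    """Treatment-control difference from an RCT run in a fixed context."""
    treat = [growth(w, in_light) for w in watered if w]
    ctrl = [growth(w, in_light) for w in watered if not w]
    return mean(treat) - mean(ctrl)

print("Effect estimated in the dark :", rct_effect(in_light=False))  # 0.0
print("Effect estimated in the light:", rct_effect(in_light=True))   # 1.0
```

The design is internally valid in both contexts; what differs is whether the other piece of the jigsaw is present.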
Limitations of RCTs for “jigsaws”
When schools in Kenya were randomly assigned either to a treatment group that received flip-chart teaching aids or to a control that received none, the treatment group did not have better outcomes than the control.
CONCLUSION: Flip charts are ineffective.
Glewwe et al. (2004), ‘Retrospective vs. prospective analyses of school inputs: the case of flip charts in Kenya’, Journal of Development Economics, vol. 74, issue 1, pp. 251–268.
Better ways of building evidence about “jigsaws”
[Chart: British road fatality rate, corrected for miles driven and with seasonal variations removed. Source: Ross, Campbell & Glass, 1970, in Glass, 1997]
Change in rate of road fatalities on Friday and Saturday nights
[Chart: data for Friday night/Saturday morning and Saturday night/Sunday morning. Source: Ross, Campbell & Glass, 1970, in Glass, 1997]
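The breathalyser study behind these charts is a classic interrupted time series. As a sketch of the underlying segmented-regression idea (the data below are simulated, since the original series is not reproduced here), one regresses the outcome on a constant, a time trend and a post-intervention step:

```python
# Sketch of an interrupted time-series (segmented regression) analysis,
# the design behind the British breathalyser study. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(48)
intervention = (months >= 24).astype(float)  # law takes effect at month 24

# Simulated fatality series: stable level, then a drop when the law starts.
fatalities = 100 - 15 * intervention + rng.normal(0, 3, size=48)

# OLS on [constant, time trend, post-intervention step].
X = np.column_stack([np.ones(48), months, intervention])
coef, *_ = np.linalg.lstsq(X, fatalities, rcond=None)
print(f"Estimated step change at the intervention: {coef[2]:.1f}")  # about -15
```

Unlike a one-shot before/after comparison, the long pre-intervention series lets the analysis separate the step change from trend and seasonal noise.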
Intervention is sufficient but not necessary to produce the impact – ‘parallel’ complicated impacts
[Diagram: intervention → impact; no intervention + alternative activity → impact]
Limitations of RCTs when multiple paths exist
A US program to assist poor families through social service visits found that families receiving the program experienced improvements in their well-being – but so did the families that were randomly assigned to a control group that did not receive the visits (St. Pierre and Layzer 1999). “[As this case shows], a good study helps avoid spending funds on ineffective programs and redirects attention to improving designs or to more promising alternatives.” (Center for Global Development, When Will We Ever Learn?)
BUT IS THIS A VALID CONCLUSION FROM THE STUDY?
Limitations of RCTs when multiple paths exist
• Many control-group families were able to obtain services on their own (‘contamination’) – information contained in the 1999 report of the evaluation but not in the WWWEL report
• Therefore the lack of difference in outcomes between the treatment and control groups does not mean the program was ineffective, but that there was an alternative path to the outcome (see the simulation sketch below)
• A good evaluation would compare these alternatives
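A small simulation of this ‘parallel path’ problem, with all parameters invented: when most control families obtain equivalent services on their own, the treatment-control contrast is near zero even though the services themselves work.

```python
import random
from statistics import mean

random.seed(7)

def wellbeing(receives_services):
    # Services genuinely improve well-being (invented effect size of 10).
    base = random.gauss(50, 5)
    return base + (10 if receives_services else 0)

treatment = [wellbeing(True) for _ in range(500)]
# 'Parallel path': 90% of control families find equivalent services anyway.
control = [wellbeing(random.random() < 0.9) for _ in range(500)]

print(f"Treatment mean: {mean(treatment):.1f}")
print(f"Control mean:   {mean(control):.1f}")
# The near-zero difference reflects the alternative path,
# not ineffectiveness of the services.
```

The RCT contrast estimates the value of the program relative to whatever the control group does instead, which is not the same as the value of the services relative to nothing.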
Analyse causal contribution or attribution – some approaches:
• Addressing through design – eg experimental designs (random assignment) and quasi-experimental designs (construction of a comparison group, eg propensity scores)
• Addressing through data collection – eg participatory Beneficiary Assessment, expert judgement
• Addressing through iterative analysis and collection – eg Contribution Analysis, Multiple Lines and Levels of Evidence (MLLE), List of Possible Causes (LOPC) and General Elimination Methodology (GEM), systematic qualitative data analysis, realist analysis of testable hypotheses
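As an illustration of one quasi-experimental option from this list, here is a minimal propensity-score matching sketch on simulated data. The covariate, treatment rule and true effect of 3 are all invented; the point is only to show the matched estimate correcting a confounded naive comparison.

```python
# Minimal propensity-score matching sketch (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
farm_size = rng.normal(5, 2, n)  # confounder: drives both uptake and outcome
uptake_prob = 1 / (1 + np.exp(-(farm_size - 5)))
treated = (rng.random(n) < uptake_prob).astype(int)
outcome = 2 * farm_size + 3 * treated + rng.normal(0, 1, n)  # true effect = 3

# 1. Model the propensity to be treated from observed covariates.
ps_model = LogisticRegression().fit(farm_size.reshape(-1, 1), treated)
scores = ps_model.predict_proba(farm_size.reshape(-1, 1))[:, 1]

# 2. Match each treated unit to the control unit with the nearest propensity.
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
nearest = np.abs(scores[c_idx][None, :] - scores[t_idx][:, None]).argmin(axis=1)
matches = c_idx[nearest]

# 3. The mean outcome difference across matched pairs estimates the effect.
print(f"Naive difference : {outcome[t_idx].mean() - outcome[c_idx].mean():.2f}")
print(f"Matched estimate : {(outcome[t_idx] - outcome[matches]).mean():.2f}")  # ~3
```

The naive comparison is inflated because larger farms both adopt more and yield more; matching on the estimated propensity balances that observed confounder, though (unlike random assignment) it cannot balance unobserved ones.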
D. Report synthesis and support use
Need to:
• Provide useful information to intended users
• Provide a synthesis that summarises evidence and values
• Balance overall pattern and detail
• Assist uptake/translation of evidence
Developmental Evaluation: an emerging approach for complexity (M.Q. Patton, May 2008)
• Evaluation processes support program, product, staff and/or organizational development
• The evaluator is part of a team whose members collaborate to conceptualize, design and test new approaches in a long-term, ongoing process of continuous improvement, adaptation and intentional change
• The evaluator’s primary function in the team is to elucidate team discussions with evaluative questions, data and logic, and to facilitate data-based decision-making in the developmental process
• Capacity to learn might be more relevant than specific results
Report synthesis and support use – some approaches:
• Use focus – utilization-focused evaluation: identification and involvement of intended users from the start
• Synthesis – Qualitative Weight and Sum and other techniques to determine overall worth
• Reporting – layered reports (1 page, 5 pages, 25 pages); scenarios showing different outcomes in different contexts; workshopping the report to support knowledge translation
• Developmental Evaluation
• Translational Research – an emerging approach