
Workshop on Using Contribution Analysis to Address Cause-Effect Questions






Presentation Transcript


    1. Workshop on Using Contribution Analysis to Address Cause-Effect Questions
       Danish Evaluation Society Conference, Kolding, September 2008
       John Mayne, Advisor on Public Sector Performance
       john.mayne@rogers.com

    2. Workshop Objectives
       - Understand the need to address attribution
       - Understand how contribution analysis can help
       - Have enough information to undertake a contribution analysis on your own

    3. Outline
       - Dealing with attribution
       - Contribution analysis
       - Working a case
       - Levels of contribution analysis
       - Conclusions

    4. The challenge
       - Attribution for outcomes is always a challenge
       - Strong evaluation designs (such as RCTs) are not always available or possible
       - A credible performance story needs to address attribution
       - Sensible accountability needs to address attribution
       - Complexity significantly complicates the issue
       - What can be done?
       Speaker note: A recent example of a non-experimental situation mentioned on the internet was the evaluations of Tsunami aid being solicited, for example, by UNICEF.

    5. The idea
       Based on the theory of change of the program, buttressed by evidence validating that theory, and reinforced by examination of other influencing factors, contribution analysis builds a reasonably credible case about the difference the program is making.

    6. The typical context
       - A program has been funded to achieve intended results
       - The results have occurred, perhaps more or less
       - It is recognized that several factors likely 'caused' the results
       - We need to know what the program's role was in this

    7. Two measurement problems
       - Measuring outcomes
       - Linking outcomes to actions (activities and outputs), i.e. attribution: are we making a difference with our actions?

    8. Attribution
       - Outcomes are not controlled; there are always other factors at play
       - Conclusive causal links don't exist
       - We are trying to better understand the influence we are having on intended outcomes
       - We need to understand the theory of the program, to establish plausible association
       - Something like contribution analysis can help
       Speaker note: This is what practical attribution is all about.

    9. The need to say something
       - Many evaluations and most public reporting are silent on attribution
       - Credibility is greatly weakened as a result
       - In evaluations, in performance reporting and in accountability, something must be said about attribution

    10. Proving Causality
       - The gold standard debate (RCTs et al.)
       - Intense debate underway, especially in development impact evaluation
       - Some challenge RCTs (e.g. Scriven)
       - It does appear that RCTs have limited applicability
       - Then what do we do?
       Speaker note: We need to know better when we can use RCTs: in what circumstances, in what settings?

    11. Proving Causality
       - AEA and EES: many methods are capable of demonstrating scientific rigour
       - Methodological appropriateness for given evaluation questions
       - Causal analysis is practised by auto mechanics, air crash investigators, forensic scientists and doctors (Scriven's Modus Operandi approach)
       Speaker note: There are always other factors at play, and conclusive causal links don't exist. We are trying to better understand the influence we are having on intended outcomes: plausible association.

    12. Theory-based evaluation
       - Reconstructing the theory of the program
       - Assessing/testing the credibility of the micro-steps in the theory (links in the results chain)
       - Developing & confirming the results achieved by the program
       Speaker note: I am building here on the increasingly used theory-based approaches to evaluation.

    13. Contribution analysis: the theory
       - There is a postulated theory of change
       - The activities of the program were implemented
       - The theory of change is supported by evidence
       - Other influencing factors have been assessed & accounted for
       - Therefore the program very likely made a contribution
       Speaker note:
       1. There is a reasoned, postulated theory of change for the program. It makes sense, it is plausible, and it is agreed by at least some of the key players. If there is no ToC, experiment!
       2. The activities of the program were implemented.
       3. The theory of change (or key elements thereof) is supported and confirmed by evidence, both of experts and of facts: the chain of expected results occurred.
       4. Other influencing factors have been assessed and either shown not to have made a significant contribution, or their relative role in contributing to the desired result has been recognized.
       We can then say with some confidence that the program has indeed contributed to the observed desired results. We are seeking plausible association, trying to reduce uncertainty about the effects of the program.
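The inferential structure of this slide (four premises, one hedged conclusion) can be sketched in code. This is purely an illustrative sketch and not part of Mayne's method; the names `ContributionStory` and `assess` are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ContributionStory:
    """Illustrative record of the four premises on this slide (names invented)."""
    toc_is_plausible: bool             # 1. a reasoned, postulated theory of change
    activities_implemented: bool       # 2. the program's activities were carried out
    toc_supported_by_evidence: bool    # 3. the chain of expected results occurred
    other_factors_accounted_for: bool  # 4. other influences assessed or recognized

def assess(story: ContributionStory) -> str:
    """If all four premises hold, conclude the program very likely contributed."""
    premises = [
        story.toc_is_plausible,
        story.activities_implemented,
        story.toc_supported_by_evidence,
        story.other_factors_accounted_for,
    ]
    if all(premises):
        return "contribution very likely (plausible association)"
    missing = sum(1 for p in premises if not p)
    return f"contribution uncertain: {missing} premise(s) lack support"

print(assess(ContributionStory(True, True, True, True)))
# -> contribution very likely (plausible association)
```

Note that even when all premises hold, the conclusion is "very likely", not certain: the point of the slide is plausible association under uncertainty, not proof.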

    14. Steps in Contribution Analysis
       1. Set out the attribution problem to be addressed
       2. Develop the postulated theory of change
       3. Gather the existing evidence on the ToC
       4. Assemble & assess the contribution story
       5. Seek out additional evidence
       6. Revise & strengthen the contribution story
       7. Develop the complex contribution story
       Speaker note: I set out here the basic steps in a CA. The process is iterative and best developed over time.

    15. 1. Set out the attribution problem
       - Acknowledge the need to address attribution
       - Scope the attribution problem: what is really being asked? What level of confidence is needed?
       - Explore the contribution expected: what are the other influencing factors? How plausible is a contribution?

    16. Cause-Effect Questions
       Traditional attribution questions:
       - Has the program caused the outcome?
       - How much of the outcome is caused by the program?
       Contribution questions:
       - Has the program made a difference?
       - How much of a difference?

    17. Cause-Effect Questions
       Management questions:
       - Is it reasonable to conclude that the program made a difference?
       - What conditions are needed to make this type of program succeed?
       - Why has the program failed?

    18. Building an evaluation office contribution story
       - The evaluation aim is to 'make a difference' (an outcome), e.g. improvements in management and reporting, a more cost-effective public service, enhanced accountability, etc.
       - Evaluation products (outputs): evaluations and evaluation reports; advice and assistance
