
IDEV 624 – Monitoring and Evaluation





Presentation Transcript


  1. IDEV 624 – Monitoring and Evaluation Evaluating Program Outcomes Elke de Buhr, PhD Payson Center for International Development Tulane University

  2. Process vs. Outcome/Impact Monitoring Outcome Impact Monitoring Evaluation Process Monitoring LFM USAID Results Framework

  3. A Public Health Questions Approach to HIV/AIDS M&E (UNAIDS 2008), read from problem identification up to determining collective effectiveness:
  • Problem Identification: What is the problem? → Situation Analysis & Surveillance
  • What are the contributing factors? → Determinants Research
  • Understanding Potential Responses: What interventions can work (efficacy & effectiveness)? → Efficacy & Effectiveness Studies, Formative & Summative Evaluation, Research Synthesis
  • INPUTS: Are we doing the right things? What interventions and resources are needed? → Needs, Resource, Response Analysis & Input Monitoring
  • ACTIVITIES: What are we doing? Are we doing it right? → Process Monitoring & Evaluation, Quality Assessments
  • OUTPUTS (Monitoring & Evaluating National Programs): Are we implementing the program as planned? → Outputs Monitoring
  • OUTCOMES: Are interventions working/making a difference? → Outcome Evaluation Studies
  • OUTCOMES & IMPACTS (Determining Collective Effectiveness): Are collective efforts being implemented on a large enough scale to impact the epidemic (coverage; impact)? → Surveys & Surveillance

  4. Strategic Planning for M&E: Setting Realistic Expectations. Levels of monitoring and evaluation effort by number of projects:
  • All projects: Input/Output Monitoring
  • Most projects: Process Evaluation
  • Some projects: Outcome Monitoring / Evaluation
  • Few projects*: Impact Monitoring / Evaluation
  *Disease impact monitoring is synonymous with disease surveillance and should be part of all national-level efforts, but cannot be easily linked to specific projects

  5. Monitoring Strategy • Process → Activities • Outcome/Impact → Goals and Objectives

  6. Outcome Evaluation

  7. Program vs. Outcome Monitoring • Program process monitoring: the systematic and continual documentation of key aspects of program performance that assesses whether the program is operating as intended or according to some appropriate standard (a form of process monitoring) • Outcome monitoring: the continual measurement of the intended outcomes of the program, usually of the social conditions it is intended to improve (a form of impact evaluation)

  8. What is an Outcome? • Outcome: The state of the target population or the social conditions that a program is expected to have changed • Outcomes are characteristics of the target population or social condition, and not of the program • Programs expect change but this does not necessarily mean that program targets have changed (Rossi/Lipsey/Freeman 2004)

  9. What is your project’s outcome?

  10. Outcome vs. Impact • Outcome level: Status of an outcome at some point of time • Outcome change: Difference between outcome levels at different points in time • Impact/program effect: Proportion of an outcome change that can be attributed uniquely to a program as opposed to the influence of some other factor (Rossi/Lipsey/Freeman 2004)
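
These three definitions can be illustrated with a small numeric sketch. All figures below are hypothetical, and subtracting a comparison group's change is only the simplest way to approximate a program effect:

```python
# Illustrative only: all figures are hypothetical.
# Outcome level: status of the outcome at one point in time.
# Outcome change: difference between outcome levels over time.
# Impact/program effect: the part of the change attributable to the program,
# approximated here by subtracting the change observed in a comparison group.

program_before, program_after = 40.0, 25.0        # % of participants smoking
comparison_before, comparison_after = 41.0, 35.0  # % smoking, no program

outcome_level = program_after                     # status at follow-up: 25.0
outcome_change = program_after - program_before   # -15.0 percentage points

# Change that would likely have happened anyway (secular trend):
counterfactual_change = comparison_after - comparison_before  # -6.0
program_effect = outcome_change - counterfactual_change       # -9.0

print(outcome_level, outcome_change, program_effect)
```

The program can honestly take credit only for the 9-point drop beyond the comparison-group trend, not for the full 15-point change.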

  11. (Rossi/Lipsey/Freeman 2004)

  12. Outcome vs. Impact (cont.) • Outcome level and change: • Valuable for monitoring program performance • Limited use for determining program effects • Impact/program effect: the value added or net gain that would not have occurred without the program and the only part of the outcome for which the program can honestly take credit • Most demanding evaluation task • Time-consuming and expensive

  13. Outcome Variable • Outcome variable: A measurable characteristic or condition of a program’s target population that could be affected by the actions of the program • Examples: amount of smoking, body weight, school readiness (Rossi/Lipsey/Freeman 2004)

  14. What are your project’s outcome variables?

  15. Program Impact Theory • Useful for identifying and organizing program outcomes • Expresses the outcomes of social programs as part of a logic model that links the program to proximal (immediate) outcomes, which are expected to lead to distal (long-term) outcomes

  16. Program Impact Theory - Examples (Rossi, Peter H et al., p. 143)

  17. Logic Model • Visual representation of the expected sequence of steps going from program service to client outcome

  18. Logic Model - Example (for a teen mother parenting program) (Rossi, P. H. et al., p. 95)

  19. Proximal vs. Distal Outcomes • Proximal (immediate) outcomes: • Usually the ones that the program has the greatest capability to affect • Often the easiest to measure and to attribute to the program • Distal (longer-term) outcomes: • Frequently the ones of the greatest political and practical importance • Often difficult to measure and to attribute to the program • Usually influenced by many factors outside of the program's control

  20. What are your project’s proximal/distal outcomes?

  21. Measuring Program Outcomes • Select the most important outcomes • Take into account feasibility (e.g., distal outcomes may be too difficult or expensive to measure) • However, both proximal and distal outcomes can be the subject of an outcome evaluation • Multidimensional outcomes often require multiple measurements (→ composite measures)
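
One common way to build a composite measure is sketched below, under the assumption that the separate indicators are combined by z-standardizing each one and averaging. The indicator names and data are invented:

```python
# Sketch: combining several outcome measurements into one composite score.
# Assumption (not from the slides): indicators are z-standardized, then
# averaged. All indicator names and values are hypothetical.
from statistics import mean, stdev

def zscores(values):
    """Standardize a list of values to mean 0, standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Three hypothetical indicators of "school readiness" for five children
literacy = [12, 15, 9, 20, 14]
numeracy = [8, 11, 7, 15, 9]
social   = [3, 4, 2, 5, 4]

# Composite score per child: average of the three standardized indicators
composite = [mean(triple) for triple in zip(zscores(literacy),
                                            zscores(numeracy),
                                            zscores(social))]
print([round(c, 2) for c in composite])
```

Standardizing first keeps an indicator with a large raw scale (e.g., a 0-100 test) from dominating one with a small scale (e.g., a 1-5 rating).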

  22. Monitoring Program Outcomes • Outcome monitoring: • Simplest approach to measuring program outcomes • Similar to process monitoring with the difference that the regularly collected information relates to program outcomes rather than process and performance • Requires indicators that are practical to collect routinely and that are informative with regard to program effectiveness

  23. Monitoring Strategies, in standard design notation over time (O = observation of the outcome, X = intervention):
  • Before-after design: O1 X O2
  • Interrupted time series: O1 O2 O3 X O4 O5 O6
  • Repeated measurement around repeated interventions: O1 X O2 X O3 X O4
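
A minimal sketch of how a design in this notation can be analyzed: split the observation series at the intervention point X and compare pre- and post-intervention means. The design string and indicator values below are hypothetical:

```python
# Hypothetical sketch: analyzing an interrupted time-series design
# (O1 O2 O3 X O4 O5 O6) by splitting the observation series at the
# intervention X and comparing pre- and post-intervention means.

def pre_post_means(design, observations):
    """Split observations at the intervention 'X' and return both means."""
    split = design.split().index("X")       # position of the intervention
    pre, post = observations[:split], observations[split:]
    return sum(pre) / len(pre), sum(post) / len(post)

design = "O1 O2 O3 X O4 O5 O6"
obs = [42, 41, 43, 35, 33, 31]              # invented indicator values
pre_mean, post_mean = pre_post_means(design, obs)
print(pre_mean, post_mean)                  # 42.0 33.0
```

Having several observations on each side of X is what lets this design distinguish a real shift from the normal period-to-period fluctuation that a single before-after pair (O1 X O2) cannot rule out.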

  24. Selecting Outcome Indicators • Need to be as responsive as possible to program effects: • Include only members of the target population receiving services • Do not include data on beneficiaries who dropped out of the program (→ service utilization issue) • The best outcome indicators, short of an impact evaluation, are: • Variables that only the program can affect • Variables that are central to the program's mission
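
The service-utilization point can be made concrete: compute the indicator only over beneficiaries who actually received the full service. The record structure and values below are hypothetical:

```python
# Sketch: computing an outcome indicator only over target-population
# members who completed the program, excluding dropouts
# (the service-utilization issue). All records are hypothetical.

beneficiaries = [
    {"id": 1, "completed": True,  "smoker_at_followup": False},
    {"id": 2, "completed": True,  "smoker_at_followup": True},
    {"id": 3, "completed": False, "smoker_at_followup": True},  # dropout
    {"id": 4, "completed": True,  "smoker_at_followup": False},
]

# Restrict the denominator to those who actually received services
served = [b for b in beneficiaries if b["completed"]]
quit_rate = sum(not b["smoker_at_followup"] for b in served) / len(served)
print(round(quit_rate, 2))
```

Including dropouts in the denominator would mix service-delivery failures into an indicator meant to measure the outcome among those the program actually reached.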

  25. Selecting Outcome Indicators (cont.) • Concerns with selecting outcome indicators: • "Teaching to the test": program staff may focus on critical outcome indicators to improve program performance on these measures, which may distort program activities • "Corruptibility of indicators": monitoring data should be collected by an outside evaluator, or with careful processes in place that prevent distortion (role of participation?)

  26. Advantage of Outcome Monitoring • Provides useful and relatively inexpensive information about program effects, usually in a reasonable timeframe (compared to impact evaluation) → Mainly a technique for improving program administration, not for assessing the program's impact on the social conditions it intends to benefit (Rossi/Lipsey/Freeman 2004)

  27. Limitation of Outcome Monitoring • Requires indicators that identify change and link that change to the program • However, there are often many outside influences on a social condition (confounding factors) → Isolating program effects may require the special techniques of impact evaluation

  28. Project Monitoring Plan

  29. Logframe TAPGR: Development Project Planning

  30. Discussion Questions

  31. Discussion Questions • How could the outcome of your program be monitored? • What are the critical outcome variables? • What outcome monitoring strategy is feasible taking into account the local implementation environment? • What are the strengths of this methodology? • What are its weaknesses? • How would you judge the quality of the collected data?
