Module 20: Monitoring & Evaluation in NIE
Learning objectives • Be familiar with the basic concepts and main characteristics of monitoring and evaluation • Understand the differences between various kinds of evaluations • Explain the different kinds of indicators • Describe the very basics of a ‘log frame’ • Optional: Be familiar with the monitoring and evaluation of CMAM interventions
The project cycle: Disaster → Assessment → Programme design → Implementation → Evaluation, with monitoring running throughout.
Monitoring & Evaluation: What is M&E?
M&E: a wasp's nest of terms? Performance, efficiency, effectiveness, appropriateness, inputs, outputs, outcomes, impact, quantitative indicators, qualitative indicators, targets, logframes, impact assessment, do no harm, coverage, connectedness, accountability, timeliness…
Definition: Monitoring. ‘The systematic and continuous assessment of the progress of a piece of work over time.’ ‘To continuously measure progress against programme objectives and check on the relevance of the programme.’ It involves collecting and analysing data/information. It is NOT only about PROCESS.
Purpose of monitoring • To document the progress and results of a project • To provide the necessary information to management for timely decision-making and corrective action (if necessary) • To promote accountability to all stakeholders of a project (beneficiaries, donors, etc.)
Information collected for monitoring must be: • Useful and relevant • Accurate • Regular • Acted upon • Shared • Timely
Monitoring is an implicit part of an evaluation, yet it is often done badly: • Routine data collection not done routinely! • Data collection done poorly • Information not processed/used in a timely manner • Focus only on process indicators, neglecting (lack of) preliminary impact
Can you give examples of monitoring in your current work? For example: • From a CMAM programme? • From a Micronutrient programme? • From a General Food Distribution? • From a Health programme? • From a Livelihoods programme?
Monitoring: Monitoring compares intentions with results (a minimal sketch of this comparison follows below). It guides project revisions, verifies targeting criteria and whether assistance is reaching the people intended. It checks the relevance of the project to the needs. It integrates and responds to community feedback. It enhances transparency and accountability.
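To make "comparing intentions with results" concrete, here is a minimal Python sketch that flags indicators falling short of their planned targets so management can take corrective action. All indicator names, targets and figures are hypothetical, chosen only for illustration.

```python
# A minimal sketch of "comparing intentions with results" in monitoring.
# All indicator names, targets and figures are hypothetical.

planned_targets = {
    "children_screened": 1200,      # planned for the month
    "sam_cases_admitted": 150,
    "rations_distributed": 5000,
}

actuals_this_month = {
    "children_screened": 950,
    "sam_cases_admitted": 160,
    "rations_distributed": 3800,
}

def monitoring_report(planned, actual, alert_threshold=0.8):
    """Flag indicators where achievement falls below the alert threshold."""
    for indicator, target in planned.items():
        achieved = actual.get(indicator, 0)
        achievement = achieved / target
        status = "ACTION NEEDED" if achievement < alert_threshold else "on track"
        print(f"{indicator}: {achieved}/{target} ({achievement:.0%}) - {status}")

monitoring_report(planned_targets, actuals_this_month)
```

The point of the sketch is simply that monitoring data only become useful when routinely compared against the plan and acted upon.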
Difference between monitoring of: • Process/activities • Impact/results
The project cycle (revisited): Disaster → Assessment → Programme design → Implementation → Evaluation, with monitoring running throughout.
Definition: Evaluation. The aim is to determine the relevance and fulfilment of objectives, as well as the efficiency, effectiveness, impact and sustainability of a project. It involves the objective assessment of an ongoing or completed project/programme, its design, implementation and results.
There has been an increased focus on evaluation of humanitarian action as part of efforts to improve quality and standards.
Evaluation aims to: • Improve policy and practice • Enhance accountability
Evaluations are done when / because: • Monitoring highlights unexpected results • More information is needed for decision-making • Implementation problems or unmet needs are identified • Issues of sustainability, cost-effectiveness or relevance arise • Recommendations for actions to improve performance are needed • Lessons learned are necessary for future activities
Evaluations • Evaluation involves the same skills as assessment and analysis • Evaluation should be done impartially and ideally by external staff • Evaluation can also occur during (e.g. mid-term) and after implementation of the project. Why? Because one of the most important sources of information for evaluations is the data collected for monitoring.
The OECD-DAC criteria (Organisation for Economic Co-operation and Development, Development Assistance Committee) • The DAC evaluation criteria are currently at the heart of the evaluation of humanitarian action. • They are designed to improve the evaluation of humanitarian action.
Evaluation looks at: • Relevance/Appropriateness: Doing the right thing in the right way at the right time. • Connectedness (and coordination): Was there any replication, or were gaps left in programming due to a lack of coordination? • Coherence: Did the intervention make sense in the context of the emergency and the mandate of the implementing agency? Are there detrimental effects of the intervention in the long run? • Coverage: Who has been reached by the intervention, and where (linked to effectiveness)? • Efficiency: Were the results delivered in the least costly manner possible? • Effectiveness: To what extent has the intervention achieved its objectives? • Impact: Doing the right thing, changing the situation more profoundly and in the longer term.
Example: General Food Distribution • Relevance/Appropriateness: Doing the right thing in the right way at the right time. Was food aid the right thing to do, rather than cash? • Connectedness: Are there detrimental effects of the intervention in the long run? Did food aid lower food prices? Did local farmers suffer as a result?
• Coverage: Who has been reached by the intervention, and where (linked to effectiveness)? Were those who needed food aid indeed reached? • Efficiency: Were the results delivered in the least costly manner possible? Was it right to import the food, or should it have been purchased locally? Could the results have been achieved with fewer (financial) resources? Food aid was provided; would cash have been more cost-effective?
• Effectiveness: To what extent has the intervention achieved its objectives? Did food aid prevent undernutrition (assuming that was an objective)? • Impact: Doing the right thing, changing the situation more profoundly and in the longer term. Did the food aid prevent people from becoming displaced? Did people become dependent on food aid?
Impact: • Very much related to the general goal of the project • Measures both positive and negative long-term effects, as well as intended and unintended effects. GFD: did it lower general food prices, with long-term economic consequences for certain groups? Were people who received food aid attacked because of the ration (and therefore more deaths)? • Baseline information is needed to measure results against (a minimal sketch of measuring change against a baseline follows below).
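As a minimal illustration of why baseline information matters, the Python sketch below expresses impact as change measured against the baseline rather than as a stand-alone endline figure. Both prevalence figures are hypothetical.

```python
# A minimal sketch of measuring impact against a baseline.
# Both prevalence figures are hypothetical.

baseline_gam = 0.18   # global acute malnutrition prevalence before the intervention
endline_gam = 0.11    # prevalence measured at evaluation

absolute_change = baseline_gam - endline_gam
relative_change = absolute_change / baseline_gam

# Without the baseline figure, the endline rate alone cannot show whether
# the situation improved, worsened or stayed the same.
print(f"GAM fell from {baseline_gam:.0%} to {endline_gam:.0%}: "
      f"a {relative_change:.0%} relative reduction against the baseline.")
```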
To evaluate projects well is a real skill! And you often need a team…
M&E in emergencies? YES. Any project without monitoring and/or evaluation is a BAD project.
Help!
Evaluations in a humanitarian context • Single-agency evaluation (during/after a project) • There is an increasing move towards: • Inter-agency evaluations: the objective is to evaluate responses as a whole and the links between interventions • Real-time evaluations: carried out 8 to 12 weeks after the onset of an emergency, with findings processed within one month of data collection
Real-time evaluations (1) • WHY? They arose from concern that evaluations came too late to affect the operations they were assessing • Various groups of organizations aim to undertake real-time evaluations • Same purpose as any other evaluation • Common characteristics: • Takes place during the course of implementation • In a short time frame
Real-time evaluations (2) • It is an improvement-oriented review; it can be regarded more as an internal function than an external process. • It helps to bring about changes in the programme, rather than just reflecting on its quality after the event. • A real-time “evaluator” is a “facilitator”, working with staff to find creative solutions to any difficulties they encounter. • It helps to get closer to the people affected by crisis, and this helps improve accountability to ‘beneficiaries’.
Monitoring & Evaluation systems • Main components of M&E systems: • M&E work plan for data collection and analysis, covering baseline and ongoing M&E • Logical framework, including indicators and means/sources of verification • Reporting flows and formats • Feedback and review plan • Capacity-building design • Implementation schedule • Human resources and budget
Indicators • An indicator is a measure that is used to show change in a situation, or the progress in/results of an activity, project, or programme. • Indicators: • enable us to be “watchdogs”; • are essential instruments for monitoring and evaluation; • are objectively verifiable measurements.
What are the qualities of a good indicator? • Specific • Measurable • Achievable • Relevant • Time-bound The Sphere Project provides the most widely accepted indicators for nutrition and food security interventions in emergencies: see Module 21. There is also the SMART initiative (Standardised Monitoring and Assessment in Relief and Transition), an interagency initiative to improve the M&E of humanitarian assistance.
Types of indicators. Indicators exist in many different forms. Examples? • Direct vs indirect/proxy: direct indicators correspond precisely to results at any performance level; indirect or "proxy" indicators demonstrate the change or results when direct measures are not feasible. • Quantitative vs qualitative: indicators are usually quantitative measures, expressed as a percentage or share, a rate, etc.; they may also be qualitative observations. • Global/standardised vs locally developed: standardised global indicators are comparable in all settings; other indicators tend to be context-specific and must be developed locally.
The results chain, with an example from nutrition education (a sketch of computing the outcome indicator follows below): • Input (related to activities/resources): nutritional education to mothers on complementary food • Output (related to outputs): X number of mothers know about good complementary food and how to prepare it • Outcome (related to objectives or purposes): % of young children getting appropriate complementary food • Impact (related to goal): malnutrition rates amongst young children reduced
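As a minimal illustration, the Python sketch below computes the outcome indicator from the example above, "% of young children getting appropriate complementary food", from a handful of survey records and compares it with a programme target. The records and the 70% target are hypothetical.

```python
# A minimal sketch of computing the outcome indicator
# "% of young children receiving appropriate complementary food".
# The survey records and the 70% target are hypothetical.

survey_records = [
    {"child_id": 1, "appropriate_complementary_food": True},
    {"child_id": 2, "appropriate_complementary_food": False},
    {"child_id": 3, "appropriate_complementary_food": True},
    {"child_id": 4, "appropriate_complementary_food": True},
]

target_pct = 70.0  # hypothetical programme target

covered = sum(r["appropriate_complementary_food"] for r in survey_records)
indicator_pct = 100 * covered / len(survey_records)

print(f"Outcome indicator: {indicator_pct:.0f}% of surveyed children "
      f"receive appropriate complementary food (target: {target_pct:.0f}%)")
```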
What is a log frame? The logical framework, or logframe, is an analytical tool used to plan, monitor, and evaluate projects. Victim of a log frame?
Log frames [diagram]: a hierarchy in which inputs feed into outputs, outputs lead to outcomes, and outcomes contribute to impact.
Other terms that can be found in a logframe: • The means of verification of progress towards achieving the indicators highlight the sources from which data are collected. Identifying the means of verification at this stage is useful, as discussions on where to find information or how to collect it often lead to reformulation of the indicator. • Assumptions are external factors or conditions that have the potential to influence the success of a programme. They may be factors outside the control of the programme. The achievement of a programme’s aims depends on whether or not assumptions hold true or anticipated risks do not materialise.
The logical framework for M&E (a sketch of a logframe held as a simple data structure follows below): • If adequate RESOURCES/INPUTS are provided, then ACTIVITIES can be conducted • If adequate ACTIVITIES are conducted, then OUTPUTS/RESULTS can be produced • If OUTPUTS/RESULTS are produced, then the OBJECTIVES are accomplished • If the OBJECTIVES are accomplished, then this should contribute to the overall GOAL
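As a minimal illustration of how the levels, indicators, means of verification and assumptions fit together, the Python sketch below holds the complementary-feeding example as a plain data structure. All wording and field names are illustrative, not a prescribed logframe format.

```python
# A minimal sketch of a logframe held as a plain data structure,
# using the complementary-feeding example from earlier slides.
# All wording and field names are illustrative, not a prescribed format.

logframe = {
    "goal": {
        "statement": "Malnutrition rates amongst young children reduced",
        "indicators": ["GAM prevalence in children 6-59 months"],
        "means_of_verification": ["Anthropometric surveys"],
    },
    "objective": {
        "statement": "Young children receive appropriate complementary food",
        "indicators": ["% of children 6-23 months fed appropriately"],
        "means_of_verification": ["Household survey"],
        "assumptions": ["Caregivers can access the recommended foods"],
    },
    "output": {
        "statement": "Mothers know about good complementary food",
        "indicators": ["Number of mothers reached by education sessions"],
        "means_of_verification": ["Session attendance records"],
        "assumptions": ["Mothers are able to attend the sessions"],
    },
    "activities": ["Nutrition education sessions on complementary feeding"],
    "inputs": ["Trainers", "Training materials", "Budget"],
}

# The if-then logic reads bottom-up: inputs -> activities -> output -> objective -> goal.
for level in ("inputs", "activities", "output", "objective", "goal"):
    print(level.upper(), "->", logframe[level])
```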
Activities versus results. Completed activities are not results. • e.g. A hospital was built: this does not mean that injured and sick people can be treated in the hospital; maybe the hospital has no water and the beds have not been delivered. Results are the actual benefits or effects of completed activities: • e.g. Injured and sick people have access to a fully functional health facility.