
A review of Impact Evaluations Conducted in SA


  1. A review of Impact Evaluations Conducted in SA Benita van Wyk (Williams) Feedback Research & Analytics bvanwyk@feedbackra.co.za

  2. Purpose of the Presentation • This presentation shares insights from a review of a convenience sample of so-called “Impact Evaluations” commissioned by selected government departments and agencies in South Africa over the past five years. • The purpose is to explore how the concept “Impact Evaluation” is understood and applied in the South African context. • This practical, on-the-ground understanding of impact evaluation is contrasted with various theoretical understandings of impact evaluation.

  3. Background

  4. Focus of the Study • Evaluations called “Impact Evaluations”, or “Evaluations” that also included an “Impact” focus • Specifically excluded ex-ante “Social Impact Assessments” and “Environmental Impact Assessments” • Based on document review – TORs, proposals, evaluation reports

  5. Definitions • “Impact evaluation is intended to determine more broadly whether the program had the desired effects on individuals, households, and institutions and whether those effects are attributable to the program intervention. Impact evaluations can also explore unintended consequences, whether positive or negative, on beneficiaries” (Baker, 2000)

  6. Definitions • As per the NONIE / DAC definition “impact” is: • “positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended”. This definition broadens impact evaluation beyond direct effects to include the full range of impacts at all levels of the results chain. • Where do borders around Impact get drawn in reality?

  7. Barriers to IE use • Deemed to be expensive, time consuming, and technically complex • Findings can be politically sensitive, particularly if they are negative • Difficult to design an IE that ensures timeous answers, to the right questions, with sufficient analytical rigor • Limited availability and quality of data

  8. Context • Government • The Government Wide Monitoring and Evaluation System (GWMES) is still focusing on the roll-out of monitoring systems. • No formal Government Wide Evaluation Policy, and no government policy on Impact Evaluation • Sensitization to Impact Evaluation since 2006, encouraging “a thousand flowers blooming” • M&E Community • Has not made a public statement about its position on Impact Evaluation • The NONIE statement was disseminated, with some limited discussion • Donor Community • Interested in creating more demand for IE • Stronger focus on “Outcome Evaluation” than “Impact Evaluation”

  9. Sample • Convenience Sample • Government Tender Bulletins over past 5 years • Notice on SAMEA List Serve • Personal appeals to key informants • Snowball Methodology - referrals from initial respondents

  10. Challenges • Access to information – sensitive findings and cautious public officials in an election year • Knowledge management – “Which impact evaluation?” • Availability of documents – reports, terms of reference • Self-screening based on insufficiently clear criteria for what constitutes an IE – “But this is not a real impact evaluation”

  11. Findings – Responses • Number of leads • Types of responses – incomplete studies (TORs only); complete studies (reports, presentations, summary reports)

  12. Questions & Variables

  13. Findings – The Questions • Descriptive questions: questions that focus on determining how many, what proportion, etc., for the purpose of describing some aspect of the evaluation context. • Normative questions: questions that compare outcomes of an intervention against a pre-existing standard or norm. • Analytic-interpretive questions that build our knowledge base: questions that ask about the state of the debate on issues important for decision making about specific policies. • Attributive questions: questions that attempt to attribute outcomes directly to an intervention such as a policy change or a programme. Chelimsky, E. (2007). Factors Influencing the Choice of Methods in Federal Evaluation Practice. New Directions for Evaluation, 113, 13–33.

  14. Findings – Variables under Investigation • Independent variables less clear – impact of uncontrolled independent variables, looking for various kinds of results: Impact of HIV/AIDS on employment; Impact Evaluation of ECD; Socio-Economic Impact of Gambling • Dependent and independent variables clear – impact of controlled independent variables, looking for various kinds of results: Child Support Grant on nutrition; Public awareness campaign on audience knowledge and attitudes

  15. Timing • The timing interacts with the questions and the variables under investigation

  16. Design & Methods (Experimental, Quasi-Experimental, Mixed-Methods, Qualitative Methods, etc.)

  17. Designs and Methods: Examples • Whole range of designs – mixed methods, regression, quasi-experimental • The Impact of Unconditional Cash Transfers on Nutrition: The South African Child Support Grant – Jorge M. Aguero, Michael R. Carter, Ingrid Woolard • Rapid impact assessment of NMTT's work in Cape Town – Impact Consulting

  18. SA Child Support Grant Evaluation The SA Child Support Grant • In 1998 the Child Support Grant was implemented – a “no strings” grant paid to the “Primary Care Giver” (PCG) of a child – 98% women in the evaluation sample • Initially payable for children (under 7) in households with a monthly income of <R800 (urban) or <R1100 (rural) – later the income test was changed to include only the income of the PCG and his/her spouse • The means test has not changed despite inflation of 40% between 1998 and 2004 • The value of the grant was R100 in 1998 and is currently R180
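  A rough worked illustration of the erosion implied above (simple arithmetic, not a figure from the study): with 40% cumulative inflation over 1998–2004, keeping the urban threshold at its 1998 real value would have required raising it, while the unchanged R800 is worth correspondingly less in 1998 rands.

  \[
    \text{indexed urban threshold} = R800 \times 1.40 = R1\,120,
    \qquad
    \text{real value of the unchanged } R800 \approx \frac{R800}{1.40} \approx R571 \ \text{(in 1998 rands)}
  \]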

  19. SA Child Support Grant Evaluation Evaluation Challenges • Single national programme – no purposefully randomized treatment and control groups existed • No baseline data existed • Selection into treatment is not random, and the dosage received is not uniform (delays in enrolling), so a binary treatment variable could not be used

  20. SA Child Support Grant Evaluation • Focused the evaluation on the impact of the CSG on the nutritional gain of children during their first 36 months, the “window of nutritional vulnerability” • Operational Definitions • Treatment: check what outcomes are produced by different “dosages” of the grant, using a Continuous Treatment Estimator over the 0–3 year window • Effect: height-for-age z-score – an ex-post measure of the effect of the 0–3 year window of nutritional vulnerability (height was measured twice, and age was taken from the public health card) • Control: developed a Standardized Eagerness measure (did a child enrol quicker than peers in the same locality/age cohort or not) • Other covariates – age, education, sex, marital status and employment status
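  For orientation: a height-for-age z-score expresses a child's height relative to a reference population of the same age and sex, i.e. (observed height − reference median) / reference standard deviation. The sketch below illustrates the dosage-versus-outcome idea in its simplest form – an ordinary least squares regression of the z-score on the share of the 36-month window covered by the grant, with the eagerness measure and other covariates as controls. It is not the study's actual continuous treatment estimator, and the input file and all column names are hypothetical.

  # Illustrative sketch only, not the study's estimator.
  # "dosage" = months of CSG receipt in the 0-36 month window, divided by 36.
  # The input file and all column names are hypothetical.
  import pandas as pd
  import statsmodels.formula.api as smf

  df = pd.read_csv("csg_children.csv")  # hypothetical analysis file

  model = smf.ols(
      "haz ~ dosage + eagerness_z + age_months + sex"
      " + pcg_educ_years + pcg_married + pcg_employed",
      data=df,
  ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

  print(model.summary())  # the coefficient on dosage is the quantity of interest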

  21. SA Child Support Grant Evaluation • Findings • Targeted, unconditional CSG payments have bolstered early childhood nutrition, as signalled by child height-for-age • Economically and statistically significant effects for large dosages of CSG support • Effects are insignificant for children who received CSG support for less than 50% of the 36-month window • The effect even holds across local differences (e.g. in the supply of health-related public goods) • Income and nutrition appear to be closely connected – perhaps because the grant is paid to women

  22. Rapid Impact Assessment • Rapid impact assessment of the Niall Mellon Township Trust's work in Cape Town • Housing project evaluated using a rapid appraisal methodology incorporating MSC • Income-earning adults: dignity; security from crime • Grade 11s: safety from fires – school equipment; dignity • Primary care-givers: psychological well-being; health/hygiene – self and children; dignity • Senior citizens: psychological well-being; health/hygiene; safety and security

  23. Learning

  24. Learning • The kinds of learning supported by the conclusions and recommendations from impact evaluations • process learning, • organisational learning, • impact learning, • knowledge development and policy learning

  25. Intended Use • The intended use supported by impact evaluation - We refer to use as discussed by • Marra (2000), Patton (1997), Sandison (2006) and Weiss (1999)

  26. Use – Marra 2000 • Instrumental • Decision makers have clear goals, seek direct attainment of these goals and have access to relevant information • Enlightenment • Users base their decisions on a gradual accumulation and synthesis of information

  27. Use (Sandison 2006) • Instrumental use • Direct implementation of findings and recommendations • Conceptual use • The evaluation influences through new ideas and concepts • Process use (learning) • Involves learning on the part of the people and management involved in the evaluation • Legitimising use • Corroborates a decision or understanding that the organisation already holds, providing an independent reference • Ritual use • Where evaluations serve a purely symbolic purpose, representing a desirable organisational quality such as accountability • Mis-use • Involves suppressing, subverting, misrepresenting or distorting findings for political reasons or personal advantage • Non-use • Where the evaluation is ignored because users find little or no value in the findings, are not aware of it, or the context has changed dramatically

  28. Use (Patton 1997) • Rendering judgements • Underpinned by the accountability perspective (summative evaluation, accountability, audits, quality control, cost-benefit decisions, deciding a programme's future, accreditation/licensing) • Facilitating improvements • Underpinned by the developmental perspective (formative evaluation, identifying strengths and weaknesses, continuous improvement, quality enhancement, being a learning organisation, managing more effectively, adapting a model locally) • Generating knowledge • Underpinned by the knowledge perspective of academic values (generalisations about effectiveness, extrapolating principles about what works, theory building, synthesizing patterns across programmes, scholarly publishing, policy making)

  29. Final Thoughts • Uptake of IE • Definitions/discourses around IE • Capacity for IE
