Overview of Evaluation Designs
Learning objectives
By the end of this presentation, you will be able to:
• Explain evaluation design
• Describe the differences between types of evaluation designs
• Identify the key elements of each type of evaluation design
• Understand the key considerations in selecting a design for conducting an evaluation of your AmeriCorps program
Overview of presentation
• What is evaluation design?
• CNCS’s evaluation continuum
• How to select an appropriate evaluation design for your program
• Key elements of each type of evaluation design
• Evaluation resources and tools
What is evaluation?
• Evaluation is the use of research methods to assess a program’s design, implementation, outcomes, or impacts.
• Evaluation looks at the results of your investment of time, expertise, resources, and energy, and compares those results with what you said you wanted to achieve in your program’s logic model.
Building Evidence of Effectiveness
The continuum runs from evaluation activities toward stronger evidence:
• Evaluation: develop a logic model; gather evidence; identify a strong program design
• Evidence informed: ensure effective implementation; assess the program’s outcomes
• Evidence based: obtain evidence of positive program outcomes; attain strong evidence of positive program outcomes
What is evaluation design?
Evaluation design is the structure that provides information to answer questions you have about your program. Evaluation design means thinking about:
• Why conduct an evaluation
• What to measure
• Who to include in the evaluation (e.g., all beneficiaries or a sample)
• When and how often data will be collected
• What methods will be used to collect data
• Whether comparison with another group is appropriate and feasible
The evaluation design you choose depends on what kinds of questions your evaluation is meant to answer.
Key considerations in selecting a design
The appropriate design will largely depend upon:
• Your program model
• The primary purpose or goal of the evaluation
• The specific question(s) the evaluation will address
• Resources available for the evaluation
• Funder evaluation requirements
Program logic model
• A program logic model is a detailed visual representation of a program and its theory of change.
• It communicates how a program works by depicting the intended relationships among program components.
Example logic model for a literacy program
For an overview of logic models, CNCS grantees can refer to the module “How to Develop a Program Logic Model” located on the Knowledge Network.
Define purpose and scope
Each evaluation should have a primary purpose around which it can be designed and planned.
• Why is the evaluation being done?
• What do you want to learn?
• How will the results be used? By whom?
Resource considerations
Consider what resources are available to carry out the evaluation:
• Staff time
• Funding
• Outside evaluation expertise
It is not necessary to evaluate your entire program.
• Evaluation can be narrow or broad depending on the questions to be answered
• Evaluation is not a one-time activity but a series of activities over time that align with the life cycle of your program
Basic types of evaluation designs
The two “sides” of a program’s logic model align with the two types of evaluation designs: process and outcome.
Process evaluation
Goals:
• Documents what the program is doing
• Documents to what extent and how consistently the program has been implemented as intended
• Informs changes or improvements in the program’s operations
Common features:
• Does not require a comparison group
• Includes qualitative and quantitative data collection
• Does not require advanced statistical methods
Examples of process evaluation questions
• Is the program being implemented as designed or planned?
• Is the program being implemented the same way at each site?
• Is the program reaching the intended target population with the appropriate services at the planned rate and “dosage”?
• Are there any components of the program that are not working well? Why or why not?
• Are program beneficiaries generally satisfied with the program? Why or why not?
• Are the resources adequate for the successful implementation of the program?
Examples of methods and data collection tools for process evaluation
Data sources:
• Program- and school-level administrative data
• Site visits to the schools to examine the fidelity of program implementation
• Observations of the literacy intervention with individual students
• Interviews with school staff and administration
• Focus groups with teachers and students
Analysis:
• Thematic identification
• Confirmation of findings across sources
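The “thematic identification” and “confirmation across sources” steps above can be tallied mechanically once excerpts have been hand-coded. A minimal Python sketch, using invented source names and theme labels (none of these come from the presentation):

```python
from collections import Counter

# Hypothetical coded excerpts: each interview, focus group, or site-visit
# excerpt has been hand-coded with one or more theme labels (invented here).
coded_excerpts = [
    {"source": "teacher_interview_01", "themes": ["fidelity", "staffing"]},
    {"source": "teacher_interview_02", "themes": ["fidelity"]},
    {"source": "student_focus_group_01", "themes": ["engagement", "fidelity"]},
    {"source": "site_visit_notes_01", "themes": ["staffing"]},
]

# Tally how often each theme appears across all excerpts.
theme_counts = Counter(
    theme for excerpt in coded_excerpts for theme in excerpt["themes"]
)

# Track which distinct data sources mention each theme.
sources_per_theme = {}
for excerpt in coded_excerpts:
    for theme in excerpt["themes"]:
        sources_per_theme.setdefault(theme, set()).add(excerpt["source"])

# A theme is "confirmed across sources" if more than one source mentions it.
confirmed = [t for t, s in sources_per_theme.items() if len(s) > 1]

print(theme_counts.most_common())
print(sorted(confirmed))
```

In a real process evaluation the coding itself is the hard, qualitative part; this only shows how the cross-source confirmation step can be made systematic.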
Group exercise #1: Designing a process evaluation for a literacy program
Research question:
• Is the literacy program being implemented consistent with the program’s logic model and theory of change?
Design considerations:
• What to measure
• Who to include in the evaluation
• When and how often data will be collected
• What methods will be used to collect data
Example crosswalk for a process evaluation of a literacy program
Optional exercise #1: Designing a process evaluation for a literacy program
Outcome evaluation
Goals:
• Identifies the results or effects of a program
• Measures program beneficiaries’ changes in knowledge, attitude(s), and/or behavior(s) that result from a program
Common features:
• May include a comparison group (impact evaluation)
• Typically requires quantitative data
• Often requires advanced statistical methods
What is a comparison or control group?
• A group of individuals not participating in the program or receiving the intervention
• Necessary to determine whether the program, rather than some other factor, is causing observed changes
• “Comparison group” is associated with a quasi-experimental design; “control group” is associated with an experimental design
Outcome evaluation questions
• Are there differences in outcomes for program beneficiaries compared to those not in the program?
• Did all types of program beneficiaries benefit from the program, or only specific subgroups?
• Did the program change beneficiaries’ knowledge, attitudes, behaviors, or condition?
Outcome evaluation designs
• Non-experimental designs
– Single group post design
– Single group pre-post design
– Retrospective study designs
• Quasi-experimental design
• Experimental design (randomized controlled trial)
Less rigorous outcome evaluation designs
• Single group post design
– Examines program beneficiaries after they receive program services
• Single group pre-post design
– Compares program beneficiaries before and after they receive program services
• Retrospective study designs
– Ask previous program beneficiaries for their opinions on the effects of the program services they received
• Member surveys
– Survey members on their program experiences and opinions of the results of their service
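As an illustration, a single group pre-post analysis usually boils down to summarizing each beneficiary’s change and testing whether the average change differs from zero. A minimal Python sketch with invented reading scores (not data from the presentation):

```python
import math
import statistics

# Hypothetical pre- and post-program reading scores for one group of
# beneficiaries (no comparison group, as in a single group pre-post design).
pre = [61, 55, 70, 64, 58, 66, 52, 60]
post = [68, 59, 74, 70, 57, 71, 58, 66]

# The per-student change is the quantity of interest.
changes = [after - before for before, after in zip(pre, post)]
mean_change = statistics.mean(changes)

# Paired t statistic: mean change divided by its standard error.
sd = statistics.stdev(changes)
t_stat = mean_change / (sd / math.sqrt(len(changes)))

print(f"mean change = {mean_change:.2f}, t = {t_stat:.2f}")
```

Note the design’s weakness is visible in the code itself: without a comparison group, nothing distinguishes program effects from maturation or other changes over the same period.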
Quasi-experimental and experimental designs
Quasi-experimental design (QED):
• Form a comparison group from a population similar to program participants (e.g., similar participants from another program, extra applicants, etc.)
Experimental design (randomized controlled trial, RCT):
• Randomly assign new eligible applicants either to receive the intervention/program or to receive alternative/delayed services (control group)
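The random assignment at the heart of an RCT is mechanically simple: shuffle the eligible applicant pool and split it. A sketch in Python, with a made-up applicant list and a fixed seed so the illustration is reproducible:

```python
import random

# Hypothetical pool of newly eligible applicants (IDs are invented).
applicants = [f"applicant_{i:02d}" for i in range(1, 21)]

# Shuffle with a seeded generator, then split the pool in half:
# the first half receives the intervention, the second half receives
# alternative/delayed services (the control group).
rng = random.Random(42)
shuffled = applicants[:]
rng.shuffle(shuffled)

treatment = shuffled[: len(shuffled) // 2]
control = shuffled[len(shuffled) // 2 :]

print(len(treatment), len(control))
```

Because chance alone decides who lands in each group, the two groups are comparable in expectation, which is what lets an RCT attribute outcome differences to the program itself.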
Group exercise #2: Designing an outcome evaluation of a literacy program
Research question:
• What impact does the literacy intervention program have on student reading levels relative to a comparison group of students?
Design considerations:
• What to measure
• Who to include in the evaluation
• When and how often data will be collected
• What methods will be used to collect data
Example crosswalk for an outcome evaluation of a literacy program
Optional exercise #2: Designing an outcome evaluation for a literacy program
Evaluation designs and CNCS’s requirements *Fulfills CNCS evaluation design requirement for large, recompete grantees if a reasonable comparison group is identified and appropriate matching/propensity scoring is used in the analysis.
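The “matching” mentioned in the footnote can be pictured with a toy example: pair each program participant with the non-participant whose baseline score is closest. Real propensity score matching estimates each student’s probability of participating from many covariates; the single invented baseline score below (all IDs and values are hypothetical) stands in for that score:

```python
# Toy quasi-experimental matching: greedy nearest-neighbor match on one
# baseline score, without replacement. All data here are invented.
participants = {"p1": 48, "p2": 55, "p3": 62}      # id -> baseline score
non_participants = {"n1": 47, "n2": 50, "n3": 56, "n4": 61, "n5": 70}

matches = {}
available = dict(non_participants)
for pid, score in sorted(participants.items()):
    # Pick the closest remaining non-participant, then remove them
    # from the pool so each control is used at most once.
    best = min(available, key=lambda nid: abs(available[nid] - score))
    matches[pid] = best
    del available[best]

print(matches)
```

A well-matched comparison group makes the two groups look alike at baseline, which is what allows a quasi-experimental design to approximate the comparison an RCT gets by randomization.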
Resources
• CNCS’s Knowledge Network: https://www.nationalserviceresources.gov/evaluation-americorps
• The American Evaluation Association: http://www.eval.org
• The Evaluation Center: http://www.wmich.edu/evalctr/
• Innovation Network’s Point K Learning Center: http://www.innonet.org
• Digital Resources for Evaluators: http://www.resources4evaluators.info/CommunitiesOfEvaluators.html