Evaluation Basics Anita Singh, PhD Family Programs Evaluation Branch Chief Office of Research and Analysis Food and Nutrition Service, USDA
Why Evaluate? • To obtain ongoing, systematic information about a project • Project management (includes project refinement and planning) • Project efficiency • Project accountability
Types of Evaluation • Formative • Process • Outcome • Impact
Formative Research • Typically occurs when an intervention is being developed • Results are used in designing the intervention • Results are informative – not definitive • Examples – focus groups, literature reviews, etc.
Process Evaluation • Tracking the actual implementation (e.g. delivery, resources) • Used to determine if intervention was delivered as designed • Helps identify barriers to implementation and strategies to overcome barriers
Outcome Evaluation • Addresses whether anticipated changes occurred in conjunction with the intervention • Example: pre-/post-intervention test of nutrition knowledge • Indicates the degree of change but is not conclusive evidence
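To make the pre-/post-intervention example concrete, here is a minimal sketch (not part of the original slides) of a paired comparison of nutrition-knowledge scores for the same participants before and after an intervention; the scores are invented and the scipy dependency is an assumption for illustration.

```python
# Minimal sketch of a pre-/post-intervention outcome comparison.
# The scores below are made-up illustration data, not real results.
from scipy import stats

pre_scores  = [12, 15, 9, 14, 11, 13, 10, 16]   # knowledge test before the intervention
post_scores = [14, 18, 11, 15, 13, 16, 12, 19]  # same participants, after the intervention

# Paired t-test: did mean knowledge change from pre to post?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_change = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"Mean change: {mean_change:.2f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
# Note: a significant pre/post change shows association, not causation;
# as the slide says, this design alone is not conclusive evidence of impact.
```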
Impact Evaluation • Allows one to conclude authoritatively that the observed outcomes are due to the intervention • Can draw cause and effect conclusions by isolating the intervention from other factors that might contribute to the outcome.
Planning for an Impact Evaluation IS THE INTERVENTION EVALUABLE? • What are the objectives? • What is the expected size of the impact? • Why, how and when is the intervention expected to achieve the objectives? • Will the intervention be implemented as intended?
Planning for an Impact Evaluation • Build on available research • Consider study design – use of “experimental design” versus “quasi-experimental design” – cost and resource considerations
Design Considerations • Experimental – strongest type of design for drawing cause-and-effect conclusions; uses random assignment; cost considerations • Quasi-experimental – does not use random assignment; can have a control/comparison group; may include multiple groups and/or multiple waves of data collection • Other: program evaluations – observational studies/surveillance data
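As a hedged illustration of the random-assignment step that distinguishes an experimental design, the sketch below randomly assigns a list of hypothetical sites to intervention and control groups; the site names and group sizes are assumptions for illustration only.

```python
# Minimal sketch: random assignment of units (here, hypothetical sites)
# to intervention vs. control, the defining feature of an experimental design.
import random

sites = ["Site A", "Site B", "Site C", "Site D", "Site E", "Site F"]

rng = random.Random(2024)   # fixed seed so the assignment is reproducible and auditable
shuffled = sites[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
assignment = {
    "intervention": sorted(shuffled[:half]),
    "control": sorted(shuffled[half:]),
}
print(assignment)
```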
Planning for an Impact Evaluation • Use “SMART” objectives (Specific, Measurable, Achievable, Relevant, Time-bound) • Choose measures that fit the intervention • Plan for protection of human subjects
As the Intervention Begins • Collect impact (baseline) data at start-up, before the intervention has reached steady state • Follow-up data collection (depending on interest/resources) After the Intervention • Report the findings • Use the findings
What is Social Marketing? It is a: • Systematic and strategic planning process. • Social or behavior change strategy. • Total package of strategies carefully chosen based on characteristics of the target audience. • Process that uses strategies from commercial marketing.
Social Marketing is Not: • Just advertising or communication • A media campaign • Reaching everyone • A fast process • A theory
Basic Principles of Social Marketing • Behavior change • Audience orientation • Audience segmentation • Exchange • Competition • Marketing mix (4 P’s)
Why Evaluate Social Marketing? • Support continuing improvement • Establish effect and inform program accountability
Challenges for Evaluation • Assess extent of exposure • Measure intermediate outcomes • Interventions are less intense, so designs and measurement tools need to be sensitive to small changes in behavior • Provide timely feedback to inform continual improvement (Source: Hersey et al., 1999)
Steps for Program Evaluation • Engage stakeholders • Describe the Program (e.g. develop a Logic Model; develop a conceptual framework) • The next 2 slides present an example of developing a logic model (Source: UW Extension)
Simplest form of a logic model: INPUTS → OUTPUTS → OUTCOMES (Source: University of Wisconsin-Extension, Program Development and Evaluation)
A bit more detail: INPUTS (program investments – what we invest) → OUTPUTS (activities – what we do; participation – who we reach) → OUTCOMES (short-, medium-, and long-term – what results). SO WHAT? What is the VALUE? (Source: University of Wisconsin-Extension, Program Development and Evaluation)
Steps for Program Evaluation • Identify evaluation questions • Formative testing • Develop data collection and analysis plan – select appropriate measures • Analyze and interpret the data • Use and disseminate the findings
Social Marketing Planning Process • A structured approach to developing and implementing a program or intervention for voluntary behavior change. • Six Phases: 1. Problem description. 2. Formative research. 3. Strategy development. 4. Intervention design. 5. Evaluation. 6. Implementation. (Source: CDC’s Social Marketing for Nutrition and Physical Activity Online Course)
Evaluating the Impact of Social Marketing • Challenges in evaluating behavioral impact • Incorrectly ascribing impact – contamination issues • Use of inappropriate measures of change, e.g., focusing on general long-term changes rather than intermediate changes and specific behaviors
Measurement Selection Includes • Knowing the information needs • Understanding the campaign rationale • Grounding measures in a theoretical model of behavior change • Selecting the data collection approach (e.g., mail survey, phone, in-person interview, records) – considering reliability, response rate, and cost • Selecting measurement tools – validity and reliability of instruments
Study Design • Use of comparison sites • Helps rule out alternative explanations that could otherwise account for observed results • Strengthens internal validity
Study Design • Timing of data collection • Pre or baseline • After implementation • Post Campaign
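One common way to combine a comparison site with pre- and post-implementation measurements is a difference-in-differences estimate: the change at the intervention site minus the change at the comparison site. That technique is not named on these slides; the sketch below is only an illustration, and all numbers are invented.

```python
# Sketch: difference-in-differences with one intervention site and one comparison site.
# All numbers are invented illustration data (e.g., mean servings of fruit/veg per day).
pre  = {"intervention_site": 2.1, "comparison_site": 2.0}
post = {"intervention_site": 2.8, "comparison_site": 2.2}

change_intervention = post["intervention_site"] - pre["intervention_site"]  # 0.7
change_comparison   = post["comparison_site"]   - pre["comparison_site"]    # 0.2

# Subtracting the comparison-site change removes trends that affected both sites
# (seasonality, secular trends); this is how a comparison group helps rule out
# alternative explanations and strengthens internal validity.
did_estimate = change_intervention - change_comparison
print(f"Difference-in-differences estimate: {did_estimate:.1f}")  # 0.5
```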
Study Design – Sample Size • Statistical power – based on the amount of change that can be expected • Once the desired magnitude of change has been established, select/calculate a sample size with enough statistical power to determine whether the change is due to the intervention rather than random chance (see Hersey et al.)
Sample Size Determination – Which one would require a larger sample?
Sample Size depends on: • Difference that is expected to be detected • Measurement tool • Study design – cross-sectional versus longitudinal study
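As a rough sketch of the "calculate sample size with statistical power" step, the example below uses statsmodels to estimate how many respondents per group would be needed to detect a hypothetical increase from 30% to 40% of respondents reporting a target behavior, with 80% power at a 5% significance level; the percentages, power, and alpha are assumptions for illustration, not figures from the slides.

```python
# Sketch: sample size per group needed to detect an assumed change in a proportion
# (e.g., share of respondents meeting a behavioral target) with 80% power, alpha = 0.05.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_baseline = 0.30   # assumed baseline proportion (illustrative)
p_followup = 0.40   # assumed proportion after the campaign (illustrative)

effect_size = proportion_effectsize(p_followup, p_baseline)
n_per_group = NormalIndPower().solve_power(effect_size=effect_size,
                                           alpha=0.05, power=0.80,
                                           alternative="two-sided")
print(f"About {n_per_group:.0f} respondents per group")
# Smaller expected differences, noisier measurement tools, or cross-sectional
# (rather than longitudinal) designs all push this number up.
```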
Other Considerations • Response rate – the higher the response rate, the greater the likelihood that the sample is representative of the study population • Example: a survey with 30 percent completion versus 80 percent completion
Other Considerations • Low response rates and dropout – address with approaches such as “intention to treat” • Intention-to-treat analyses are done to avoid the effects of crossover and dropout, which may break the randomization to treatment groups in a study • Intention-to-treat analysis provides information about the potential effects of a treatment policy rather than the potential effects of a specific treatment
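A minimal sketch of the intention-to-treat idea follows; the records and field names are hypothetical. Every participant is analyzed in the group to which they were randomized, whether or not they completed the intervention, which preserves the randomization and answers a "treatment policy" question; a per-protocol contrast is shown only for comparison.

```python
# Sketch: intention-to-treat (ITT) vs. per-protocol analysis.
# Each record: assigned group, whether they completed the intervention, outcome score.
# All values are hypothetical illustration data.
participants = [
    {"assigned": "intervention", "completed": True,  "score": 8},
    {"assigned": "intervention", "completed": False, "score": 5},  # dropped out
    {"assigned": "intervention", "completed": True,  "score": 9},
    {"assigned": "control",      "completed": True,  "score": 5},
    {"assigned": "control",      "completed": True,  "score": 6},
    {"assigned": "control",      "completed": False, "score": 4},
]

def mean_score(rows):
    return sum(r["score"] for r in rows) / len(rows)

# ITT: analyze everyone by the group they were randomized to (preserves randomization).
itt = {g: mean_score([r for r in participants if r["assigned"] == g])
       for g in ("intervention", "control")}

# Per-protocol: only those who completed; this can break randomization and bias results.
per_protocol = {g: mean_score([r for r in participants
                               if r["assigned"] == g and r["completed"]])
                for g in ("intervention", "control")}

print("ITT group means:", itt)
print("Per-protocol group means:", per_protocol)
```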
Other Considerations • Selection bias: the sample is not truly representative of the study population • Repeated interviews • Sample attrition • If the attrition rate is high, compare pre/baseline scores of non-dropouts with dropouts • May need to adjust for differences
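The sketch below illustrates the attrition check described above, comparing baseline scores of participants who later dropped out with those who stayed, using a two-sample t-test; the scores are invented and the scipy dependency is an assumption.

```python
# Sketch: checking whether dropouts differed at baseline from those who stayed.
# Baseline scores are invented illustration data.
from scipy import stats

baseline_stayed   = [14, 12, 15, 13, 16, 14, 12, 15]
baseline_dropouts = [10, 11, 9, 12, 10]

t_stat, p_value = stats.ttest_ind(baseline_stayed, baseline_dropouts, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A meaningful baseline difference suggests attrition bias; analyses may then need
# to adjust for that difference, as the slide notes.
```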
Other Considerations • Seasonal effects – e.g., fresh fruit and vegetable consumption
Summary • Evaluation can provide valuable, ongoing systematic information about a project • Common evaluation features across delivery types • Choice of features and evaluation type(s) will be driven by your information needs
Social Marketing Resources CDC online course • http://www.cdc.gov/nccdphp/dnpa/socialmarketing/training/basics/index.htm (basics) • http://www.cdc.gov/nccdphp/dnpa/socialmarketing/training/phase5/index.htm (evaluation) Evaluating Social Marketing in Nutrition: A Resource Manual by Hersey et al. http://www.fns.usda.gov/oane/MENU/Published/nutritioneducation/Files/evalman-2.PDF
Evaluation Online Resources • Nutrition Education: Principles of Sound Impact Evaluation, FNS, Sept. 05 http://www.fns.usda.gov/oane/menu/Published/NutritionEducation/Files/EvaluationPrinciples.pdf • Building Capacity in Evaluating Outcomes – UW Extension, Oct. 08 http://www.uwex.edu/ces/pdande/evaluation/bceo/index.html
Resources (continued) • W.K. Kellogg Foundation Evaluation Handbook, Jan. 98 http://www.wkkf.org/default.aspx?tabid=75&CID=281&NID=61&LanguageID=0 • Developing a Logic Model: Teaching and Training Guide – E. Taylor-Powell and E. Henert; UW Extension, Feb. 08 http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html