How to Plan a Local Evaluation and Lessons Learned Philip Rodgers, Ph.D. Evaluation Scientist American Foundation for Suicide Prevention 2013 Garrett Lee Smith Combined Annual Grantee Meeting June 11-13, 2013, Washington DC
Acknowledgements • U.S. Department of Health & Human Services, Substance Abuse and Mental Health Services Administration • Howard Sudak, MD (AFSP) • Katrina Bledsoe, PhD (SPRC)
Why is evaluation important? • Required by funding agencies. • Improves performance. • Demonstrates effectiveness. • Advances knowledge. "As not everything can be done, there must be a basis for deciding which things are worth doing. Enter evaluation." (M. Q. Patton) Source for Patton quote: U.S. Department of Health and Human Services, Public Health Service. (2001). National Strategy for Suicide Prevention: Goals and Objectives for Action. Rockville, MD: U.S. Department of Health and Human Services, Public Health Service.
What are types of evaluation? Participatory, formative, summative, responsive, goal-free, empowerment, advisory, accreditation, adversary, utilization, consumer, theory-driven… We will address: Process, Outcome, and Impact Evaluations.
What is process evaluation? • Process evaluation "assesses the extent to which a program is operating as it was intended." Source: U.S. Government Accountability Office. (2005). Performance Measurement and Evaluation: Definitions and Relationships (GAO-05-739SP). Washington, DC: U.S. Government Accountability Office.
What is outcome evaluation? • Outcome evaluation "assesses the extent to which a program achieves its outcome-oriented objectives." Source: U.S. Government Accountability Office. (2005). Performance Measurement and Evaluation: Definitions and Relationships (GAO-05-739SP). Washington, DC: U.S. Government Accountability Office.
What is impact evaluation? • "Impact evaluation…assesses the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program." Source: U.S. Government Accountability Office. (2005). Performance Measurement and Evaluation: Definitions and Relationships (GAO-05-739SP). Washington, DC: U.S. Government Accountability Office.
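To make the "net effect" idea concrete, here is a minimal sketch in Python, assuming hypothetical posttest scores for a program group and a comparison group that stands in for what would have happened in the absence of the program. The group names and numbers are illustrative, not from any real evaluation.

```python
# Minimal sketch of an impact (net-effect) estimate.
# Hypothetical posttest scores; higher = more gatekeeper knowledge.
program_group = [72, 68, 75, 80, 77, 70]      # received the training
comparison_group = [61, 59, 66, 63, 65, 60]   # did not receive the training

def mean(scores):
    return sum(scores) / len(scores)

# The comparison group's mean stands in for the counterfactual:
# what participants would likely have scored without the program.
net_effect = mean(program_group) - mean(comparison_group)
print(f"Estimated net effect: {net_effect:.1f} points")
```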
Logic models can drive evaluations [Figure: generic gatekeeper training logic model, annotated to show where process evaluation, outcome evaluation, and impact evaluation (vs. a control group) apply]
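Purely as an illustration (the slide's original logic model figure is not reproduced here), a generic gatekeeper training logic model can be sketched as a simple mapping from inputs through activities and outputs to outcomes and impact, with each stage tagged by the evaluation type that typically examines it. The stage labels and items are assumptions based on common logic model practice, not the presenter's specific model.

```python
# Illustrative sketch of a generic gatekeeper-training logic model,
# annotated with the evaluation type that typically examines each part.
logic_model = {
    "inputs":     {"items": ["funding", "trainers", "curriculum"],
                   "evaluation": "process"},
    "activities": {"items": ["deliver gatekeeper trainings"],
                   "evaluation": "process"},
    "outputs":    {"items": ["trainings held", "people trained"],
                   "evaluation": "process"},
    "outcomes":   {"items": ["knowledge", "attitudes", "referral behavior"],
                   "evaluation": "outcome"},
    "impact":     {"items": ["net effect vs. control group"],
                   "evaluation": "impact"},
}

for stage, info in logic_model.items():
    print(f"{stage}: {', '.join(info['items'])} ({info['evaluation']} evaluation)")
```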
Where do evaluation questions come from? • Generally, from the goals listed in a logic model. • More specifically, defined by stakeholders through a collaborative process. • Depending upon circumstances, stakeholders can be funders, participants, trainers, evaluators, and others, or a combination of these.
Divergent, convergent process… • Stakeholders meet with relevant materials (grant application, logic model, etc.). • After reviewing the materials, engage in a divergent process: a free association of possible evaluation questions. • After the divergent process, stakeholders collectively narrow the list of questions to manageable proportions through a convergent process.
What is measurement? • Measurement is the means you use to collect data. • It includes what data you collect, how you collect it, and how well you collect it.
How will you collect data? • Questionnaires (in-person, mail, email, phone) • Psychological Tests • Interviews • Health Records • Health Statistics • Observations • Logs
Where do you find measures? • Create your own (pilot test!) • Borrow from other evaluations (with permission!) • Search the literature (see Additional Resources) • Use standardized measures (may cost) • Brown (adult) and Goldston (adolescent) reviews • Use existing data sources and records
What evaluation design do you need? • What is your purpose? • Performance assessment? • Evidence of concept? • Evidence of effectiveness?
There are four basic evaluation designs (X = program or intervention, O = observation or measurement):
Posttest only: X O
Pre- and posttest: O X O
Posttest only w/ control: X O (program group); O (control group)
Pre- and posttest w/ control: O X O (program group); O O (control group)
Additional data collection points can be added to the basic designs:
Posttest only: X O O
Pre- and posttest: O X O O
Posttest only w/ control: X O O (program group); O O (control group)
Pre- and posttest w/ control: O X O O (program group); O O O (control group)
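One common way to analyze the pre- and posttest design with a control group shown above is to compare each group's pre-to-post change; the difference between the two changes serves as the estimated program effect. The sketch below uses made-up scores, and the analysis choice is an assumption for illustration, not a recommendation from this presentation.

```python
# Hedged sketch: analyzing a pre- and posttest design with a control group
# (O X O over O O) by comparing pre-to-post change in each group.
trained = {"pre": [55, 60, 58, 62], "post": [70, 74, 69, 75]}   # O X O
control = {"pre": [56, 59, 57, 61], "post": [58, 60, 59, 62]}   # O   O

def mean(xs):
    return sum(xs) / len(xs)

change_trained = mean(trained["post"]) - mean(trained["pre"])
change_control = mean(control["post"]) - mean(control["pre"])

# Subtracting the control group's change removes shared influences
# (e.g., history, maturation) that affect both groups alike.
estimated_effect = change_trained - change_control
print(f"Trained-group change:  {change_trained:.1f}")
print(f"Control-group change:  {change_control:.1f}")
print(f"Estimated program effect: {estimated_effect:.1f}")
```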
Best evidence comes when subjects are randomly assigned to groups [Figure: a pool of subjects or groups is randomly split into an experimental group, which receives the intervention (X), and a control group, which does not] Random assignment increases the likelihood that subjects in both groups are equivalent with regard to factors that may be related to outcomes.
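A minimal sketch of random assignment, assuming a simple list of hypothetical participant IDs; in practice, assignment is often done at the site or school level rather than the individual level.

```python
import random

# Hypothetical pool of subjects (or sites) to be randomized.
pool = [f"participant_{i}" for i in range(1, 21)]

random.seed(2013)          # fixed seed so the assignment can be reproduced
random.shuffle(pool)

midpoint = len(pool) // 2
experimental_group = pool[:midpoint]   # will receive the training (X)
control_group = pool[midpoint:]        # will not receive the training

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```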
If random assignment is not possible… • Compare groups that are similar. • Use a pretest so that group differences are, to some extent, accounted for (see the sketch below).
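The pretest's value with nonequivalent groups can be shown with a small worked example: when the comparison group starts out higher, raw posttest means understate the program's effect, while comparing each group's change partly accounts for the baseline difference. All numbers below are hypothetical.

```python
# Sketch: why a pretest helps when groups are not randomly assigned.
# The comparison group starts out higher, so raw posttest means mislead.
program    = {"pre": 50.0, "post": 62.0}
comparison = {"pre": 58.0, "post": 60.0}   # started higher at pretest

# Naive posttest-only comparison understates the program's effect...
posttest_only_estimate = program["post"] - comparison["post"]       # 2.0

# ...while comparing each group's change partly accounts for the
# baseline difference between the nonequivalent groups.
change_based_estimate = (program["post"] - program["pre"]) - \
                        (comparison["post"] - comparison["pre"])     # 10.0

print(f"Posttest-only estimate: {posttest_only_estimate:.1f}")
print(f"Change-based estimate:  {change_based_estimate:.1f}")
```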
Philip Rodgers, PhD American Foundation for Suicide Prevention prodgers@afsp.org 802-446-2446