This presentation explores the evaluability of multi-site community-based service integration initiatives, covering process and outcome evaluation questions, intervention domains, and formative and summative findings.
Evaluability of Multi-Site Community-Based Service Integration Initiatives
Janet S. Reed, PhD, MHA
William R. Holcomb, PhD, MBA
Presented at the Canadian Evaluation Society Conference, June 3, 2003, Vancouver, BC, Canada
Multi-Site Program
• Implemented over the previous 9 years
• 21 communities in 24 counties (of 115)
• 18 program goals
• Public and private stakeholders
• 2 previous evaluators
• Change in leadership
• Three-year evaluation period
Process Evaluation Questions
• Documentation of service delivery and citizen participation
• Involvement of local residents in planning and development
• Responsiveness to the needs of consumers
• Number of citizens attending community meetings, and their roles
• Number served by type of service, frequency, and demographics
• Changes resulting from their activities
• Openness of financial dealings (transparency)
• Range of resources spent on overhead, direct services, and coordination activities
• How communities have leveraged other funding sources to expand local services
• Program benefits as reported by community leaders
• Methods of increasing cooperation among community agencies
Outcome Evaluation Questions
• System changes as a result of program activities
• Independent impact of:
  • Educational enrichment activities on student academic achievement
  • Literacy programs on student academic achievement
  • School-linked mental health and counseling programs on discipline
  • Childcare services on family income and workforce participation
• Consumer satisfaction with services
• Satisfaction of stakeholders with the initiative
Multi-Site Evaluation
• Process indicators
• Outcomes relating to the 18 goals
• Interventions at four levels:
  • Community (health and employment training and assistance, community libraries, etc.)
  • School (educational enrichment, after-school activities, formal tutoring, extracurricular activities, pregnancy and smoking prevention, etc.)
  • Family (parenting, counseling, support, etc.)
  • Individual (tutoring, counseling, mentoring, behavioral aid, etc.)
• Control for demographic variables (a regression sketch follows below)
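One common way to operationalize "control for demographic variables" is a regression that includes treatment status alongside demographic covariates. The sketch below is a minimal, hypothetical example: the DataFrame and every column name (achievement, treated, age, income, race) are assumptions for illustration, and the slides do not say which model the evaluators actually used.

```python
# Hypothetical sketch: estimating an intervention's effect on an
# outcome while controlling for demographic variables via OLS.
# All column names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

def intervention_effect(df: pd.DataFrame):
    """Regress the outcome on treatment status plus demographics.

    The coefficient on `treated` estimates the intervention's
    effect net of the listed demographic covariates.
    """
    model = smf.ols("achievement ~ treated + age + income + C(race)",
                    data=df)
    return model.fit()

# Usage with a hypothetical data file:
# results = intervention_effect(pd.read_csv("site_outcomes.csv"))
# print(results.summary())
```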
Formative v. Summative
• Logic model
• Fidelity
• Target populations
• Empirical evidence for interventions
• Data collection
Cluster Evaluation
• Exploratory
• Program driven
• Extensive variation
• Planned-to-retrospective evaluation
• Role is formative
• Model is collaborative
Integrated Intervention Inventory
• Structured telephone interview
• 101 direct intervention staff
• Items collected for each intervention (an illustrative record layout follows this list):
  • Interventions
  • Process data
  • Sustainability of interventions
  • Indices measured
  • Data for outcome evaluation
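To make the inventory concrete, here is one way such an interview record might be structured. The class and every field name are hypothetical assumptions for illustration, not the instrument's actual items.

```python
# Hypothetical record structure for one intervention captured by a
# structured telephone interview; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class InterventionRecord:
    community: str
    location: str
    intervention_type: str      # e.g., "tutoring", "counseling"
    persons_served: int
    knows_who_served: bool      # staff can identify participants
    tracks_participation: bool  # attendance/dosage is recorded
    has_satisfaction_data: bool
    collects_outcome_data: bool
    sustainability_notes: str = ""
```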
Formative Evaluation Findings
• 21 communities
• 24 counties
• 112 intervention locations
• 101 direct intervention staff
• 1,141 interventions
• 22,343 persons served
• For 760 of the interventions (66.6%), staff knew whom they had served
• 474 (41.5%) reported knowing how much people participated
• 258 (22.6%) reported having satisfaction data
• 316 (27.7%) systematically collected outcome data
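All four percentages use the 1,141 interventions as the denominator; the snippet below simply verifies the arithmetic.

```python
# Verify the reported percentages against the counts
# (denominator: 1,141 interventions).
TOTAL = 1_141
counts = {
    "knew whom they had served": 760,               # reported 66.6%
    "knew how much people participated": 474,       # reported 41.5%
    "had satisfaction data": 258,                   # reported 22.6%
    "systematically collected outcome data": 316,   # reported 27.7%
}
for label, n in counts.items():
    print(f"{label}: {n}/{TOTAL} = {n / TOTAL:.1%}")
```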
Top Ten Interventions
• Education
• Extracurricular activities
• Family activities
• Counseling
• Tutoring
• Recreation
• Prevention
• Training
• Summer activities
• Reinforcement
Summative Findings
• Of the 1,141 interventions:
  • 4 collected academic achievement data for educational enrichment activities; 2 showed a positive, significant impact
  • 3 collected academic achievement data for literacy (tutoring) programs; all 3 showed a positive, significant impact
  • 1 collected data on disciplinary actions for participants in mental health programs; it showed a positive, significant impact
  • None collected data on the impact of childcare on family income and workforce participation
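The slides do not name the statistical test behind "positive, significant impact." One plausible, commonly used approach is a paired t-test on pre- and post-program scores; the sketch below uses invented numbers solely so the example runs.

```python
# Hypothetical sketch of testing for a positive, significant impact
# with a paired t-test on pre/post achievement scores. The scores
# below are invented; the source does not report the actual method.
import numpy as np
from scipy import stats

pre = np.array([71, 68, 74, 65, 70, 69, 73, 66])
post = np.array([75, 70, 79, 68, 74, 72, 77, 70])

t, p = stats.ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.1f}, t = {t:.2f}, p = {p:.4f}")
# A positive mean gain with p < .05 would count as a positive,
# significant impact under this (assumed) criterion.
```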
Conclusions
• It is important for evaluators to understand the complexity of multi-site programs and to assess realistically what a program evaluation can tell decision-makers.
• It is critical for policy makers, program planners, and program managers to hold clear, consistent expectations about program goals and anticipated impact; to communicate those expectations clearly; and to measure processes and outcomes in ways that hold publicly funded initiatives accountable.
• Finally, "…a higher standard of proof for the value of a collaborative initiative should not be required than for existing mainstream programs or state initiatives." (Bruner, 1993)