Partnerships, Alliances, and Coordination Techniques
Building Capacity to Evaluate Partnership Initiatives
February 2008
Facilitated by: The National Child Care Information and Technical Assistance Center (NCCIC)
NCCIC is a service of the Child Care Bureau.
Presented by the National Child Care Information Center
Evaluation of Partnership Initiatives
It is messy, but doable… and it could be fun! This module will show you how to keep your head without losing your shirt!
Session Objectives
Participants will be able to:
• Understand the basics of evaluation: approaches, data collection, and analysis.
• Conduct an assessment of current capacity for evaluating partnership initiatives.
• Determine the purpose and scope of their evaluation.
• Understand the role of partners in making meaning of and communicating evaluation results.
PACT
• PACT is an initiative of NCCIC, a service of the Child Care Bureau, U.S. Department of Health and Human Services.
• PACT gives State, Territory, and Tribal policymakers—particularly Child Care and Development Fund Administrators and their partners—the resources they need to build more comprehensive and collaborative early care and school-age programs serving children and families.
PACT Materials
• PACT Collaborative Leadership Strategies: A Guide for Child Care Administrators and Their Partners
• This Web-based guide contains an introduction and six training modules:
• Fundamentals of Collaborative Leadership
• Creating, Implementing, and Sustaining Partnerships
• Communication Strategies
• Management Strategies for Successful Partnerships
• Financing
• Building Capacity to Evaluate Partnership Initiatives
Objective 1: The Basics of Evaluation …Getting Your Feet Wet!
Goals of Evaluation
• Evaluation is a strategy to identify, monitor, and track progress of the implementation and expected outcomes of a collaborative project.
• The evaluation plan serves as a guide for partners, staff, and others in both day-to-day activities and long-range planning.
It is critical to be clear on the purpose of the evaluation and to match approaches and measures to that purpose!
Benefits of Evaluation
+++ On the plus side +++
A Good Evaluation…
• Sets clear targets and goals
• Provides objective information
• Assists in project management
• Builds public awareness and support
• Improves performance
• Impacts outcomes
• Increases funding
Source: Child Care Partnership Project. (2000). Using results to improve the lives of children and families: A guide for public-private partnerships. Washington, DC: Child Care Bureau, Administration for Children and Families, U.S. Department of Health and Human Services.
Considerations and Cautions
--- On the downside ---
An Ineffective Evaluation…
• Sets demands for significant results too quickly
• Makes unrealistic assumptions about what “caused” change
• Makes it difficult to collect appropriate data, given the current state of early childhood measurement tools
• Causes unintended harm to children or families if results are used inappropriately
• Results in a redirection, realignment, or removal of program activities
Source: Child Care Partnership Project. (2000). Using results to improve the lives of children and families: A guide for public-private partnerships. Washington, DC: Child Care Bureau, Administration for Children and Families, U.S. Department of Health and Human Services.
The ABCs… of Evaluation
The Language of Evaluation: What Terms Confuse You?
• Accountability
• Assessment
• Aggregate
• Beta Level
• Control Group
Source: Child Care & Early Education Research Connections. (n.d.). Research glossary. Retrieved March 25, 2008, from www.researchconnections.org/Discover?displayPage=resources\researchglossary.jsp
Why Work with Partners to Build Capacity for Evaluation?
• Accountability is being required in many sectors:
• In Head Start
• In Child Care
• In Prekindergarten/Education
• In Early Intervention
• Multiple partners are increasingly working together to align initiatives and programs to increase access to, and the effectiveness of, early care and education services.
• A number of States and communities are designing early childhood systems initiatives or developing cross-sector initiatives to meet the multiple needs of families and children and provide more comprehensive services.
Objective 2: Building Capacity for Evaluation ……Is the Water Warm Enough?
Why Is Evaluation Important to You/Your Collaborative Project?
• What specific needs do you have that you would like the evaluation to address?
• What are your goals? What are each partner’s goals?
• What do you think are the benefits? To your organization? To children/families/practitioners?
• What do you think are the challenges? Are costs, capacity, and resources available?
• What are your fears about evaluation?
Considerations in Assessing Your Project’s Capacity for Evaluation
• What progress do you expect?
• What information will help you document gains?
• What data is already available, and what data is needed?
• What capabilities do you have now? What do you need?
• How much time will it take to get the system working well?
• How much $$$$ will it require?
Six Key Strategies to Build Capacity for Evaluation
• Establish a culture of accountability
• Develop a long-range strategic plan
• Partner with researchers and experts
• Ensure data quality
• Engage families & business/legislators
• Communicate results simply and often
Assessing Your Project’s Capacity for Evaluation
From your small group discussion on building capacity:
• What surprised you?
• What elements are your strengths?
• What elements need to be addressed?
• What next steps have you identified?
Common issues to address in building capacity:
• Evaluation expertise
• Costs
Ensuring You Have Evaluation Expertise…
Key partners or an executive committee provide oversight to the evaluation team.
Options for the evaluator’s role:
• An outside evaluator (which may be an individual, research institute, or consulting firm) who serves as the team leader and is supported by in-house staff.
• An in-house evaluator who serves as the team leader and is supported by program staff and an outside consultant.
• An in-house evaluator who serves as the team leader and is supported by program staff.
Source: Office of Planning, Research & Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services. (2006). Chapter 2: What will evaluation cost? In Program manager’s guide to evaluation. Retrieved March 25, 2008, from www.acf.hhs.gov/programs/opre/other_resrch/pm_guide_eval/reports/pmguide/chapter_2_pmguide.html
Evaluation Cost Considerations
Evaluation costs are driven by:
• Evaluation design
• The number of participants assessed
• Standardized measures (number used, assessor training & reliability practices, frequency of assessment)
• Data availability & quality (including automation of data entry & analyses)
• Methods of reporting & communicating results
• Infrastructure for data collection, level of analyses, printing, etc.
Source: Golin, S., Mitchell, A., & Gault, B. (2004). The price of school readiness: A tool for estimating the cost of universal preschool in the states. Retrieved February 24, 2008, from www.iwpr.org/pdf/G713.pdf
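To make these drivers concrete, consider a purely hypothetical, back-of-the-envelope illustration (the program and figures are invented for this example, not drawn from the source): a partnership that assesses 200 children twice a year with one standardized measure, at roughly $50 per administration, would spend about 200 × 2 × $50 = $20,000 on assessments alone, before adding assessor training, reliability checks, data entry and analysis, and reporting. Even a modest design carries recurring costs worth budgeting for up front.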
From Assessing Capacity to Strategic Action
You have conducted a baseline assessment of the current capacity for evaluation… You have considered the costs and expertise needed… Now you are ready to develop a strategic plan for building capacity for evaluation.
Objective 3: Choosing an Evaluation Approach …Wallowing in the Mud!
Considerations for Determining the Scope of Your Evaluation
• What mandates or expectations for evaluation does your partnership project have?
• What is the current status of your evaluation capacity, including resources for funding the evaluation?
• What lessons learned/strengths of the partnership can be used in developing an evaluation approach?
• What are the challenges or “sticky issues” that may impact the success of the evaluation?
• What data do you have for establishing a baseline and tracking outcomes over time and across agencies?
Stages of Evaluation Approaches
• Implementation Evaluation: Did you implement the program as planned? If not, why not? What changes were made?
• Outcome Evaluation: Does the program achieve its intended outcomes? For whom? Did organizational or system structure impact policy, resources, or outcomes?
• Research: Do participants do better than non-participants? Is one programmatic approach more effective than another?
Source: Oregon State University Family Policy Program & Oregon Child Care Research Partnership Project. (2000). Results accountability guidebook. Retrieved February 24, 2008, from www.hhs.oregonstate.edu/familypolicy/occrp/publications/2000-Results-Accountability-Guidebook.pdf
Match Goal and Purpose to Evaluation Approach
The fundamental principle: evaluation approaches should match the purpose and goals of the partners and the initiative.
• They can be as simple or as complex as needed.
• The following examples show the range of complexity and rigor that exists in the field of early care and education. What best meets your needs is up to you!
State Approaches to Evaluation
Leading the Way to Quality Early Care and Education CD-ROM, Literacy and Early Learning/Assessment and Evaluation:
• Florida discusses evaluation of school readiness initiatives.
• Ohio discusses the use of a logic model approach in evaluating an infant-toddler initiative.
• California discusses its Desired Results Accountability System for child care and early education services.
California: Desired Results for Children and Families
• A multi-purpose, multi-year, State-level accountability system: to inform instruction, target technical assistance, and monitor trends in publicly funded programs
• Developmental observation profiles for children birth to age 14 to inform instruction
• Family surveys and program self-assessments to target technical assistance
• State-level aggregated data to monitor trends
• Conducted in partnership with a university and the training system
Source: California Department of Education. (2007). Introduction to desired results. Retrieved March 25, 2008, from www.cde.ca.gov/sp/cd/ci/desiredresults.asp
Oklahoma’s Quality Rating System: Reaching for the Stars
A longitudinal study, with multiple phases and purposes, conducted by the Early Childhood Collaborative of Oklahoma and others:
• 1999: observational study of implementation
• 2001–2002: validation study of centers
• 2003: outcome study to determine the impact of tiered rates on quality and the relative impact of specific indicators on overall quality
• 2004: validation study of family child care homes
Source: Norris, D., Dunn, L., & Dykstra, S. (2003). “Reaching for the stars” center validation study executive summary. Retrieved February 24, 2008, from www.ou.edu/ecco/Executive_Summary.pdf
Maryland’s Model of School Readiness
A multi-purpose, multi-year, State-level accountability system: to inform instruction, target technical assistance, and monitor trends in publicly funded programs
• Each fall, all kindergarten teachers assess children using a modified version of the Work Sampling System and report this data to the Department of Education.
• The Department of Education submits a report based on this and other data to the General Assembly each November about the level of school readiness Statewide.
• The Department of Education, which includes child care, partners with a nonprofit to deliver and assess the training that supports this accountability effort.
Source: Maryland State Department of Education. (n.d.). Maryland model for school readiness. Retrieved March 25, 2008, from www.mdk12.org/instruction/ensure/MMSR/index.html
Ohio Child Care/Head Start Partnership Project
• This is a research study, funded by the Child Care Bureau and conducted in collaboration with State policymakers.
• The goal of the partnership project is to provide high-quality, seamless services to families with low incomes and their children.
• The longitudinal survey research is designed to examine the nature and benefits of partnerships and their impact on outcomes for centers, teachers, and children.
A Systemic View of Child and Family Outcomes in Context
[Diagram: the family and child nested within successively wider contexts: the early care & education setting, the local community, the State context, and the Federal context.]
Assessment and Evaluation: Lessons from Research and Professional Wisdom from the Field
Clips from Child Care Works: Research to Practice, Assessment and Evaluation module:
• Involving stakeholders in program evaluation
• Developing systems of assessment
• Challenges of measuring quality
10 Steps to the Information You Need to Make Good Decisions (and Convince Others, Too!)
1. Determine the purpose and scope
2. Agree on results
3. Select measures
4. Establish a baseline and objective
5. Determine and implement strategies aimed at positive change
6. Develop a performance agreement among the groups responsible
7. Collect data
8. Analyze the data
9. Assess progress and modify strategies and resources
10. Publicize results
Source: The Finance Project. (2002). Accountability systems: Improving results for young children. Retrieved February 24, 2008, from www.financeproject.org/Publications/accountability.pdf
What to Measure in a Partnership Project?
It is important to be clear:
• Is increased collaboration a GOAL, an outcome in and of itself?
• And/or is increased collaboration/resource sharing a STRATEGY to achieve goals?
• And/or is effective administration of a project by multiple partners a CONDITION (theory of change) for success?
Short- & Intermediate-Term Objectives
• The Core Services describe activities designed to meet short- and intermediate-term objectives on the way to meeting the long-term goal.
• Tip/Challenge: As you identify program services, activities, and short- and intermediate-term objectives, continually recheck and loop back to be sure that each element is aligned and reasonably linked to the long-term goal.
What Is a Theory of Change Logic Model?
It is a TOOL to develop a common understanding of:
• Goals
• Vision of how the program will effect change
• Program services
• Outcomes
It serves as a dynamic process to guide program development, implementation, and evaluation/accountability.
How to Develop a Logic Model
Gather key stakeholders’ perspectives on:
• Long-term outcomes
• Theory of change
• Program services and activities
• Short- & intermediate-term outcomes
• Indicators/evidence of progress in meeting outcomes
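A brief hypothetical example may help make these elements concrete (the program and indicators below are illustrative only, not drawn from the PACT materials):
• Long-term outcome: More children enter kindergarten ready to learn.
• Theory of change: Joint training improves teacher practice, which improves classroom quality, which in turn improves child outcomes.
• Program services and activities: Shared Head Start/child care teacher training and on-site coaching.
• Short- & intermediate-term outcomes: Teachers complete training; observed classroom quality improves.
• Indicators/evidence: Number of teachers trained; classroom quality ratings; kindergarten readiness assessment results.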
Objective 4: Collecting Data and Reporting Findings…Making Mudpies!
Data Collection
• Identify data currently being collected to determine the fit with the indicators chosen.
• Review the quality of the data and identify gaps in the data needed to measure progress on the indicators.
• Start small. It’s very easy, and pretty common, to go way overboard on data collection! It will keep you sane, and keep costs reasonable, if you choose a few data sources that have the power to give you the information you need.
Multiple Levels of Data Collection
• System-Level Data: data on key system or partnership indicators
• Program/Service-Level Data: implementation data in the first stage and program outcome data in the second stage
• Individual-Level Data: data on adults, children, or families, often from a sample, and best collected over time with multiple measures
Collect Powerful Data
• Data Power: What are the most accurate and reliable data sources available?
• Proxy Power: Are the indicators clearly within the control of the program, and have they been shown in previous research to predict later gains?
• Communication and Political Power: What outcomes are most important to key stakeholders?
Source: Child Care Partnership Project. (2000). Using results to improve the lives of children and families: A guide for public-private partnerships. Washington, DC: Child Care Bureau, Administration for Children and Families, U.S. Department of Health and Human Services.
Measuring Outcomes in Early Care and Education
• Not all measures of child outcomes predict later outcomes, and many are not sensitive to young children’s dynamic growth or to cultural and linguistic differences.
• Observational measures of program quality are not applicable to all settings and may not adequately capture the nuances and complexity of quality.
• Measures of partnership effectiveness, systemic impact, and system integration are sparse, and attributing causality or impact is difficult.
• Choosing measures and methods to document outcomes is a “fine art”: balancing what is available, appropriate, and useful!
Findings… Meaning… Action
It is all too easy to collect data… but much harder to analyze the findings appropriately, make meaning of them, and use them to take (appropriate) action.
Source: Hebbeler, K. (2006, May). Now comes the fun part: Gleaning meaning from early childhood outcome data. Retrieved March 27, 2008, from www.fpg.unc.edu/~ECO/pdfs/Data%20Meeting%205-24-06.ppt
Findings
• Findings are the numbers, the scores on measures, the summaries of quarterly reports… which in and of themselves are meaningless!
• While numbers are not debatable, it is important to include enough information about the numbers (and the context of the initiative) to make them meaningful.
“Data add substance to what could otherwise be dismissed as anecdotes, while stories add a personal element to cold numbers on a page” (Using Results to Improve the Lives of Children and Families, p. 7).
Hebbeler, 2006
Meaning
• Meaning is the interpretation put on the numbers: Is this finding good news? Bad news? News we can’t interpret?
• Meaning is debatable, and reasonable people can reach different conclusions from the same set of numbers.
• Stakeholder involvement can be helpful in making sense of findings.
Meaning is derived from your goals and your theory of change (why you believe you can achieve results).
Hebbeler, 2006
Reporting Results: Tell the Story
• Identify areas where changes may be needed for future implementation.
• Inform policy and/or funding decisions by telling the “story” of program implementation and demonstrating the impact of the program on participants.
• Build public awareness and support with legislators, parents, and community members.
• Choose a report format that is consistent with your program purpose and appeals to the target audience.
Take Powerful Action
A key role of the partnership team is communicating results and determining how the evaluation results are used:
• To improve the program
• To secure more funding
• To build public awareness
• To plan next steps in the evaluation approach
In Summary: Building Capacity for Evaluation
• You have expertise and resources available to assist you.
• You can take a thoughtful, planned approach to getting the information and data you need.
• You and your partners play a key role in determining the purpose, gathering appropriate resources, providing oversight, and ensuring the information is meaningful and useful.
Closing
• Personal learning plan
• Quality improvement
• Session evaluation