Patterns in Child Outcomes Summary Data: Analytic Approaches and Early Findings from the ENHANCE Project • Cornelia Taylor, Lauren Barton, Donna Spiker • September 19-21, 2011 • Measuring and Improving Child and Family Outcomes Conference, New Orleans, LA
Today’s session • Provide a brief update about ENHANCE • Identify the purpose and approach of the state data study • Describe some preliminary findings from initial states involved in the state data study • Explain how other states could examine their own data in the same way as that presented • Discuss any emerging implications for validity of the COS and for interpreting individual state data
Origins of ENHANCE • Need for outcomes data – challenging to collect • COS process implemented in more than 40 states, with little systematic validation for use in accountability • Investigate… Learn
Early Evidence Belief in potential for COS process to be valid based on: • Existing literature: team-based decision-making can be reliable and valid • Existing literature: teams are effective in identifying individual children’s functioning so that they can plan and deliver appropriate services • Early data from states: pilot sites, small n’s showing similarity in distributions, sensible patterns for subgroups • Anecdotal data from trainers: participants reach decisions fairly easily and consistently
ENHANCE • Project launched by the Early Childhood Outcomes Center (ECO) and SRI International • Funded by the U.S. Dept. of Education, Institute of Education Sciences – July 1, 2009 • Series of studies designed to find out: • the conditions under which the Child Outcomes Summary (COS) process produces meaningful and useful data for accountability and program improvement • the positive and/or negative impact of the COS process on programs and staff • what revisions to the form and/or the process are needed
Four ENHANCE Studies • Comparison with Child Assessments • Team Decision-Making • Provider Survey • State Data Study
Studies 1-3: 34 Project Data Collection Sites • 17 Part C (Birth to 3): Illinois, Maine, Minnesota, New Mexico, Texas, North Carolina • 17 Part B Preschool (3-5): Illinois, Maine, Minnesota, New Mexico, Texas, South Carolina
Comparison with Child Assessments Study Goals • Compare COS ratings to BDI-2, Vineland-II scores • Program Entry • Program Exit • Compare conclusions from COS and assessments Sample • 108 children - birth to 3 • 108 children - 3 – 5 years Study Status • Recruiting families • About ½ of the sample enrolled • See expected variability in sample (ages, disability types) and initial COS ratings/assessment scores
Team Decision-Making Study Goals • Learn more about the implementation of the COS process, including how the team reaches a decision about a rating and what is discussed. • Do COS ratings assigned match the developmental level of the behaviors presented in the meeting? • What is team understanding of outcomes and rating criteria? Sample • 180 children each from Part C & Part B 619 • ½ entry & ½ exit meetings Study Status • Starting data collection now in about ½ the sites • 19 videos received • Expect to start coding videos Summer 2012
Provider Survey Goals • What processes are being used to determine COS ratings? • What is the impact of the COS process on practice? • What have providers learned about the COS? • What else would be helpful? Sample • All providers in the program who participate in the COS process are invited to participate Study Status • Developing survey content • Survey expected Spring 2012
State Data Study Goals • Analyze characteristics of COS data and relationships to other variables • Look for consistency in patterns across states Examples of Questions • Are patterns in COS data across states consistent with those predicted for high quality data? • How are COS ratings related to hypothesized variables (e.g., disability type) and not to other variables (e.g., gender)? • How are team variables related to COS ratings? Sample • All valid COS data within the state for a reporting year • 15-18 states conducting all analyses • Additional states sharing select analyses
State Data Study: Status • Refined procedures for gathering data tables by collecting data from a preliminary group of 6 states • Most states used the procedures and generated the data tables themselves • A few provided formatted data files for SRI to analyze • Beginning to analyze data from that preliminary group • Soon will request data from other states in the state data study, and permission to use relevant data additional states have already analyzed and shared
State Data Study: Preliminary Data from 5 States • 3 Part C (Birth to 3) • 3 Part B Preschool (3-5)
How would these data analyses be conducted? • States would send data to SRI annually • de-identified data files OR • aggregate output or reports from a set of requested analyses • Examples of analyses include • the distributions of entry and exit COSF scores • relationships between outcomes • relationships between outcomes across time • relationships of outcome scores to other factors such as disability and gender
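For states that send de-identified files, analyses like these can be run with a few lines of standard statistical code. Below is a minimal sketch in Python/pandas; the file name and column names (entry_a, exit_a, gender, etc.) are placeholders for illustration, not the actual SRI template fields.

```python
import pandas as pd

# Placeholder file and column names; the real templates come from SRI.
df = pd.read_excel("cos_state_data.xlsx")

# Distributions of entry and exit COSF ratings (1-7) for Outcome A
print(df["entry_a"].value_counts(normalize=True).sort_index())
print(df["exit_a"].value_counts(normalize=True).sort_index())

# Relationship between outcomes across time: paired entry-exit table for Outcome A
print(pd.crosstab(df["entry_a"], df["exit_a"]))

# Relationship of ratings to other factors, e.g., gender
print(pd.crosstab(df["gender"], df["entry_a"], normalize="index"))
```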
What data would I need to submit? • Data collected at entry and exit from Part C and Part B 619 programs • COSF ratings • Additional child descriptors (e.g. race, gender, primary disability) • Variables that describe the setting or composition of the services
How will I submit data? • De-identified data files • Templates developed in MS Excel • Submitted through a secure server • Analyzed data • Table shells developed in MS Word and MS Excel • Submitted through secure server or emailed
Who do I contact for more information? Cornelia Taylor cornelia.taylor@sri.com (650) 859-3092
Entry Rating Expectations • What should entry ratings look like? • Should they differ across outcomes? • Where do most of the ratings fall? • How much should the extremes of the scale be used (1 or 7)?
Entry Data Analysis • The following data are from 3 Part C programs and 2 Part B programs • All data are from 08-09 • The data are entry cohorts, i.e., all children who entered during the federal fiscal year (FFY)
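Because an entry cohort is defined by when children entered rather than when they exited, the first step is to filter records on the entry date. A sketch, assuming a hypothetical entry_date column and placeholder dates for the state's FFY 08-09 reporting window:

```python
import pandas as pd

# Placeholder file and column names for illustration only.
df = pd.read_excel("cos_state_data.xlsx", parse_dates=["entry_date"])

# Placeholder dates; substitute your state's FFY 08-09 reporting window.
ffy_start, ffy_end = pd.Timestamp("2008-07-01"), pd.Timestamp("2009-06-30")
entry_cohort = df[df["entry_date"].between(ffy_start, ffy_end)]

# Per-outcome entry rating distributions for the cohort
for outcome in ["entry_a", "entry_b", "entry_c"]:
    print(outcome, entry_cohort[outcome].value_counts(normalize=True).sort_index())
```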
Part C 08-09 entry ratings across states; Outcome A (chart: percent of children at each rating, 1-7)
Part C 08-09 entry ratings across states; Outcome B (chart: percent of children at each rating, 1-7)
Part C 08-09 entry ratings across states; Outcome C (chart: percent of children at each rating, 1-7)
Things to Notice • The difference in distributions between Part C and Part B is largest for Outcome C • Children in Part B enter with higher ratings
Things to Notice • Variations in patterns across outcomes
Conclusions Across Part C and Part B • More than ½ of all children enter with a COS rating of 3, 4, or 5 across outcomes. • An average of 12% of children enter with the very lowest (1) or the very highest (7) rating across outcomes. • The typical entry distribution has most children toward the middle of the distribution.
Pattern Check: if the distribution of entry scores in your state seems to be heavily weighted toward one end or the other of the distribution. No Action Interpretation: You may be serving a population that is higher or lower functioning than other states. Action Interpretation: Your providers may be systematically misunderstanding the definition of the COS rating points.
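One way a state could operationalize this pattern check is to flag entry distributions with a large share of ratings at either extreme. The sketch below uses an arbitrary 30% cutoff purely for illustration; it is not a threshold from the ENHANCE project.

```python
import pandas as pd

def entry_pattern_check(ratings, threshold=0.30):
    """Flag entry distributions heavily weighted toward either end of the 1-7 scale.
    `ratings`: Series of COS entry ratings; `threshold` is an arbitrary cutoff."""
    share_low = ratings.isin([1, 2]).mean()
    share_high = ratings.isin([6, 7]).mean()
    if share_low >= threshold or share_high >= threshold:
        return "Review: entry ratings cluster at one end of the scale"
    return "Pattern looks typical"

# Example with made-up ratings; in practice, pass a column from the entry cohort
print(entry_pattern_check(pd.Series([3, 4, 4, 5, 2, 6, 3, 4])))
```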
Additional Entry Analysis • Correlations between entry ratings • Cross tabs of entry ratings by: • Program • Primary disability • Race/ethnicity
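A minimal sketch of these additional entry analyses, again with placeholder file and column names (program, primary_disability, race_ethnicity):

```python
import pandas as pd

df = pd.read_excel("cos_entry_cohort.xlsx")  # placeholder file name

# Correlations between entry ratings on the three outcomes (Spearman, since ratings are ordinal)
print(df[["entry_a", "entry_b", "entry_c"]].corr(method="spearman"))

# Cross tabs of Outcome A entry ratings by program, primary disability, and race/ethnicity
for factor in ["program", "primary_disability", "race_ethnicity"]:
    print(pd.crosstab(df[factor], df["entry_a"], normalize="index").round(2))
```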
Part C 08-09 exit ratings across states; Outcome A (chart: percent of children at each rating, 1-7)
Part C 08-09 exit ratings across states; Outcome B (chart: percent of children at each rating, 1-7)
Part C 08-09 exit ratings across states; Outcome C (chart: percent of children at each rating, 1-7)
Part B 08-09 exit ratings across states; Outcome A (chart: percent of children at each rating, 1-7)
Part B 08-09 exit ratings across states; Outcome B (chart: percent of children at each rating, 1-7)
Part B 08-09 exit ratings across states; Outcome C (chart: percent of children at each rating, 1-7)
Part C 08-09 average exit scores across outcomes (state n = 3)
Part B 08-09 average exit scores across outcomes (state n=3)
Things to Notice • Variation in ratings across outcomes • The exit distribution is shifted toward a higher rating than is the entry distribution • For Part B, the average percent of children with a rating of 7 is much higher for Outcome C than for the other two outcomes
Pattern Check: if the distribution of exit scores in your state is not skewed toward the higher end of the rating scale. No Action Interpretation: You may be serving a lower functioning group than other states • If this interpretation is true, it should also be apparent in your entry distribution. Action Interpretation: The children in your programs may not be making expected gains.
Entry-Exit Paired Distribution • Choosing a metric for looking at paired distributions • Progress categories • Side-by-side entry-exit comparisons • Both of the above can be completed using the COS calculator 2.0
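For the progress-category metric, the sketch below shows one common way the a-e categories are derived from entry ratings, exit ratings, and a "made progress" (yes/no) indicator. File and column names are placeholders, and the exact conversion rules should be verified against the COS calculator 2.0 rather than taken from this sketch.

```python
import pandas as pd

def progress_category(entry_rating, exit_rating, made_progress):
    """Rough sketch of the a-e progress-category logic from COS entry/exit ratings.
    Verify against the COS calculator before using for reporting."""
    if exit_rating >= 6:
        return "e" if entry_rating >= 6 else "d"   # maintained vs. reached age-expected functioning
    if exit_rating > entry_rating:
        return "c"                                  # moved nearer to age-expected functioning
    return "b" if made_progress else "a"            # improved but no closer vs. did not improve

# Placeholder file and column names for illustration only
df = pd.read_excel("cos_entry_exit_pairs.xlsx")
df["category_a"] = df.apply(
    lambda r: progress_category(r["entry_a"], r["exit_a"], r["progress_a"] == "yes"),
    axis=1,
)
print(df["category_a"].value_counts(normalize=True).sort_index())

# Side-by-side entry-exit comparison for Outcome A
print(pd.crosstab(df["entry_a"], df["exit_a"]))
```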