This presentation explores questions to establish the validity of accountability systems and provides state examples for analyzing outcome data. It discusses methods to gather, interpret, and use evidence for improvement.
Do My Data Count? Questions and Methods for Monitoring and Improving our Accountability Systems Dale Walker, Sara Gould, Charles Greenwood and Tina Yang, University of Kansas, Early Childhood Outcomes Center (ECO) Marguerite Hornback, Kansas Leadership Project, 619 Liaison Marybeth Wells, Idaho 619 Coordinator Early Childhood Outcomes Center
Acknowledgement: Thanks are due to our Kansas colleagues who assisted with the development, administration and analysis of the COSF Survey and team process videos, and to the Kansas Part C and Kansas and Idaho Part B professionals who participated in the COSF process. Appreciation is also extended to our ECO and Kansas colleagues for always posing the next question.
Purpose of this Presentation • Explore a range of questions to assist states in establishing the validity of their accountability systems • Illustrate with state examples how outcome data may be analyzed • Discuss ways to gather, interpret, and use evidence to improve accountability systems • Information synthesized from the Guidance Document on Child Outcomes Validation, to be distributed soon!
Validity of an Accountability System • An accountability system is valid when evidence is strong enough to conclude: • The system is accomplishing what it was intended to accomplish and is not leading to unintended results • System components are working together toward accomplishing the purpose
What is Required to Validate our Accountability Systems? • Validity requires answering a number of logical questions demonstrating that the parts of the system are working as planned • Validity is improved by ensuring the quality and integrity of the parts of the system • Validity requires continued monitoring, maintenance and improvement
Some Important Questions for Establishing the Validity of an Accountability System • Is fidelity of implementation of measures high? • Are measures sensitive to individual child differences and characteristics? • Are the outcomes related to measures? • What are the differences between entry and exit data? • Are outcomes sensitive to change over time? • Are those participating in the process adequately trained?
What Methods can be used to Assess System Fidelity? • COSF ratings and rating process (including types of evidence used, e.g., parent input) • Team characteristics of those determining ratings • Meeting characteristics or format • Child characteristics • Demographics of programs or regions • Decision-making processes • Training information • Comparing ratings over time
Fidelity: Analysis of Process to Collect Outcomes Data: Video Analysis • Video observation • 55 volunteer teams in KS submitted team meeting videos and matching COSF forms for review • Sample intended to be representative of the state • Videos coded for: • Team characteristics • Meeting characteristics • Evidence used • Tools used (e.g., ECO decision tree)
Fidelity: Analysis of Process to Collect Data Using Surveys • Staff surveys • Presented and completed online using Survey Monkey • 279 were completed • Analyzed by research partners • May be summarized using Survey Monkey or other online data system
Fidelity: Analysis of Process to Collect Data Using State Databases • Kansas provided Part C and Part B data • Idaho provided Part B data • Included: COSF ratings, OSEP categories, child characteristics
Fidelity: Types of Evidence Used in COSF Rating Meetings (videos only) • Child Strengths (67-73% across outcome ratings) • Child Areas to Improve (64-80%) • Observations by professionals (51-73%)
Fidelity: Types of Evidence Used in COSF Rating Meetings (videos and surveys) • Assessment tools • Video: 55% used for all 3 ratings • Survey: 53% used one of Kansas’ most common assessments • Parent Input incorporated • Video: 47% • Survey: 76% • 39% contribute prior to meeting • 9% rate separately • 22% attend COSF rating meeting
Fidelity: How can we interpret this information? • Assessment use • About half are consistently using a formal set of questions to assess child functioning • Parent involvement • Know how much to emphasize in training • Help teams problem-solve to improve parent involvement
Fidelity: Connection between COSF and Discussion (Video) • 67% documented assessment information but did not discuss results during meetings • 44% discussed observations during meetings but did not document them in paperwork
How information about the Process has informed QA activities • Used to improve quality of the process • Refine the web-based application fields • Improve training and technical assistance • Refine research questions • Provide valid data for accountability and program improvement
Are Measures Sensitive to Individual and Group Differences and Characteristics? • Essential feature of measurement is sensitivity to individual differences in child performance • Child characteristics • Principal exceptionality • Gender • Program or Regional Differences
Frequency Distribution for one state’s three OSEP Outcomes for Part B Entry
Frequency Distribution for one state’s three OSEP Outcomes for Part C Entry
Interpreting Entry Rating Distributions • Entry rating distributions • If sensitive to differences in child functioning, should have children in every category • Should have more kids in the middle than at the extremes (1s and 7s) • 1s should reflect very severe exceptionalities • 7s are kids functioning at age level with no concerns; shouldn’t be many receiving services
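The shape checks above can be sketched as a simple screening function. This is a minimal illustration, not part of the COSF process itself; the function name, thresholds, and data are hypothetical.

```python
from collections import Counter

def screen_entry_distribution(ratings):
    """Screen a set of entry ratings (1-7) for the expected shape.

    Returns a list of warning strings; an empty list means the
    distribution passes these rough checks.
    """
    counts = Counter(ratings)
    warnings = []
    # A sensitive measure should place some children in every category.
    empty = [r for r in range(1, 8) if counts[r] == 0]
    if empty:
        warnings.append(f"no children rated {empty}")
    # More children should fall in the middle than at the extremes.
    middle = sum(counts[r] for r in (3, 4, 5))
    extremes = counts[1] + counts[7]
    if middle <= extremes:
        warnings.append("extremes (1s and 7s) outnumber the middle ratings")
    return warnings
```

A state could run this over each outcome's entry ratings as a first pass before looking at the full frequency charts.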
Social Entry Rating by State
Interpreting Exit Ratings • Exit ratings • If distribution stays the same as at entry • Children are gaining at the same rate as typical peers, but not catching up • If distribution moves “up” (numbers get higher) • Children are closing the gap with typical peers • If ratings are still sensitive to differences in functioning, there should still be variability across ratings
Interpreting Social Exit Ratings
How can we interpret changes in ratings over time? • Difference = 0: not gaining on typical peers, but still gaining skills • Difference > 0: gaining on typical peers • Difference < 0: falling farther behind typical peers • Would expect to see more of the first two categories than the last if the system is effectively serving children
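The entry-to-exit difference logic above can be expressed directly. The category labels are paraphrases of the slide, and the (entry, exit) pairs are hypothetical examples.

```python
from collections import Counter

def progress_category(entry_rating, exit_rating):
    """Classify a child's change in rating between entry and exit."""
    diff = exit_rating - entry_rating
    if diff > 0:
        return "gaining on typical peers"
    if diff == 0:
        return "gaining skills, but not closing the gap"
    return "falling farther behind typical peers"

def summarize_progress(pairs):
    """Count children in each progress category from (entry, exit) pairs."""
    return Counter(progress_category(entry, exit) for entry, exit in pairs)
```

If the system is serving children effectively, the first two categories should dominate the summary counts.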
Social Rating Differences by State
Are a State’s OSEP Outcome Scores Sensitive to Progress Over Time? Examples from 2 States
Distributions Across Knowledge and Skills Outcome at Entry and Exit
Distributions Across Social Outcome at Entry and Exit
Comparison of State Entry Outcome Data from 2007 and 2008
Importance of Looking at Exceptionality Related to Outcome • Ratings should reflect child exceptionality because an exceptionality affects functioning • DD ratings should generally be lower than SL ratings because DD is a more pervasive exceptionality
Meets Needs by Principal Exceptionality and COSF Rating
Meets Needs by Principal Exceptionality and OSEP Category
Interpreting Exceptionality Results • Different exceptionalities should lead to different OSEP categories • More SL in E (rated higher to start with; less pervasive and easier to achieve gains) • More DD in D (gaining, but still some concerns; more pervasive and harder to achieve gains)
Gender Differences • Ratings should generally be consistent across gender; if not, ratings or criteria might be biased • Need to ensure that gender differences aren’t really exceptionality differences • Some diagnoses are more common in one gender compared to the other
Entry Outcome Ratings by Gender
Mean Differences and Ranges in the 3 Outcomes by Gender
Gender and Exceptionality
Importance of Exploring Gender Differences by Exceptionality • Because boys and girls are classified as DD, and as SL, at the same rates, rating differences are not the result of exceptionality differences.
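One way to run the check described above is to cross-tabulate exceptionality rates by gender before comparing ratings. The record format and data here are a hypothetical sketch.

```python
from collections import Counter, defaultdict

def exceptionality_rates_by_gender(records):
    """records: iterable of (gender, exceptionality) pairs.

    Returns {gender: {exceptionality: proportion}} so that, e.g., the
    share of boys vs. girls classified DD can be compared before any
    rating difference is attributed to gender itself.
    """
    counts = defaultdict(Counter)
    totals = Counter()
    for gender, exceptionality in records:
        counts[gender][exceptionality] += 1
        totals[gender] += 1
    return {g: {e: n / totals[g] for e, n in c.items()}
            for g, c in counts.items()}
```

If the DD and SL proportions match across genders, as on the slide above, exceptionality mix can be ruled out as the source of any gender difference in ratings.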
Program or Regional Differences in Distribution of Outcome Scores • If programs in different parts of the state are serving similar children, then ratings should be similar across programs • If ratings are different across programs with similar children, check assessment tools, training, meeting/team characteristics
Program or Regional Differences in Distribution of Outcome Scores
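A first-pass screen for the program comparison above is to compare each program's rating proportions; in practice a formal test (e.g., chi-square) would follow. This sketch uses only the standard library, and the data are hypothetical.

```python
from collections import Counter

def rating_proportions(ratings):
    """Proportion of children at each rating (1-7) in one program."""
    counts = Counter(ratings)
    total = len(ratings)
    return {r: counts[r] / total for r in range(1, 8)}

def largest_proportion_gap(program_a, program_b):
    """Largest per-rating difference in proportions between two programs."""
    pa = rating_proportions(program_a)
    pb = rating_proportions(program_b)
    return max(abs(pa[r] - pb[r]) for r in range(1, 8))
```

Programs serving similar children should show small gaps; a large gap flags the program pair for a closer look at tools, training, and team practices.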
Are the 3 Outcomes Related? • Expect patterns of relationships across the three functional outcomes, as compared with traditional developmental domains
Correlations Across Outcomes at Entry
Mean Correlations Between COSF Outcome Ratings and BDI Domain Scores • Social vs. Personal-Social = .65 • Knowledge vs. Cognitive = .62 • Meets Needs vs. Adaptive = .61
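Correlations like those reported above can be computed with a plain Pearson coefficient over matched child records. The pairing of ratings with domain scores below is illustrative, not the actual state data.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5
```

For example, `pearson(cosf_social_ratings, bdi_personal_social_scores)` over children with both measures would reproduce the kind of .65 coefficient shown on the slide.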
Outcome Rating Differences by Measure • Use of different measures may be associated with different ratings because they provide different information • Different measures may also be associated with different exceptionalities
Mean Knowledge and Skills Outcome Differences as a Function of Measure
Interpreting Team and Meeting Characteristics • Team characteristics • Team size and composition • Meeting characteristics • How teams meet • How parents are included
Team Composition • Video: 93% had 2-4 professionals (35% SLP, 30% ECE) • Survey: 85% had 2-4 professionals (95% SLP, 70% ECE)
How Do Teams Complete Outcome Information? • Do teams meet to determine ratings? (survey) • 41% always meet as a team • 42% sometimes meet as a team • 22% members contribute, but one person rates • 5% one person gathers all info and makes ratings • How teams meet at least sometimes (survey) • In person: 92% • Phone: 35% • Email: 33%
What Does Team Information Provide that is Helpful for Quality Assurance? • COSF process is intended to involve teams; this happens some of the time • Teams are creative in how they meet, likely due to logistical constraints • Checks the fidelity of the system (whether it is being used as planned) • If we know how teams are meeting, we can modify training to accommodate
Decision-Making Process Followed by Teams • Decision-making process: • Standardized steps • Consensus reached by teams • Deferring to a leader
What Steps Did Teams Use to Make Decisions? • Use of crosswalks (survey) • 59% reported that their team used them • 94% reported using them to map assessment items and sections to COSF outcomes • ECO decision tree use • Video: 95% • 6% without discussing evidence (yes/no at each step) • Discuss evidence at each step, rate and document • Discuss and document at each step • Survey: 81%