Focus Schools and Special Education Centers
Presentation to MAASE, October 10, 2012
Venessa A. Keesler, Ph.D., Bureau of Assessment and Accountability
Taking a Step Back: Why Do We Do Accountability?
• Three myths; one reality
• Myth #1: To drive reform
• Myth #2: To create education policy
• Myth #3: Because we are gluttons for punishment
• Reality:
  • Accountability metrics/systems are quantitative articulations of the core policy beliefs of the education system
  • They help us measure our progress in meeting those core policy goals
  • They are the measure, not the purpose or the goal
Accountability Landscape: 2012
• A new era of accountability
• Switching from a purely criterion-based system to a normative system
• Criterion-based system: sets average proficiency targets that all schools must meet
• Normative system: identifies the lowest- or highest-performing schools relative to one another
Why the Change?
• Policy imperative for NCLB: all students CAN and SHOULD demonstrate proficiency, which led to a criterion-based system with proficiency targets for all schools and subgroups
• Ten years later: our average achievement is increasing, but we still have students and schools lagging behind
• New policy imperative (ESEA Flexibility): we must target our lowest-performing schools AND our lowest-performing students more specifically and strategically
Why Focus Schools?
• A different metric addresses a different policy goal
• The policy goal is to shine new light on the lowest-performing students within schools
• Priority Schools = lowest-performing schools overall
• Focus Schools = largest within-school gaps
Intersection with Policy Regarding Students with Disabilities
• "All means all"
• Michigan believes all students should have access to high-quality instruction and rigorous content, and that we must have high expectations for all students
• So the accountability articulation of this core policy belief is to include ALL students and ALL schools in the metrics
Quick Reference for Z-Scores
Why Do We Use Z-Scores?
• Z-scores are a standardized measure that lets you compare individual student (or school) data to the state average
• Z-scores allow us to "level the playing field" across grade levels and subjects
• Each z-score corresponds to a value in a normal distribution; a z-score describes how far a value deviates from the mean, in standard deviations
• What you need to know: z-scores are used throughout the ranking to compare a school's value on a given component to the average value across all schools
What is a Z-Score?
[Figure: number line from -3 to +3, centered at 0 (the state average); values to the left are worse than the state average, values to the right are better]
• Z-scores are centered around zero
• Positive numbers mean the student or school is above the state average
• Negative numbers mean the student or school is below the state average
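The number line above can be made concrete with a small computation. This is a generic sketch of the standard z-score formula; the score, mean, and standard deviation below are invented for illustration and are not taken from any Michigan assessment.

```python
def z_score(value, mean, std_dev):
    """Distance of a raw value from the mean, measured in standard deviations."""
    return (value - mean) / std_dev

# A student scores 420 on a test with a (hypothetical) statewide mean of 400
# and standard deviation of 25:
print(z_score(420, 400, 25))  # 0.8 -> better than the state average
print(z_score(350, 400, 25))  # -2.0 -> very far below the state average
```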
Z-Score Examples
• Your school has a z-score of 1.5: you are better than the state average.
• Your school has a z-score of 0.2: you are better than the state average, but not by a lot.
• Your school has a z-score of -2.0: you are very far below the state average.
How Do We Get Standardized Scale Scores for Each Student?
• Step #1: Take each student's score on the test they took and compare that score to the statewide average for students who took that same test in the same grade and year
• This creates a student-level z-score for each student in each content area
• Compare:
  • MEAP to MEAP
  • MEAP-Access to MEAP-Access
  • MME to MME
  • MI-Access:
    • Participation to Participation
    • Supported Independence to Supported Independence
    • Functional Independence to Functional Independence
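Step #1 above can be sketched as follows. This is a hypothetical illustration, not the state's actual computation: the record layout, student IDs, and scale scores are invented, and for simplicity the mean and standard deviation are computed from the small sample here, whereas the real system would use statewide statistics for each test, grade, and content area.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Invented example records: (student_id, test, grade, subject, scale_score)
records = [
    ("A", "MEAP", 4, "math", 410),
    ("B", "MEAP", 4, "math", 390),
    ("C", "MEAP", 4, "math", 400),
    ("D", "MI-Access FI", 4, "math", 62),
    ("E", "MI-Access FI", 4, "math", 58),
]

# Group scores by (test, grade, subject) so MEAP is only compared to MEAP,
# MI-Access Functional Independence only to Functional Independence, etc.
groups = defaultdict(list)
for _, test, grade, subject, score in records:
    groups[(test, grade, subject)].append(score)

# Per-group mean and standard deviation (statewide values in the real system).
stats = {key: (mean(vals), pstdev(vals)) for key, vals in groups.items()}

# Student-level z-score: distance from the group mean in standard deviations.
z_scores = {}
for sid, test, grade, subject, score in records:
    grp_mean, grp_sd = stats[(test, grade, subject)]
    z_scores[sid] = (score - grp_mean) / grp_sd

print(z_scores)  # e.g. student D is standardized only against MI-Access FI takers
```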
What Do We Do with Those Standardized Scores?
• Step #2: Once each student has a z-score for each content area (based on the test they took), we take all of the students in each school and rank order the students within the school
• The z-scores come from different tests and compare students to the statewide average for that grade, test, and subject
• But they can now be combined for the school
• Step #3: Add up all the z-scores and take the average. This is now the average standardized student scale score.
• Step #4: Define the top and bottom 30% subgroups, based on that rank ordering
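Steps #2 through #4 can be sketched as below. The student IDs and z-scores are invented for illustration; details such as how ties are broken or how the 30% cutoff is rounded are assumptions here, not the state's published rules.

```python
# Invented z-scores for ten students in one school.
school_z = {"s1": 1.2, "s2": 0.4, "s3": -0.3, "s4": 0.9, "s5": -1.5,
            "s6": 0.1, "s7": -0.8, "s8": 0.6, "s9": -0.2, "s10": 1.0}

# Step #2: rank order students within the school (highest z-score first).
ranked = sorted(school_z, key=school_z.get, reverse=True)

# Step #3: the school's average standardized student scale score.
avg = sum(school_z.values()) / len(school_z)

# Step #4: top and bottom 30% subgroups based on the rank ordering
# (simple rounding assumed for the cutoff).
n = round(len(ranked) * 0.30)
top_30 = ranked[:n]
bottom_30 = ranked[-n:]

print(avg)        # 0.14
print(top_30)     # ['s1', 's10', 's4']
print(bottom_30)  # ['s3', 's7', 's5']
```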
Example: for a school with 15 students, sum all 15 z-scores and divide by 15; here the average z-score (average standardized student scale score) is 0.28.
[Figure: the rank-ordered student list split into Top 30% and Bottom 30% subgroups]
Implications for SWDs and Center Programs
• Students are compared only with other students who took the same assessment (Participation to Participation, etc.)
• All schools are treated the same
• The question is not whether center programs have a gap, but whether they have some of the largest gaps
• Don't assume the bottom 30% is only one type of student; you can check the student data file
Final Point
• The accountability system will not pick and choose between students and/or schools; it will apply the same rules to all students and schools
• The accountability system does not decide when to deviate from this; core educational policy does
• We need to continue working to make sure that the metrics mirror the policy goals
Contact Information
Venessa A. Keesler, Ph.D.
Evaluation, Research and Accountability
Bureau of Assessment and Accountability
keeslerv@michigan.gov or mde-accountability@michigan.gov