The New MA Educator Evaluation Framework: District-Determined Measures and Student and Staff Feedback
ASE June Statewide Conference
June 10, 2013
Ron Noble, Educator Evaluation Project Co-Lead
Agenda
• Setting the Stage
• 2012-2013 Implementation
• Lessons Learned
• On the Horizon
  • District-Determined Measures
  • Student and Staff Feedback
• Q&A
Setting the Stage
When policy and practice must move faster than research and development, where do you begin?
ESE Philosophy:
• Don't let perfection be the enemy of the good: the work is too important to delay.
• Understand this is just the beginning: we will be able to do this work with increasing sophistication each year.
• Phase in implementation: take advantage of emerging research, resources, and feedback from the field.
Questions for Policy Makers
Are variations in contributions measurable? How should we use the MCAS Alternate Assessment? How do we differentiate without creating "two systems"? Who IS the evaluator?
• Attribution: "When crediting teachers for student learning, how should the individual contributions of teachers acting in a coteaching or consultant role be determined?"
• Assessments: "How can the contributions to student achievement be accurately measured for teachers instructing special populations for which alternative standards and/or assessments are used?"
• Educator differentiation: "Are the key features of teacher effectiveness for specialized personnel, such as special education teachers, different… and should those unique features lead to additional or different content on observation protocols, student growth assessments, or alternative instruments?"
• Evaluator training: "When rating special education teachers… using an observation protocol or alternative instrument, what special training, if any, do evaluators need?"
Holdheide, L.R., Goe, L., & Reschly, D.J. (2010). Challenges in Evaluating Special Education Teachers and English Language Learner Specialists. National Comprehensive Center for Teacher Quality.
2012-2013 Implementation
• 234 Race to the Top Districts
• At least 50% of educators
• Summative Performance Rating only
• 5-Step Evaluation Cycle
• June data reporting (EPIMS) – 6 data elements (sketched below):
  • Rating on Standard I
  • Rating on Standard II
  • Rating on Standard III
  • Rating on Standard IV
  • Overall Summative Performance Rating
  • Professional Teacher Status (Y/N)
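As a concrete illustration of that reporting payload, here is a minimal sketch of one educator's record, assuming the framework's four performance levels (Exemplary, Proficient, Needs Improvement, Unsatisfactory). The class and field names are hypothetical, not official EPIMS element codes.

```python
from dataclasses import dataclass

# Hypothetical sketch of the six June EPIMS data elements listed above.
# Field names and the four-level rating scale are illustrative.
VALID_RATINGS = {"Exemplary", "Proficient", "Needs Improvement", "Unsatisfactory"}

@dataclass
class EducatorEvaluationRecord:
    standard_1_rating: str             # Rating on Standard I
    standard_2_rating: str             # Rating on Standard II
    standard_3_rating: str             # Rating on Standard III
    standard_4_rating: str             # Rating on Standard IV
    overall_summative_rating: str      # Overall Summative Performance Rating
    professional_teacher_status: bool  # Professional Teacher Status (Y/N)

    def validate(self) -> None:
        """Raise if any rating falls outside the assumed four-level scale."""
        ratings = (self.standard_1_rating, self.standard_2_rating,
                   self.standard_3_rating, self.standard_4_rating,
                   self.overall_summative_rating)
        for value in ratings:
            if value not in VALID_RATINGS:
                raise ValueError(f"Unexpected rating: {value!r}")
```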
5-Step Evaluation Cycle
• Every educator is an active participant in an evaluation
• Process promotes collaboration and continuous learning
• Process applies to all educators
Educator Evaluation Spring Convening: Connecting Policy, Practice, and Practitioners
• May 29, 2013
• Over 700 participants from district teams (RTTT and non-RTTT) and educator preparation programs
• Key messages:
  • Integrate with other key district initiatives
  • Opportunity to strengthen labor-management relations
  • Though difficult, it's the right work
On the Horizon
• District-Determined Measures
• Student and Staff Feedback
District-Determined Measures: Key Terms
• Student Impact Rating – a rating of high, moderate, or low for an educator's impact on student learning
• District-Determined Measures – measures of student learning, growth, and achievement that will inform an educator's Student Impact Rating
Student Impact Rating Regulations
• Evaluators must assign a rating based on trends (at least 2 years) and patterns (at least 2 measures)
• Options – 603 CMR 35.07(1)(a)(3-5):
  • Statewide growth measure(s)
  • District-Determined Measure(s) of student learning comparable across grade or subject district-wide
  • For educators whose primary role is not as a classroom teacher, the appropriate measures of the educator's contribution to student learning, growth, and achievement set by the district
A hedged sketch of the trends-and-patterns rule follows.
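The sketch below checks the regulatory floor (at least two years, at least two measures) and rolls measure outcomes up to a single rating. The regulations do not prescribe an aggregation rule, so the median rule and the low/moderate/high coding here are assumptions for illustration only, not ESE policy.

```python
from statistics import median

# Illustrative only: the regulations require a trend (at least 2 years)
# and a pattern (at least 2 measures) but leave the aggregation rule to
# evaluator judgment. The median rule below is a stand-in.
ORDINAL = {"low": 0, "moderate": 1, "high": 2}
LABELS = {0: "low", 1: "moderate", 2: "high"}

def student_impact_rating(outcomes_by_year: dict) -> str:
    """outcomes_by_year maps a school year to a list of per-measure
    outcomes, each coded 'low', 'moderate', or 'high'."""
    if len(outcomes_by_year) < 2:
        raise ValueError("Need a trend: at least two years of data")
    for year, measures in outcomes_by_year.items():
        if len(measures) < 2:
            raise ValueError(f"Need a pattern: at least two measures in {year}")
    scores = [ORDINAL[m] for ms in outcomes_by_year.values() for m in ms]
    return LABELS[round(median(scores))]

# Example: two years, two measures per year -> "high"
print(student_impact_rating({2014: ["moderate", "high"],
                             2015: ["high", "high"]}))
```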
Student Impact Rating Regulations
Why focus on growth?
• Level playing field
• Fairness
• Achievement measures may be acceptable when the district judges them to be the most appropriate or feasible measure for certain educators
Revised Implementation Timeline
• Commissioner's Memo – 4/12/13
• 2013-2014 – districts pilot and identify DDMs
• 2014-2015 – districts implement DDMs and collect the first year of trend data
• 2015-2016 – districts collect the second year of trend data and issue Student Impact Ratings for all educators
• Districts positioned to accelerate the timeline should proceed as planned.
• Guidance and resources to support districts with the identification of DDMs are available at http://www.doe.mass.edu/edeval/ddm/
Revised Implementation Timeline: Minimum Piloting Requirements
• Early grade (K-3) literacy
• Early grade (K-3) math
• Middle grade (5-8) math
• High school writing to text
• Traditionally non-tested grades and subjects (e.g., fine arts, music, physical education)
If a district is unable to identify a DDM in the grades and subjects listed above, the district must pilot one of ESE's exemplar DDMs, to be released in summer 2013.
Recommended Steps for Districts
1. Identify a team of administrators, teachers, and specialists to focus and plan the district's work on District-Determined Measures.
2. Complete an inventory of existing assessments used in the district's schools (see the sketch below).
3. Identify and coordinate with partners that have capacity to assist in the work of identifying and evaluating assessments that may serve as District-Determined Measures.
Source: Quick Reference Guide: District-Determined Measures
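To make step 2 concrete, here is a hypothetical sketch of an inventory record and a first-pass screen for DDM candidates. The fields are illustrative stand-ins for the kinds of questions districts weigh, not an official schema from the Quick Reference Guide.

```python
from dataclasses import dataclass

# Hypothetical inventory record for step 2; fields are illustrative.
@dataclass
class AssessmentInventoryEntry:
    name: str
    grade_span: str                   # e.g., "K-3"
    subject: str                      # e.g., "literacy"
    measures_growth: bool             # supports a pre/post or other growth design
    administered_district_wide: bool  # comparable across the district?

def ddm_candidates(inventory):
    """First screen: keep assessments comparable district-wide that can
    support a growth interpretation, per 603 CMR 35.07(1)(a)."""
    return [a for a in inventory
            if a.measures_growth and a.administered_district_wide]

# Example use:
inventory = [
    AssessmentInventoryEntry("Early literacy benchmark", "K-3", "literacy", True, True),
    AssessmentInventoryEntry("Classroom art portfolio", "5-8", "fine arts", True, False),
]
print([a.name for a in ddm_candidates(inventory)])  # ['Early literacy benchmark']
```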
ESE Supports
• WestEd is supporting ESE with next steps in implementing the Commonwealth's Model System for Educator Evaluation
• Two broad categories of work:
  • Supporting development of anchor standards in almost 100 separate grades/subjects or courses
  • Identifying and evaluating promising measures, tools, tests, and rubrics
• Work to be completed by mid-August
ESE Supports
• Supplemental guidance on the selection of DDMs and the process of determining an Impact Rating
• DDM and Assessment Literacy Webinar Series (March – December)
• Technical Guide A (released May 2013) focuses on selecting high-quality assessments and includes the Assessment Quality Checklist and Tracking Tool
• Technical Guide B (expected August 2013) will focus on measuring growth
ESE Supports
[Slide screenshot: the Assessment Quality Checklist, with a control to transfer assessment information to the Tracking Tool]
DDMs: Request for Feedback
• Attribution: How can ESE best support districts in developing attribution policies related to the determination of Student Impact Ratings, particularly for co-teachers, consulting teachers, and other scenarios where more than one teacher contributes to student learning, growth, and achievement?
• Movement of Students: Due to highly specialized and often changing needs, the population of children identified as needing special education services fluctuates annually, sometimes significantly, and mostly in the elementary grades. This fluctuation means students move in and out of special education classes and may not receive special education instruction for an entire year. How should ESE recommend districts take student movement into account when determining special educators' Student Impact Ratings?
• Selecting Assessments: What are some considerations ESE should be aware of when providing guidance on the selection of measures of student growth to be used in determining special educators' Student Impact Ratings? Please include specific examples of measures that would or would not be appropriate, and why.
Student and Staff Feedback
• Revised Implementation Timeline: Beginning in the 2014-2015 school year, districts will include student feedback in the evaluation of all educators and staff feedback in the evaluation of all administrators.
• During the 2013-2014 school year, ESE will work with districts to pilot/field test model survey instruments.
National Overview
A growing number of states are currently using or preparing to use student surveys in educator evaluations:
Alaska, Arizona, Colorado, Delaware, Georgia, Hawaii, Idaho, Kentucky, Maine, Massachusetts, Michigan, Mississippi, Missouri, New Jersey, New York, North Carolina, Rhode Island, Washington
Why Use Student Surveys in Educator Evaluations?
• Perception surveys round out a multiple-measure evaluation system
• Research also finds that student survey results are correlated with student achievement
• The Measures of Effective Teaching Project found students' perceptions are reliable, stable, valid, and predictive
• Surveys may be the best gauge of student engagement
• When asked which measures are good or excellent at assessing teacher effectiveness, teachers reported:
  • District standardized tests (56 percent)
  • Principal feedback (71 percent)
  • Students' level of engagement (92 percent)
What Students Say…
• MA's State Student Advisory Council and six regional student advisory councils provide a unique feedback loop for students
• MA Student Advisory Council focus groups were overwhelmingly positive toward soliciting student input through surveys
• MA students want to help teachers improve
• MA students are excited about the prospect of being surveyed for this purpose
• MA students offered thoughtful precautions about survey use:
  • Use surveys for teacher goal-setting
  • Consider making survey feedback visible only to teachers
  • Provide third-party screening of any open-ended questions
Surveys as a Form of Feedback
Considerations when using surveys of classroom/school experiences:
• Students may lack the cognitive ability or maturity to respond reliably
• Surveys could become a popularity contest or "rate-your-teacher.com"
• Survey results could be misused by evaluators
Benefits of surveys of classroom/school experiences:
• They offer valuable insight from those with first-hand experience
• They empower and engage survey recipients, sending a signal that their input is valued
• They are comparatively inexpensive
National Perspective – Lessons Learned
• The more immediate the feedback, the better
• The more flexibility teachers have to administer surveys when they wish, the better
• Surveys for early grades and special populations require special attention
• To the extent that surveys are used for high-stakes decisions at all, this should happen only after they have been used effectively and reliably, and educators have grown comfortable with them, in a low-stakes setting
• When used for formative purposes, surveys are generally seen as a good thing
Perspectives & Considerations
Key areas for state or district consideration:
1. Determining survey samples (a sampling sketch follows this list)
2. Timing of survey administration
3. Reporting of survey results
4. Using survey results in evaluations
5. Considerations for pre-readers, special education students, and English Learners
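For consideration 1, a district might start with something as simple as a seeded random sample per roster, surveying small classes in full. This sketch is purely illustrative; the 50% sampling rate and 10-student floor are invented parameters, not ESE guidance.

```python
import random

# Illustrative sketch for consideration 1 (determining survey samples).
def draw_survey_sample(roster, rate=0.5, floor=10, seed=0):
    rng = random.Random(seed)      # fixed seed for a reproducible draw
    k = max(floor, round(len(roster) * rate))
    if k >= len(roster):
        return list(roster)        # small class: survey everyone
    return rng.sample(roster, k)

# Example: a 24-student roster yields a 12-student sample.
students = [f"student_{i}" for i in range(24)]
print(len(draw_survey_sample(students)))  # 12
```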
Student and Staff Feedback: Request for Feedback
• Source of Evidence: In what way or ways should ESE recommend that student and staff feedback be used as a source of additional evidence relevant to one or more Performance Standards?
• Accommodations: What types of arrangements are most appropriate for special populations, i.e., pre-readers, students with limited English proficiency, and students with disabilities, so that their feedback can be taken into account as well?
• Data Collection Tools: In addition to perception surveys, what other types of data collection tools for capturing student feedback should ESE recommend, and for what populations would these tools be most useful?
Additional Questions?
Ron Noble – rnoble@doe.mass.edu or 781.338.3243