Massachusetts Educator Evaluation System
Paul Toner, MTA, President
Heather Peske, ESE, Associate Commissioner for Ed Quality
Teachers Union Reform Network Conference, November 1, 2013
Agenda for Today • History • Description of Massachusetts’ Educator Evaluation Framework • Focus on the role of student growth in the system • Lessons Learned (so far!) • Discussion
History & Context • Massachusetts Context • 2 unions (MTA/NEA and AFTMA) • About 400 districts (including charter schools) • 1,800 schools • 79,000 teachers • 950,000 students
History • Pre-RTTT: ESE begins discussions about revamping teacher evaluation to include student learning outcomes. • RTTT announced: 2009 • Achievement Gap Bill: January 2010 • MTA Annual Meeting: Yes, we should do RTTT. • 40-person Ed Eval Task Force announced (Paul and Heather serve) • MTA unveils proposed Evaluation Framework (included student learning outcomes as part of a multi-measure evaluation): December 2010 • Board approves Educator Evaluation Framework: June 2011 (includes much of the MTA Framework)
With its emphasis on professional judgment, the Massachusetts model gives evaluators more flexibility in determining individual performance ratings than they would otherwise have under a system that imposes numerical weights or values to individual components of an evaluation. In contrast to formulaic systems that calculate ratings based on set values or percentages, this system allows evaluators to be responsive to local context or individual needs, emphasize trends and patterns of practice rather than rely on individual data points, and better target feedback and resources to individual educators. All of these factors contribute to a more holistic, comprehensive assessment of educator practice that is designed to promote an ongoing cycle of continuous improvement. This system also assumes at its heart that educators are professionals with critical knowledge, skills, and judgment necessary to make each and every evaluation meaningful and productive. --ESE Guidance
[Framework diagram: sources of evidence (a practice goal, a student learning goal, and the four Standards: Curriculum, Planning, and Assessment; Teaching All Students; Family Engagement; Professional Culture) inform judgments about practice and student learning, which determine the Summative Performance Rating and the Formative/Summative Evaluation rating.]
Student Impact Rating • Evaluators must assign a rating based on trends (at least 2 years) and patterns (at least 2 measures) • Options: • Statewide growth measure(s) must be used where available (MCAS SGP for math and ELA, grades 3-10) • District-Determined Measure(s) of student learning, comparable across a grade or subject district-wide (can be off-the-shelf, teacher- or district-created, or chosen from ESE exemplars)
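The sketch below is an illustrative Python example only, not an ESE or MTA tool; the Measure class, function name, and sample data are hypothetical. It simply encodes the minimum-evidence rule stated above: trends across at least 2 years, patterns across at least 2 measures, and the statewide growth measure (MCAS SGP) included where it is available.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    """One measure of student learning used for the Student Impact Rating (hypothetical model)."""
    name: str                      # e.g. "MCAS SGP - Math" or a district-determined measure (DDM)
    years: list[int]               # school years for which results are available
    is_statewide_sgp: bool = False # True if this is the statewide growth measure (MCAS SGP)

def impact_rating_evidence_problems(measures: list[Measure], sgp_available: bool) -> list[str]:
    """Return a list of unmet requirements; an empty list means the evidence rule is satisfied."""
    problems = []
    if len(measures) < 2:                                   # patterns: at least 2 measures
        problems.append("Need at least 2 measures (patterns).")
    years = {y for m in measures for y in m.years}
    if len(years) < 2:                                      # trends: at least 2 years
        problems.append("Need results from at least 2 years (trends).")
    if sgp_available and not any(m.is_statewide_sgp for m in measures):
        problems.append("Statewide growth measure (MCAS SGP) must be used where available.")
    return problems

# Hypothetical example: a grade 5 math teacher with two years of MCAS SGP plus one DDM.
evidence = [
    Measure("MCAS SGP - Math", years=[2012, 2013], is_statewide_sgp=True),
    Measure("District unit assessment (DDM)", years=[2013]),
]
print(impact_rating_evidence_problems(evidence, sgp_available=True))  # -> [] (requirements met)
```

Note that this sketch checks only whether enough evidence has been gathered; the actual impact rating remains a judgment the evaluator makes from the trends and patterns themselves.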
DDM Key Questions • Is the measure aligned to content? • Does it assess what the educators intend to teach and what’s most important for students to learn? • Is the measure informative? • Do the results tell educators whether students are making the desired progress, falling short, or excelling? • Do the results provide valuable information to schools and districts about their educators?
DDMs: 4 Key Messages • DDMs are part of a multidimensional framework. • Decisions about an educator’s impact or effectiveness will never be based on the results of a single DDM. • Focus is on students, not just educators. • DDMs must yield information that will be useful to educators in improving student outcomes. • This is about building capacity. • DDMs give districts a good reason to consider ways to refine and improve existing assessment practices. • Teachers have the necessary skills to lead the process of identifying DDMs – you can do this! • Many districts will have success leveraging teacher-developed assessments to develop DDMs.
Lessons Learned (so far!) • Educators must be at the table. • Trust is critical. • Leadership is critical. • Sometimes, we disagree without being disagreeable. • Sometimes, we have to have a sense of humor.
Lessons Learned (so far!), cont’d… • We both have stakeholders to whom we must respond. • Sometimes, we have to compromise. • The perfect cannot be the enemy of the good. • The outcomes will be better if we work together.
And finally, • The outcomes will be better if we work together.
Resources • ESE website on Educator Evaluation • http://www.doe.mass.edu/edeval/ • MTA website on Educator Evaluation and Toolkit • http://www.massteacher.org/news/archive/2012/evaluation_guidance.aspx • http://www.massteacher.org/advocating/Evaluation.aspx
Discussion and Thanks! • Paul Toner, President, MTA • ptoner@massteacher.org • 617.878.8214 (office line) • Heather Peske, Associate Commissioner for Educator Quality, ESE (Massachusetts) • hpeske@doe.mass.edu • 781.605.5162 (cell)