1. Assessing Teacher Effectiveness Charlotte Danielson
http://charlottedanielson.com/theframeteach.htm
charlotte_danielson@hotmail.com
2. The Framework for Teaching Charlotte Danielson Why Assess Teacher Effectiveness?
Quality Assurance
Professional Learning
3. Assessing Teacher Effectiveness, Charlotte Danielson Defining Effective Teaching
Two basic approaches:
Teacher practices, that is, what teachers do, how well they do the work of teaching
Results, that is, what teachers accomplish, typically how well their students learn
4. Assessing Teacher Effectiveness, Charlotte Danielson Defining What Teachers Do Two basic approaches:
As judged by internal assessors, within the school or district, based on specific criteria
As judged by external assessors, for example National Board Certification
5. Assessing Teacher Effectiveness, Charlotte Danielson Assumptions of Defining Good Teaching Based on What Teachers Do There is consensus on what excellent teachers do, that is, on standards of practice
Teachers and administrators can accurately recognize exemplary practice in different contexts
School leaders have the skills to promote excellent teaching with their teachers
These assumptions are difficult, but not impossible, to realize.
6. Assessing Teacher Effectiveness, Charlotte Danielson Teacher Evaluation System Design
Given what I have said thus far, we can think of two continua related to evaluation systems: one concerning the level of stakes (in the form of licensing, employment, or compensation) and the other concerning the rigor of the system (the clarity of the criteria, the design of the items to be assessed, the training of the assessors, and so on). If one maps one continuum against the other, the result is a graph with four quadrants like this one. (Show the graph.)
In the quadrant where both the stakes and the rigor are low (for example, in most mentoring programs), there are no negative consequences of the low rigor. That is, the mentoring program may not be as good as it might be, but no one is harmed. In systems with both high stakes and high rigor (for example, where the assessors go through extensive training and must pass a proficiency test, as in Praxis III and National Board Certification), the result is a system with high levels of credibility and defensibility.
The difficulty arises, I think, where the system has high stakes but low rigor (and therefore low defensibility and credibility). In those situations there is opportunity for harm, mischief, and abuse. Those are the systems that really worry me. I also wonder whether the infrastructure required to establish and maintain a system of high rigor is worth the benefits. It will be interesting to see the situations in which it turns out to be worth it.
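A rough sketch of the quadrant graph referred to above, reconstructed from these notes (the graph itself is not reproduced here, and the low-stakes / high-rigor quadrant is not described in the notes, so it is left blank):

                 Low rigor                                High rigor
Low stakes       Most mentoring programs; low rigor       (not described)
                 does no harm
High stakes      Low credibility and defensibility;       Praxis III, National Board;
                 opportunity for harm and abuse           high credibility and defensibility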
8. Assessing Teacher Effectiveness, Charlotte Danielson Defining What Teachers Accomplish
Typically linked to student achievement on state-wide assessments
Because of the importance of out-of-school factors, validity and equity demand value-added measures
Recent approaches encourage classroom-based assessments, school/district end-of-course exams, etc.
9. Assessing Teacher Effectiveness, Charlotte Danielson Assumptions of Defining Good Teaching Based on Student Test Scores Available assessments include all valuable learning
Assessments are available for all teachers
In preparing students for the assessments, teachers will use good instructional strategies
(That is, teaching to the test is good teaching)
Statistical techniques can attribute student learning to individual teachers
These assumptions are questionable
10. Assessing Teacher Effectiveness, Charlotte Danielson Negative Consequences of Defining Effectiveness Based on Test Scores Even if the assumptions are satisfied, and especially if the stakes are high:
Cheating, by teachers or administrators
Narrowing the curriculum to what is assessed, and the manner in which it is assessed
If student achievement is defined as the percentage who exceed a standard, teachers concentrate their efforts on those close to the line, shortchanging others
11. Unintended (but negative) Consequences of Assessing Teacher Practice In their concern to look good on the rubric, especially if the stakes are high:
Teachers become legalistic, parsing the words, defending their performance
Teachers adopt a low-risk approach, unwilling to try new strategies
Teachers are unwilling to accept challenging students in their classes
Teachers may be reluctant to share materials, expertise, etc.
12. Assessing Teacher Effectiveness, Charlotte Danielson Unintended (but positive) Consequences of Assessing Teacher Practice Training for teachers and assessors encourages them to better understand good teaching
Results of the assessment provide specific feedback for teachers on where they should focus their improvement efforts
The assessment procedures themselves can promote professional learning
13. Assessing Teacher Effectiveness, Charlotte Danielson Contributors to Teacher Learning
Self-assessment
Reflection on practice
Professional conversation
All done in an environment of trust
14. Assessing Teacher Effectiveness, Charlotte Danielson Defining What Teachers Do: The Four Domains
Domain 1: Planning and Preparation
Domain 2: The Classroom Environment
Domain 3: Instruction
Domain 4: Professional Responsibilities
15. Assessing Teacher Effectiveness, Charlotte Danielson The Framework for Teaching, Second Edition