Chapter 3: Rubrics
What Is a Rubric? • The criteria used for judging students’ performance • Guidelines by which a product is judged (Wiggins 1996) • Explain the standards for acceptable performance (Herman, Aschbacher, and Winters 1996) • Ways for teachers to communicate expectations for students’ performance
Benefits of Using Rubrics • Increase the consistency of scoring. • Improve instruction. • Let students know what is important. • Help students make sense of the learning.
Criteria for Rubrics • Clear criteria are essential in performance-based assessments so students’ work can be judged consistently (Arter 1996). • Product criteria are what students produce or what they are able to do at a certain time. • Process criteria are critical elements necessary for correct performance. • Progress criteria indicate how much students have improved on a learning curve. • When using product criteria to judge student performance, also include process criteria.
Simple Scoring Guides • Checklists (performance lists) contain a list of criteria that must be seen for a performance or product to meet expectations. • Good for peer or self-assessments. • Scorers are concerned only with whether the characteristic is demonstrated. • Keep items on a checklist to a minimum; too many make the checklist difficult to use. (continued)
Simple Scoring Guides (continued) • When using peer checklists, teach the students how to observe. • Make sure that students understand what all the terms mean. • Show examples of the skill being done correctly. • Show examples of what an error looks like. • Emphasize that the student is trying to help the partner improve a skill, not just earn a grade. • Younger students should have fewer items.
Point System Scoring Guides • Similar to checklists, but points are awarded for the various criteria on the list. • There’s no judgment of quality. • Offers students feedback based on points. • Teachers can add up points earned and convert the total to a grade. • More points can be awarded for a certain trait to provide some emphasis.
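As a minimal sketch of how a point system works (the criteria names, point values, and grade cutoffs here are hypothetical examples), the scorer checks off the criteria that were observed, sums the points, and converts the total to a grade:

```python
# A minimal sketch of a point-system scoring guide.
# Criteria names, point values, and grade cutoffs are hypothetical examples.

CRITERIA = {
    "ready position": 2,      # weighted more heavily to emphasize this trait
    "steps to the ball": 1,
    "follows through": 1,
    "returns to base": 1,
}

GRADE_CUTOFFS = [(5, "A"), (4, "B"), (3, "C"), (0, "D")]

def score(observed):
    """Sum points for each criterion the scorer observed (no quality judgment)."""
    return sum(points for name, points in CRITERIA.items() if name in observed)

def to_grade(total):
    """Convert earned points to a letter grade."""
    for cutoff, grade in GRADE_CUTOFFS:
        if total >= cutoff:
            return grade
    return "F"

total = score({"ready position", "steps to the ball", "returns to base"})
print(total, to_grade(total))  # prints "4 B"
```

Note that weighting "ready position" at 2 points illustrates the last bullet above: awarding more points for one trait shifts the emphasis of the guide without adding any quality judgment.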
Analytic Rubrics • Used with formative assessments. • Quantitative analytic rubrics require the scorer to give a numerical score. • Traits are evaluated for quality. • Use words such as never, sometimes, usually, or always. • Qualitative analytic rubrics provide verbal descriptions for each level of various traits to be evaluated. • Numbers are easier to remember than verbal descriptions.
Holistic Rubrics • Consist of paragraphs written to describe various levels of performance. • Scorers decide which level best describes the overall quality of students’ work. • Faster to use than analytic rubrics. • Useful for assigning grades. • Used for summative evaluations or large-scale testing. • Give limited feedback to students.
Generalized and Task-Specific Rubrics • Generalized rubrics can be used with a variety of assessments. • Tend to look at big-picture characteristics. • A game-play rubric looks at elements common to all games of the same type (e.g., net games). • Task-specific rubrics contain criteria unique to the assessment. • Target specific knowledge or behaviors teachers want to observe.
Deciding How Many Criteria to Include • Simpler assessments require less complex rubrics. • Include all essential elements and leave out the extras. • Descriptors, traits, and characteristics are all terms used to indicate elements on a rubric. • Rubric writers must distinguish between useful indicators and genuine criteria (Wiggins 1998a). (continued)
Deciding How Many Criteria to Include (continued) • Include all important components of performance. • Avoid details that are insignificant. • Write in language that the user can understand. • Take into account contextual variables. • Link to instructional objectives. • Reflect best practice or professional opinion. • Address every task and component of the assessment.
Deciding How Many Levels to Write • A rubric should have a minimum of four levels. • Developmental rubrics are used across all grade levels and use several levels to show a continuum of performances. • Might be used for statewide assessment. • Writing an even number of levels eliminates the temptation to select the middle score (Wiggins 1996).
How to Create Rubrics • Identify what needs to be assessed. • Envision desired student performance. • Identify the main characteristics. • Develop descriptors for a quantitative rubric. • Pilot the assessment. • Develop levels for the rubric. • Create a rubric for students. • Administer the assessment. • Revise the rubric.
Step 1: Identify What Needs to Be Assessed • What is the purpose of this assessment? • What am I assessing? • What information do I need from this assessment? • Review instructional goals. • If teachers fail to clearly identify content, purpose, and information the assessment is to provide, the resulting rubric will be deficient.
Step 2: Envision the Desired Student Performance • Must decide what will be accepted as evidence that the objectives have been met. • Develop mental picture of expected student performance. • Helps establish a target for final product. • Having a clear picture of final expectations will make subsequent steps easier.
Step 3: Identify the Main Characteristics • Brainstorm several words that could become descriptors for rubric. • Write short sentences or phrases that indicate student competency. • Evaluate entire list and group similar criteria.
Step 4: Develop Descriptors (for Quantitative Rubrics) • Develop descriptors for levels that apply to all statements in quantitative rubric. • For example, the words sometimes, often, always, never describe various levels. (continued)
Step 5: Pilot the Assessment and Rubric • Will help determine if correct descriptors were chosen. • Provides teacher with examples of students’ work for developing levels for qualitative or holistic rubrics. • Use rubric to provide feedback to students, not for a grade during the pilot. • Pilot is necessary for seeing if criteria were set at an appropriate level of difficulty.
Step 6: Develop Levels (for Qualitative Rubrics) • Condense the list of descriptors: group similar ones and create essential categories. • Target approximately six characteristics. • Easiest if levels emerge from student products obtained from the pilot. (continued)
Step 6: Develop Levels (for Qualitative Rubrics) (continued) Levels for a rubric should do the following: • Distinguish between the best and least successful performances. • Use descriptors that provide a true distinction between performance levels. • Explain to students what is expected. • Focus on quality, not quantity. • Avoid comparison words such as more than, better, or worse.
Step 7: Create a Separate Rubric for Students (for Qualitative Rubrics) • Only for qualitative rubrics. • Depends on the complexity of the assessment. • A separate rubric may need to be written in clearer language for students without revealing the information being assessed.
Step 8: Administer the Assessment • Administer assessment to students. • Use it to evaluate students’ achievement. • Student rubric should be attached to assessment.
Step 9: Revise the Rubric • Look at students’ performance. • Look for ways to strengthen the rubric. • Continue to revisit rubrics and criteria to refine and improve them.
Special Considerations to Address When Creating Rubrics • Validity • Reliability • Transparency • Subjectivity Without attention to these items, the value of assessment can be jeopardized.
Validity • Valid assessments measure what they intend to measure. • Performance-based assessments tend to have strong face validity. • Validity problems arise if the rubric includes the wrong characteristics or omits items.
Reliability • Assessments are reliable when they consistently produce the same results. • Scores from different evaluators should be similar. • The more clear and simple the rubric, the greater the reliability (Wiggins 1996). • Drift is an unintended change in the criteria applied between scorers. • Reliable rubrics prevent problems of drift because the scorer has something concrete to reference.
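One simple way to check whether scores from different evaluators are similar is percent agreement: the fraction of students to whom two scorers assigned the same rubric level. A minimal sketch (the teachers and rubric scores below are hypothetical examples):

```python
# A minimal sketch of checking scoring consistency between two evaluators.
# The teachers and rubric scores below are hypothetical examples.

def percent_agreement(scorer_a, scorer_b):
    """Fraction of students to whom both scorers assigned the same rubric level."""
    if len(scorer_a) != len(scorer_b):
        raise ValueError("Both scorers must rate the same set of students.")
    matches = sum(a == b for a, b in zip(scorer_a, scorer_b))
    return matches / len(scorer_a)

# Rubric levels (1-4) assigned by two teachers to the same ten students.
teacher_1 = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
teacher_2 = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]

print(percent_agreement(teacher_1, teacher_2))  # prints 0.8
```

An agreement well below 1.0 is a signal to clarify the rubric's descriptors or retrain the scorers, since drift between evaluators undermines the reliability described above.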
Transparency • Rubrics must contain enough detail so that evaluators can score reliably. • This often includes putting some answers in the scoring guide. • The teacher should create a separate scoring guide for students when the scorer’s rubric contains too much information.
Subjectivity • Subjectivity is the amount of judgment used in assigning a score to a student’s test performance. • Subjective tests require judgment, analysis, and reflection on the part of the scorer (Frederiksen and Collins 1989). • Involves making decisions based on professional judgment. • Professional judgment is a more accurate term than subjectivity for describing the evaluation of performance-based assessments. (continued)
Subjectivity (continued) With training, people can use professional judgment to evaluate performance-based assessments in a valid and reliable manner.
Rubric Guidelines • Use samples of students’ work. • Give rubrics with the assessment. • Have students rewrite the rubric. • Allow for multiple correct answers. • Limit the scope of the assessment. • Avoid hypergeneral rubrics. • Write levels of difficulty for cognitive assessments. • Adjust the rubric after the assessment has been scored, not during.
Use Samples of Students’ Work • Exemplars are samples of student work that demonstrate acceptable or unacceptable performance. • Help explain why a score was assigned. • The scorer has concrete examples with which to compare other students’ work. • An anchor is a sample of student work that gives evaluators something to ground their evaluations on.
Give Rubrics With the Assessment • Give students criteria that evaluate the assessment when the task is presented. • Help students understand teacher expectations. • Student achievement is greater.
Have Students Rewrite the Rubric • Discussing rubrics helps students to clarify what is expected. • Have students develop checklists to help understand the expectations. • Students should not be expected to write the rubric for complex assessments.
Allow for Multiple Correct Answers • Teachers must be willing to accept alternative interpretations and performance within the scope of the criteria established. • Rubrics should never mandate the process, format, method, or approach that students are to use to complete the assessment (Wiggins 1998a).
Limit the Scope of the Assessment • Rubric must capture the range of correct responses without allowing just any answer to be appropriate. • If the rubric is complex and too difficult to use, the assessment should be narrowed.
Avoid Hypergeneral Rubrics • If the criteria are too general to distinguish the levels, the rubric is of little value. • Hypergeneral rubrics don’t help students learn and are susceptible to unreliable scoring by the person using them to evaluate (Popham 2003). • The unacceptable level is where teachers struggle most with hypergeneral descriptions.
Writing Levels of Difficulty for Cognitive Assessments • Teachers must address each level when writing a rubric. • The more levels written, the smaller the difference between levels. • Do not have more levels than necessary. • Use students’ work to help determine the number of levels needed.
Adjust the Rubric After the Assessment Has Been Scored, Not During • Even if omissions or errors have been made on the rubric, continue to evaluate using the criteria given to the students. • Adjust the rubric after the assessment and make any necessary changes for future use. • Changes in the rubric must also be addressed in the criteria given to students for a specific task.