
Chapter 3 Rubrics

Presentation Transcript


  1. Chapter 3 Rubrics

  2. What Is a Rubric? • The criteria used for judging students’ performance • Guidelines by which a product is judged (Wiggins 1996) • Explain the standards for acceptable performance (Herman, Aschbacher, and Winters 1996) • Ways for teachers to communicate expectations for students’ performance

  3. Benefits of Using Rubrics • Increase the consistency of scoring. • Improve instruction. • Let students know what is important. • Help students make sense of the learning.

  4. Criteria for Rubrics • Clear criteria are essential in performance-based assessments so students’ work can be judged consistently (Arter 1996). • Product criteria describe what students produce or are able to do at a certain time. • Process criteria are the critical elements necessary for correct performance. • Progress criteria indicate how much students have improved along a learning curve. • When using product criteria to judge student performance, also include process criteria.

  5. Simple Scoring Guides • Checklists (performance lists) contain the criteria that must be seen for a performance or product to meet expectations. • Good for peer or self-assessments. • Scorers are concerned only with whether each characteristic is demonstrated. • Keep items on a checklist to a minimum—too many make the checklist difficult to use. (continued)

  6. Simple Scoring Guides (continued) • When using peer checklists, teach the students how to observe. • Make sure that students understand what all the terms mean. • Show examples of the skill being done correctly. • Show examples of what an error looks like. • Emphasize that the goal is to help the partner improve the skill, not just to earn a grade. • Give younger students fewer items.

  7. Point System Scoring Guides • Similar to checklists, but points are awarded for the various criteria on the list. • There is no judgment of quality. • Offer students feedback based on points. • Teachers can add up the points earned and convert the total to a grade. • More points can be awarded for a certain trait to provide emphasis.
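As a minimal sketch of how such a guide might be tallied, the snippet below totals points and converts them to a grade. The criteria, point values, and grade cutoffs are illustrative assumptions, not taken from the text:

```python
# Hypothetical point-system scoring guide for a motor skill.
# Point values are illustrative; note the emphasized trait is
# worth more points, as the slide suggests.
criteria = {
    "steps with opposite foot": 2,
    "rotates hips and shoulders": 2,
    "follow-through across body": 4,  # emphasized trait: extra points
    "hits target area": 2,
}

def score(observed):
    """Sum the points for each criterion the scorer checked off."""
    return sum(pts for name, pts in criteria.items() if observed.get(name))

def to_grade(earned, possible):
    """Convert earned points to a letter grade with simple cutoffs."""
    pct = earned / possible
    if pct >= 0.9:
        return "A"
    if pct >= 0.8:
        return "B"
    if pct >= 0.7:
        return "C"
    return "Needs work"

possible = sum(criteria.values())  # 10 points total
earned = score({"steps with opposite foot": True,
                "rotates hips and shoulders": True,
                "follow-through across body": True})
print(earned, to_grade(earned, possible))  # prints: 8 B
```

Note there is still no judgment of quality here: each criterion is either checked off or not, which is exactly what distinguishes a point system from an analytic rubric.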

  8. Analytic Rubrics • Used with formative assessments. • Quantitative analytic rubrics require the scorer to give a numerical score for each trait. • Traits are evaluated for quality. • They use words such as never, sometimes, usually, or always. • Qualitative analytic rubrics provide verbal descriptions for each level of each trait being evaluated. • Numbers are easier to remember than verbal descriptions.

  9. Holistic Rubrics • Consist of paragraphs written to describe various levels of performance. • Scorers decide which level best describes the overall quality of students’ work. • Faster to use than analytic rubrics. • Useful for assigning grades. • Used for summative evaluations or large-scale testing. • Give limited feedback to students.

  10. Generalized and Task-Specific Rubrics • Generalized rubrics can be used across a variety of assessments. • They tend to look at big-picture characteristics. • A game-play rubric, for example, looks at elements common to all games of the same type (e.g., net games). • Task-specific rubrics contain criteria unique to one assessment. • They target the specific knowledge or behaviors teachers want to observe.

  11. Deciding How Many Criteria to Include • Simpler assessments require less complex rubrics. • Include all essential elements and leave out the extras. • Descriptors, traits, and characteristics are all terms used for the elements on a rubric. • Rubric writers must distinguish between merely useful indicators and genuine criteria (Wiggins 1998a). (continued)

  12. Deciding How Many Criteria to Include (continued) • Include all important components of performance. • Avoid details that are insignificant. • Write in language that the user can understand. • Take into account contextual variables. • Link to instructional objectives. • Reflect best practice or professional opinion. • Address every task and component of the assessment.

  13. Deciding How Many Levels to Write • A rubric should have a minimum of four levels. • Developmental rubrics are used across all grade levels and use several levels to show a continuum of performance. • They might be used for statewide assessment. • Writing an even number of levels eliminates the temptation to select the middle score (Wiggins 1996).

  14. How to Create Rubrics • Identify what needs to be assessed. • Envision desired student performance. • Identify the main characteristics. • Develop descriptors for a quantitative rubric. • Pilot the assessment. • Develop levels for the rubric. • Create a rubric for students. • Administer the assessment. • Revise the rubric.

  15. Step 1: Identify What Needs to Be Assessed • What is the purpose of this assessment? • What am I assessing? • What information do I need from this assessment? • Review instructional goals. • If teachers fail to clearly identify the content, purpose, and information the assessment is to provide, the resulting rubric will be deficient.

  16. Step 2: Envision the Desired Student Performance • Decide what will be accepted as evidence that the objectives have been met. • Develop a mental picture of the expected student performance. • This helps establish a target for the final product. • A clear picture of the final expectations makes subsequent steps easier.

  17. Step 3: Identify the Main Characteristics • Brainstorm several words that could become descriptors for the rubric. • Write short sentences or phrases that indicate student competency. • Evaluate the entire list and group similar criteria.

  18. Step 4: Develop Descriptors (for Quantitative Rubrics) • Develop descriptors for levels that apply to all statements in quantitative rubric. • For example, the words sometimes, often, always, never describe various levels. (continued)

  19. Step 4: Develop Descriptors (for Quantitative Rubrics) (continued)

  20. Step 5: Pilot the Assessment and Rubric • Piloting helps determine whether the correct descriptors were chosen. • It provides the teacher with examples of students’ work for developing levels for qualitative or holistic rubrics. • During the pilot, use the rubric to provide feedback to students, not for a grade. • The pilot is also necessary to see whether the criteria were set at an appropriate level of difficulty.

  21. Step 6: Develop Levels (for Qualitative Rubrics) • Condense the list of descriptors: group those that are alike and create the essential categories. • Target approximately six characteristics. • It is easiest if levels emerge from student products obtained from the pilot. (continued)

  22. Step 6: Develop Levels (for Qualitative Rubrics) (continued) Levels for a rubric should do the following: • Distinguish between the best and least successful performances. • Use descriptors that provide a true distinction between performance levels. • Explain to students what is expected. • Focus on quality, not quantity. • Avoid comparison words such as more than, better, or worse.

  23. Step 7: Create a Separate Rubric for Students (for Qualitative Rubrics) • Needed only for qualitative rubrics. • Whether one is needed depends on the complexity of the assessment. • A separate rubric may need to be written more clearly for students without revealing the information being assessed.

  24. Step 8: Administer the Assessment • Administer the assessment to students. • Use it to evaluate students’ achievement. • The student rubric should be attached to the assessment.

  25. Step 9: Revise the Rubric • Look at students’ performance. • Look for ways to strengthen the rubric. • Continue to revisit rubrics and criteria to refine and improve them.

  26. Special Considerations to Address When Creating Rubrics • Validity • Reliability • Transparency • Subjectivity Without attention to these items, the value of assessment can be jeopardized.

  27. Validity • Valid assessments measure what they intend to measure. • Performance-based assessments tend to have strong face validity. • Validity problems arise if the rubric includes the wrong characteristics or omits items.

  28. Reliability • Assessments are reliable when they consistently produce the same results. • Scores from different evaluators should be similar. • The clearer and simpler the rubric, the greater the reliability (Wiggins 1996). • Drift is a change in how criteria are applied between scorers. • Reliable rubrics prevent drift because scorers have something concrete to reference.

  29. Transparency • Rubrics must have enough detail that evaluators can score reliably. • This often means putting some answers in the scoring guide. • The teacher should create a separate scoring guide for students when the scorer’s rubric contains too much information.

  30. Subjectivity • Subjectivity is the amount of judgment used in assigning a score to a student’s test performance. • Subjective tests require judgment, analysis, and reflection on the part of the scorer (Frederiksen and Collins 1989). • They involve making decisions based on professional judgment. • Professional judgment is a more accurate term than subjectivity for describing the evaluation of performance-based assessments. (continued)

  31. Subjectivity (continued) With training, people can use professional judgment to evaluate performance-based assessments in a valid and reliable manner.

  32. Rubric Guidelines • Use samples of students’ work. • Give rubrics with the assessment. • Have students rewrite the rubric. • Allow for multiple correct answers. • Limit the scope of the assessment. • Avoid hypergeneral rubrics. • Write levels of difficulty for cognitive assessments. • Adjust the rubric after the assessment has been scored, not during.

  33. Use Samples of Students’ Work • Exemplars are samples of student work that demonstrate acceptable or unacceptable performance. • They help explain why a score was assigned. • The scorer has concrete examples with which to compare other students’ work. • An anchor is a piece of student work that gives evaluators something to ground their evaluations in.

  34. Give Rubrics With the Assessment • Give students the criteria used to evaluate the assessment when the task is presented. • Rubrics help students understand teacher expectations. • Student achievement is greater when students know the criteria in advance.

  35. Have Students Rewrite the Rubric • Discussing rubrics helps students clarify what is expected. • Have students develop checklists to help them understand the expectations. • Students should not be expected to write the rubric for complex assessments.

  36. Allow for Multiple Correct Answers • Teachers must be willing to accept alternative interpretations and performance within the scope of the criteria established. • Rubrics should never mandate the process, format, method, or approach that students are to use to complete the assessment (Wiggins 1998a).

  37. Limit the Scope of the Assessment • The rubric must capture the range of correct responses without allowing just any answer to be appropriate. • If the rubric is complex and too difficult to use, the assessment should be narrowed.

  38. Avoid Hypergeneral Rubrics • If the criteria are too general to distinguish the levels, the rubric is of little value. • Hypergeneral rubrics don’t help students learn and are susceptible to poor reliability on the part of the evaluator (Popham 2003). • The unacceptable level is where teachers struggle most with hypergeneral descriptions.

  39. Writing Levels of Difficulty for Cognitive Assessments • Teachers must address each level when writing a rubric. • The more levels written, the smaller the difference between levels. • Do not have more levels than necessary. • Use students’ work to help determine the number of levels needed.

  40. Adjust the Rubric After the Assessment Has Been Scored, Not During • Even if omissions or errors have been made on the rubric, continue to evaluate using the criteria given to the students. • Adjust the rubric after the assessment and make any necessary changes for future use. • Changes in the rubric must also be addressed in the criteria given to students for a specific task.
