
Building Rubrics For Large-scale, Campus-wide Assessment


Presentation Transcript


  1. Building Rubrics For Large-scale, Campus-wide Assessment Thomas W. Zane tom.zane@slcc.edu Diane L. Johnson djohnson@new.edu Jodi Robison jrobison@new.edu

  2. Housekeeping • Presenters • Thomas W. Zane • Diane L. Johnson • And you are? • Locations • Breaks • Workshop Process • Workbooks • Materials • Symbols

  3. Agenda • Morning – Learning and Exercises • Foundations of large-scale rubrics • Rubric design • Criteria • Scales • Descriptors • Graders' notes and exemplars • Review methods • Afternoon Practice – 3 Methods • Select rows from existing rubrics (Critical Thinking) • Adapt from existing rubrics (Communications) • Select criteria (Build from Common Criteria)

  4. Differences Between Classroom and Large-scale Rubrics • Stringency of the development rules • The types of information found within and attached to the rubric.

  5. 1.0 Foundations

  6. 1.1 & 1.2 Parts of a Rubric • Score Scale • Criteria • Descriptors • Instructions • Scoring Rules • Graders’ Scoring Notes & Exemplars

  7. 1.3 Holistic & Analytic Rubrics • Holistic • One big decision about submission as a whole • Integrated (complex) scoring descriptions • Analytic • Many decisions • Row-by-row scoring on various criteria

  8. 1.3 Analytic vs. Holistic Rubrics • We have learned that analytic rubrics: • Offer far better feedback to students • Provide better data for making curricular decisions • Improve reliability across graders • Are easier to train and use • Can take LESS time to score • So are far better suited to supporting our high-volume grading efforts

  9. 1.4 Why Use Rubrics? • Assess progress, ability, etc. • Communicate • Improve learning • Increase reliability of scoring • Link to standards & course objectives • Reduce noise • Save time • Set and/or clarify expectations

  10. 1.5 Benefits of Rubrics • Support Student Learning and Success • Support and Inform Teaching Practice • Support Specific Pedagogical Strategies • And oh, by the way… • That same data from the classroom can then be used to satisfy nearly every call for accountability measurement!

  11. 1.6 Potential Shortcomings of Rubrics • Time (To Build) • Aggravation • Effort

  12. 1.7 High to Low Inference Rubrics • High Inference – The process of coming to a conclusion or judgment based on known or assumed criteria. • Low Inference – Rubric defines more precisely what the evaluator is to detect in the performance. • [Continuum graphic from Low to High inference; the green zone marks the range optimal for analytic rubrics.]

  13. 1.8 Span of Submissions • General – Many submissions • Task Specific – One submission • [Continuum graphic from General to Task Specific.]

  14. 1.9 Content and Trait Based • Content-based Rubrics – measure what the student had to say. • Trait-based Rubrics – measure how well the student said it.

  15. 1.10 Educative vs. Scoring • Educative rubrics tend to be formative and written for student use. • Scoring rubrics tend to be summative and are written in great detail to inform scoring.

  16. Why Use a Hybrid Rubric? • E-portfolio Assessment – a sample of all signature assignments. • Program Assessment – classroom data aggregated here. • Classroom Assignments – almost every assessment measure starts here.

  17. 1.11 Usability and Flow • A rubric should be quick and easy to use. • Graders should spend time with their eyes on the student submission or performance rather than on a long, complex, or difficult to follow rubric. • Good flow depends on clarity, brevity, white space, a small number of rows and columns, concrete descriptors, and good organization of criteria.

  18. Need a Break?

  19. 2. Should I Use a Rubric? • 1. Is a rubric the correct tool to use? • Constructed response or performance? • Require more than one correct answer? • Gradations of quality? • 2. What type of rubric design would be best? • Who will grade? • Will there be usage limitations?

  20. 2.1 Identify Overall Purpose of the Assignment and Score(s) • What are the broad goals of the program/assignment and where does the measure fit into the program? • What is/are the purpose(s) for the scores? • What decisions might be made? • What actions might be taken? • What consequences may result? • What does “success” look like? • Academic? Real-world? Both?

  21. 2.2 Targets of Measurement • Person • Product • Performance • Process • Impact

  22. 3.0 Criteria

  23. Criteria selection is the most important decision of the entire rubric creation process. Criteria selection answers the question: What matters?

  24. General Rules for Criteria Development • Use real-world standards and human judgment. • Collaborate with peers. • Dig deeply into the construct of interest. • Select hard to measure criteria rather than settling for what is easy to count. • Consider the criterion breadth on each criterion row. • Search for mutually supportive rather than mutually exclusive criteria. • If working with trait-based rubrics, try to agree on what quality means across sections of a course, multiple courses, and across the entire campus.

  25. 3.1 Types of Criteria • Quality • Depth or Breadth • Relevance or Adequacy • Impact • Accuracy • Physical Attributes (rarely used)

  26. 3.2 Methods for Finding Criteria (easiest to hardest) • Draw from existing rubrics • Select from common, generic, or universal criteria • Industry or discipline based standards and learning outcomes • Break down the constructs from scratch

  27. 3.3 Method 1: Select from Preexisting Rubrics • Search for existing rubrics that identify criteria. • Look for specific aspects and exemplars of each criterion. • Don’t adopt it just because it is in print. • Provide attribution.

  28. 3.3 Method 2: Select from Generic Criteria • Scan the Generic Criteria list. • Select 4-12 criteria that matter based on your purpose and values.

  29. 3.3 Method 3: Draw from Standards & Outcomes • Search standards and outcomes instead of rubrics. • Pull valued criteria out for measurement.

  30. Notes About Using Standards and Outcomes • Standards are frequently written as “values” or broad “goals”. • Outcomes or Curriculum Objectives often resemble behavioral objectives. • Real-world Outcomes are wonderful, but may be listed as statements. • Search for concrete definitions of success.

  31. 3.3 Method 4: Build From Scratch • Research the constructs of interest (critical thinking, reflection, life-long learning, etc.) • Brainstorm • Deconstruct the larger constructs into measurable parts

  32. Cautions When Selecting Attributes For Your Rubric • Rarely use physical appearance. • Don’t measure simple content knowledge. • Avoid progress or gain over time. • Affective measures are extraordinarily difficult. • Don’t expect written responses to measure everything.

  33. 3.5 Criteria Selection Rules • Actionable • Authentic • Valued • Important • Aligned • Sized • Clear • Complete • Supportive

  34. 3.6 Order the Criteria • By the chronological order graders will encounter each criterion within the submission or performance. • By the order of the performance. • By criteria order that are familiar to the graders. • By logical grouping. • In order of cognitive complexity.

  35. 3.7 Define Each Criterion • Briefly define the meaning of the criteria on each row of the rubric table. • Good criteria definitions make it relatively easy to see when the descriptors across each row contain elements that are inside or outside the intended measure.

  36. 3.8 Check for Construct Irrelevant Variance (CIV) • Overrepresentation • Extra criteria in the rubric. • Nice to measure, but not directly related to the purpose of the rubric. • Underrepresentation • Missing criteria that should have been included in the rubric. • Things that were critical to measure but were missed.

  37. Need a Break?

  38. 4.0 Scales

  39. 4.1 Examples of Scales

  40. 4.2 Rules for Scoring Scales • Scoring scales must reflect the purpose of the rubric • Define the discernible quality levels in student performances • Use as few scoring levels as you can to get the job done (as the number of columns increases, grading costs can increase proportionally)

  41. 5.0 Descriptors

  42. 5.1 Make Descriptors Observable and Measurable • Graders need to be able to see, hear, taste, smell, or otherwise perceive characteristics of student performance and then apply a score to the observation.

  43. 5.2 Differentiation between Descriptors • We use three absolute rules: • Use brief, clear, and concrete descriptions of observable performance. • Use observable conditions/indicators to help differentiate one scoring level from another. • Do not use comparative language (e.g., good, better, best) to differentiate scoring levels.

  44. 5.3 Unidimensional vs. Multidimensional Scales • Unidimensional scales – changes across the row reflect degrees of a single criterion. • Guard against breaking the row down into minutiae – thus losing the integrated whole.

  45. 5.4 Descriptive vs. Comparative Scales • Descriptive descriptors describe discernible deeds! • Describe performance rather than make judgments about it.

  46. 5.5 Qualitative vs. Quantitative Scales • Ensure that your rubric measures gradations of quality rather than counting characteristics of quality. • More is not usually a strong surrogate for better. • Simple Rule: Use counting 0 to 1% of the time! ;-)

  47. 5.6 Level of Detail • Just enough detail - • Offers students some guidance for the assignment. • Provides definitions of success. • Helps graders score consistently. • Provides meaningful feedback. • Supports better stakeholder score interpretation.

  48. 5.7 Writing The Descriptors • Begin with the passing descriptor. • Then create the bottom of the scale. • Now the top of the scale. • Finally, work on the just-under-passing descriptor.

  49. 6.0 Graders' Notes & Exemplars

  50. 6.1 Other Decisions: Operations and Training • Procedure for retrieving and scoring performances • Procedure for accessing and using graders' notes • Rules governing anonymity, confidentiality, etc. • Procedures for marking and completing rubrics • Rules governing feedback • Rules for decision making (pass/fail, revision needed) or computing a score (includes weighting) • Procedures for reporting problems
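To make the "computing a score (includes weighting)" decision above concrete, here is a minimal sketch of one possible scoring rule. It is not the presenters' procedure; the criterion names, weights, scale maximum, and 70% pass threshold are all assumptions chosen for illustration.

```python
# A sketch of one way a weighted scoring rule might turn row-by-row
# analytic scores into a pass/revise decision. All values are hypothetical.

ROW_WEIGHTS = {"Organization": 2.0, "Evidence": 3.0, "Mechanics": 1.0}
MAX_LEVEL = 3            # assumed top of the score scale for every row
PASS_THRESHOLD = 0.70    # assumed: pass if 70% of weighted points are earned

def weighted_decision(row_scores: dict[str, int]) -> tuple[float, str]:
    """Return the weighted percentage and a pass/revise decision."""
    earned = sum(ROW_WEIGHTS[c] * s for c, s in row_scores.items())
    possible = sum(w * MAX_LEVEL for w in ROW_WEIGHTS.values())
    pct = earned / possible
    return pct, ("pass" if pct >= PASS_THRESHOLD else "revision needed")

print(weighted_decision({"Organization": 2, "Evidence": 3, "Mechanics": 1}))
# -> (0.777..., 'pass')
```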
