Aligning VALUE Rubrics to Institutional Purposes: Sharing Practical Approaches that Promote Consistent Scoring and Program Improvement
Linda Siefert (siefertl@uncw.edu)
Anne Pemberton (pembertona@uncw.edu)
University of North Carolina Wilmington
Discussion Topics
• Who is using VALUE rubrics or other meta-rubrics?
• How are they being used?
• What feedback is being gathered from the rubric users?
• What procedures are being used to improve consistency of scoring?
• What changes have been made at the institutions using the rubrics?
POLL
• Are you using AAC&U VALUE Rubrics at your institution?
  • For General Education assessment
  • For assessment in the majors
• Are you using other meta-rubrics at your institution?
  • For General Education assessment
  • For assessment in the majors
Which VALUE Rubrics do you use? (www.aacu.org/VALUE/rubrics/index_p.cfm)
• Inquiry and analysis
• Critical thinking
• Creative thinking
• Written communication
• Oral communication
• Reading
• Quantitative literacy
• Information literacy
• Teamwork
• Problem solving
• Civic knowledge and engagement
• Intercultural knowledge
• Ethical reasoning
• Foundations and skills for life-long learning
• Integrative and applied learning
Implementation Procedures (1)
• How we are using the rubrics:
  • A number of VALUE Rubrics are aligned to our UNCW Learning Goals. These rubrics are used to score student work products from general education courses and senior capstone courses.
  • Courses whose student learning outcomes align with the Learning Goals, and that are representative of the courses most students take, are selected. Sections are chosen by stratified random sampling, and students within each selected section are drawn at random (see the sampling sketch after this slide).
  • A workshop is held before the semester begins to acquaint or reacquaint instructors with the rubric(s). Instructors then select an assignment that they believe matches most or all dimensions of the rubric.
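The slide above describes a two-stage sample: sections stratified by course, then students drawn at random within each selected section. Below is a minimal Python sketch of that procedure; the roster data, sample sizes, and seed are hypothetical illustrations, not UNCW's actual courses or parameters.

    import random

    # Hypothetical roster: course -> {section_id: [student_ids]}.
    # Courses, sections, and students below are invented for illustration.
    roster = {
        "ENG 101": {"001": ["s01", "s02", "s03", "s04"],
                    "002": ["s05", "s06", "s07"]},
        "PHY 101": {"003": ["s08", "s09", "s10", "s11"],
                    "004": ["s12", "s13"]},
    }

    rng = random.Random(2024)  # fixed seed so the draw is reproducible

    def two_stage_sample(roster, sections_per_course=1, students_per_section=2):
        """Stratify by course: sample sections within each course,
        then sample students within each chosen section."""
        sample = []
        for course, sections in roster.items():
            chosen = rng.sample(sorted(sections), k=min(sections_per_course, len(sections)))
            for sec in chosen:
                students = sections[sec]
                picked = rng.sample(students, k=min(students_per_section, len(students)))
                sample.extend((course, sec, s) for s in picked)
        return sample

    for course, sec, student in two_stage_sample(roster):
        print(course, sec, student)

Fixing the random seed is a deliberate choice here: it makes the sample auditable, so the same roster always yields the same selection.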
Implementation Procedures (2)
• Scoring Workshop
  • A two-hour training workshop is held prior to scoring.
  • Scorers report that the training is adequate, although a few say they were not as prepared on the day of scoring as they had thought.
• Scoring is performed at an all-day or half-day session.
  • Scorers work in pairs and score the first work product from each packet together to recalibrate.
  • Additional work products are double scored to measure interrater reliability (IRR).
Implementation Procedures (3)
• How are you using VALUE or other meta-rubrics?
Feedback from Scorers
• Feedback we've received:
  • The written communication rubric fits assignments well, requiring few assumptions.
  • Inquiry is approached differently across disciplines, and most of these differences fall under the Design Process dimension. Most scoring pairs needed to make assumptions about the process, inferring it from the assignment. At the "basic studies" level, Topic Selection was often judged not applicable.
  • Critical thinking has been the most difficult rubric to apply; some of the difficulty comes from the rubric itself, some from the assignments. A number of scorers said that the assignments needed to be better matched to the rubric.
  • Several comments noted that faculty need to provide more in-depth instructions for assignments.
• Feedback you've received:
Feedback from Instructors
• Feedback we've received:
  • All instructors to date have said that the assignment selection process was not difficult, which does not match scorer feedback about some of the assignments.
  • One instructor said that more training would be beneficial; this is an area we will be working on.
• Feedback you've received:
Interrater Reliability
• We measure IRR using percent agreement, Spearman's rho, and Krippendorff's alpha (a minimal computational sketch follows this slide).
• During the first round, we met our benchmark on 3 of 15 dimensions and were close on 3 more.
• Meta-rubrics are more difficult to apply than assignment-specific rubrics.
• How are you measuring, and what are your findings?
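As a rough illustration of the first two statistics, here is a minimal Python sketch for one rubric dimension, assuming two raters score the same work products on a 0-4 scale. The scores are invented for the example; SciPy's spearmanr computes rho, and Krippendorff's alpha is only noted, since it is usually computed with a dedicated package rather than by hand.

    from scipy.stats import spearmanr

    # Invented example: two raters' scores for one rubric dimension (0-4 scale).
    rater_a = [3, 2, 4, 1, 3, 2, 0, 4, 2, 3]
    rater_b = [3, 2, 3, 1, 3, 1, 0, 4, 2, 2]

    # Percent agreement: share of work products given identical scores.
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    percent_agreement = agreements / len(rater_a)

    # Spearman's rho: rank correlation between the two raters' scores.
    rho, p_value = spearmanr(rater_a, rater_b)

    print(f"Percent agreement: {percent_agreement:.0%}")
    print(f"Spearman's rho: {rho:.2f} (p = {p_value:.3f})")

    # Krippendorff's alpha accommodates ordinal scales and missing data;
    # it is typically computed with a dedicated package (e.g., the
    # `krippendorff` library on PyPI) rather than implemented by hand.

Using several statistics together is informative because they capture different things: percent agreement rewards exact matches, while rho credits raters who rank work products consistently even when their absolute scores differ.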
Changes to VALUE Rubrics
• We have made one change:
  • In the Evidence dimension of Critical Thinking, we divided interpretation from questioning the viewpoints of experts.
• What changes have you made to any of the VALUE rubrics?
  • To fit institutional mission and use
  • To improve consistent use
Changes Made to Instruction
• We have started a UNCW Learning Goals series through the Center for Teaching Excellence to begin conversations about each learning goal.
• Faculty are beginning to grapple with the difference between thinking and critical thinking.
• What changes are being made at your institutions?