Recap of recent assessment actions, review of writing and critical thinking data, key conversations and suggestions for improving student learning. Analysis of assessment checklist, MCQ assessment, and changes in writing assessment methods. Discussions on critical thinking, information literacy, writing indicators, and historical understanding. Examination of multiple choice quiz data and insights on critical thinking development among students.
Assessment Day 2018 Humanities
Agenda 1. Recap of ASMT Actions (Travis) 2. Review MCQ Writing Data (John) 3. Review Common Writing Assessment Data (Travis) 4. Key Conversations • Gaps? High school vs. Race (Travis, John, Ana) • Candidate courses for ASMT – Myth, PHI, Ethics? • Critical Thinking Modules? (Travis’s PPT, Areej’s PPT, Ralf?) 5. Plan to improve student learning
Before We Leave… [?] Course leaders by campus [?] Which courses to plan to assess [?] Critical Thinking module – volunteers [?] Some feedback on gaps
Current Assessments Checklist and MCQ. • Purpose of the Checklist assessment? • Brief Intro • Purpose of the MCQ assessment? • Brief Intro
Changes and Results Change 1: fewer papers (since 2017) • One goal is to work smarter rather than harder. Our data is limited. Is it significant? (Rule of thumb: 30 data points if randomly sampled.) Requesting fewer papers actually gave us far more usable data points this year. Change 2: new communication plans • We requested 154 papers; 89 were submitted. (On West, there were 38 requests and 29 submissions; 6 were non-submissions and 3 were drops, so all papers were accounted for. The 89/154 ratio doesn't tell us much on its own.) 78 of those papers were assessed by two faculty members, so 88% of submitted papers were assessed. Change 3: formalize prompts from essays • 92% (up from 86%) of assessed papers included the assignment; 97% (up from 94%) included the directions.
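The participation figures above reduce to two simple ratios. A minimal sketch checking the arithmetic (values taken directly from the slide):

```python
# Participation arithmetic from the slide (Fall requests).
requested = 154       # papers requested
submitted = 89        # papers actually submitted
double_assessed = 78  # papers read by two faculty members

submission_rate = submitted / requested        # share of requested papers that arrived
assessment_rate = double_assessed / submitted  # share of submitted papers read twice

print(f"Submission rate: {submission_rate:.0%}")  # 58%
print(f"Assessment rate: {assessment_rate:.0%}")  # 88%
```

This makes explicit why the slide reports 88%: it is a percentage of *submitted* papers, not of all requested papers.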
Reminder on Scale Two-Tiered Response System 0/1 = no 2/3 = yes n/a = data disregarded Two Readers A: Assessors agree on # B: Assessors agree on category but not # C: Assessors disagree on category
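The two-reader protocol above can be expressed as a small classifier. This is a sketch based on the slide's stated rules (the function name and exact category semantics are my reading of the slide, not an official rubric):

```python
def agreement_category(score_a: int, score_b: int) -> str:
    """Classify two assessors' ratings on the 0-3 scale.

    Per the slide: 0/1 = "no", 2/3 = "yes".
      A - assessors agree on the exact number
      B - assessors agree on the yes/no category but not the number
      C - assessors disagree on the category
    """
    if score_a == score_b:
        return "A"
    if (score_a >= 2) == (score_b >= 2):  # same side of the yes/no split
        return "B"
    return "C"

print(agreement_category(2, 2))  # A
print(agreement_category(2, 3))  # B
print(agreement_category(1, 3))  # C
```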
Overview 92% of papers submitted included the assignment. 97% of papers submitted included the directions. Employing the same methods as last year, we had much greater data available (partially because of higher participation and partially because of more agreements on ratings).
Conversations Areej Zufari has made available a PowerPoint on Critical Thinking (for students). The LOLs have produced a PowerPoint on Critical Thinking that is available (for assessing). Others (e.g., Ralf Jenne) have expressed interest in working on a full module on Critical Thinking. [?] So, who is interested in participating in creating a PPT or a module on critical thinking, directed at students? [?] What would be a reasonable goal to accomplish by the next assessment cycle?
Conversations The LOLs have made a PPT on Information Literacy available. We had a good talk at a previous ASMT Day regarding Information Literacy, which seemed to help the numbers last year. [?] Can we agree that acceptable assignments must ask students to use sources? [?] Can we agree that all directions must specify that students must document those sources?
Conversations These are the best overall numbers we've seen in three years on the writing indicators. Logic and Organization was the biggest driver of the increase. [?] Can a simple "thesis check" help students more effectively structure their papers? [?] Can discussing how to align topic sentences with the thesis help further?
Conversations These are probably the most distressing data. Over the past two years, only half of the students have identified specific historical events in their papers, and fewer than half of those students have effectively used those events in their analysis. [?] Can we agree that an acceptable paper assignment must ask students to do both of these things?
MCQ 2018 Data Total Responses: 1088 Pre-Course, 858 Post-Course (~3600 invites) 4+ minutes: 742 pre / 550 post Match: 248 29 high-risk* 44 low-risk * According to the Valencia Value
Response Rate 4+Min 2016: 23%/6% 2017: 22%/15% 2018: 20%/16% Match (Students who took both the pre- and post-course test): 2016: 4% 2017: 7% 2018: 7%
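The 2018 rates above follow from the raw counts on the previous slide. A quick check in Python (the ~3600 invite count is approximate, which is why these computed figures land within a point of the slide's rounded 20%/16%):

```python
# Response-rate arithmetic (2018 counts from the slides; invites approximate).
invites = 3600     # approximate invitation count
pre_4min = 742     # pre-course responses taking 4+ minutes
post_4min = 550    # post-course responses taking 4+ minutes
matched = 248      # students who took both pre- and post-course tests

print(f"Pre (4+ min):  {pre_4min / invites:.0%}")   # 21% (slide rounds to 20%)
print(f"Post (4+ min): {post_4min / invites:.0%}")  # 15% (slide rounds to 16%)
print(f"Matched:       {matched / invites:.0%}")    # 7%
```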
Correctly Answered Questions [Chart: distribution of correctly answered questions, out of 15]
Explanations? • We don’t know. • Faculty are more intentional about discussing critical thinking. • Students are showing up better prepared to engage in critical thinking. • Students are more intentional about thinking about critical thinking. • Other candidates?
Candidate Explanation 2 (and 4) From Post-Course MCQ: Did your professor spend course time developing your critical thinking skills by appeal to evidence, bias, and context? Among the 248 "matched" students: 232 Yes (94%) 16 No (6%) N.B. Both groups had a mean 9.6 correct on post-test. Were you exposed to a PPT that introduced critical thinking by appeal to evidence, bias, and context? 173 Yes 70% 9.5 mean (Post) 75 No 30% 9.9 mean (Post)
Climbing Mean 2016 match post-test: 7.8 2017 match post-test: 8.2 2018 match post-test: 9.6
Low-Risk vs. High-Risk? LOW: pre-course: 9.0 / post-course: 9.4 n = 44 HIGH: pre-course: 8.8 / post-course: 9.6 n = 29
Questions to Discuss? • Q3: Voltaire • Q8: Baroque • Q14: Medieval Christianity
Candidates for Assessment Mythology PHI 2010 PHI 2600
Other Gaps We’ve looked at data by low-risk and high-risk schools. HUM 2220 – Fall 2016 (18th most popular course) African American males: 67.7% success rate (14% fail, 10.8% W) Caucasian females: 81.4% success rate (6% fail, 9.3% W) HUM 2310 – Fall 2016 (16th most popular course) African American males: 67.6% success (8.8% F, 7.4% W) “Other” females: 83.6% success (7.2% F, 3.9% W) HUM 1020 – Fall 2016 (7th most popular) African American males: 64.1% success (14.8% F, 11.4% W) “Other” females: 92.6% success (1.9% F, 2.8% W)
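The success-rate gaps above are easier to compare when computed side by side. A sketch using the Fall 2016 figures from the slide (course grouping and labels copied as given):

```python
# Success-rate gaps by course (Fall 2016 figures from the slide).
# Values are percent of enrolled students earning a successful grade.
courses = {
    "HUM 2220": {"African American males": 67.7, "Caucasian females": 81.4},
    "HUM 2310": {"African American males": 67.6, "Other females": 83.6},
    "HUM 1020": {"African American males": 64.1, "Other females": 92.6},
}

for course, rates in courses.items():
    gap = max(rates.values()) - min(rates.values())
    print(f"{course}: {gap:.1f} percentage-point gap")
```

The gaps range from roughly 14 points (HUM 2220) to nearly 29 points (HUM 1020), which sharpens the question of where intervention would matter most.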
What’s the… Most important change we could make in the learning process by this time next year? Most important change we could make in assessment to try to make our data more meaningful?