
An Institutional Writing Assessment Project



Presentation Transcript


  1. An Institutional Writing Assessment Project Dr. Loraine Phillips Texas A&M University Dr. Yan Zhang University of Maryland University College October 2010

  2. Agenda • Why a Writing Assessment Project? • Data source of the project • College participation • Assessment rubric • Scoring of the papers • Interrater agreement • Results • Discussion and future of the project

  3. Why a Writing Assessment Project? • Undergraduate core competencies were established for Texas A&M graduates. • Effective communication skills are crucial to student success. • The project provides evidence of the quality of student writing, allowing participating faculty to understand their students' performance more comprehensively. • The project is conducted by the Office of Institutional Assessment in conjunction with the University Writing Center.

  4. Data Source • The data in this writing assessment project were student papers from: • Upper-Division • Capstone • or Upper-Division “W” (writing intensive) courses. • Assignments were approximately 1-20 pages in length. • Assignments that appealed to a general academic audience were preferred. • Examples given included: persuasive or argument papers, summary papers, analysis papers, letters or correspondence, lab or other reports, and case studies.

  5. Breakdown of College Participation

  6. Assessment Rubric • The writing assessment rubric was developed in conjunction with the University Writing Center, the assessment liaisons, and the Core Curriculum Council. • The rubric was designed to promote validity, uniformity, and consistency in the grading process. • The rubric was organized into four specific criteria to help manage grading. • After feedback from the project pilot, the rubric was revised into the form shown in the following figure.

  7. Writing Assessment Rubric

  8. Rater Calibration and Scoring • All identifiable information for students and faculty was redacted from the papers. • All-day grading sessions were conducted with Dr. Valerie Balester, Executive Director of the University Writing Center, and Dr. Candace Schaefer, Associate Director of the University Writing Center, serving as facilitators. • Faculty members were calibrated by paper genre for the scoring session. • The scoring sheet is provided in the following slide.

  9. Writing Assessment Project Scoring Sheet • The grading was done by faculty members of the institution from across disciplines. • Grader participation is included in the following slide.

  10. Breakdown of Grader Participation by College

  11. Interrater Agreement • Each writing assessment assignment was scored by two independent graders, with a third grader brought in when the first two disagreed substantially. • Interrater agreement was judged to be statistically substantial (intraclass correlation coefficient of .624). • As the intraclass correlation coefficient approaches 1.0, there is less variance within item ratings.
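  A minimal sketch, under illustrative assumptions, of how a one-way intraclass correlation like the one reported above could be computed from two independent raters' scores (the data and the ICC(1,1) formulation are stand-ins, not the project's actual analysis):

  import numpy as np

  # Hypothetical scores: rows are papers, columns are the two independent raters (scale 1-3).
  scores = np.array([
      [2, 2],
      [3, 2],
      [1, 1],
      [2, 3],
      [3, 3],
  ], dtype=float)

  n, k = scores.shape                       # n papers, k raters
  grand_mean = scores.mean()
  paper_means = scores.mean(axis=1)

  # One-way ANOVA mean squares underlying ICC(1,1).
  ms_between = k * ((paper_means - grand_mean) ** 2).sum() / (n - 1)
  ms_within = ((scores - paper_means[:, None]) ** 2).sum() / (n * (k - 1))

  icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
  print(f"ICC(1,1) = {icc:.3f}")            # values closer to 1.0 mean less within-paper rater variance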

  12. Interrater Agreement (Continued) • The rate at which two graders agreed on an assessed item by giving that item the same score was also reviewed. • Simple agreement between raters on the scores of the items assessed showed a descriptive mean of .676. • Thus, roughly two-thirds of the time, two independent graders assessed an item and gave it the same score.
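  As a rough illustration, simple agreement of the kind described above is just the share of items on which both graders gave identical scores (hypothetical data, not the project's records):

  import numpy as np

  # Hypothetical item-level scores from two independent graders (scale 1-3).
  rater_a = np.array([2, 3, 1, 2, 3, 2, 1, 3])
  rater_b = np.array([2, 2, 1, 2, 3, 3, 1, 3])

  agreement = (rater_a == rater_b).mean()   # fraction of items scored identically
  print(f"Simple agreement = {agreement:.3f}")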

  13. Results • As previously noted, each category was scored on a scale of 1 to 3 (3 being the highest quality). • The following tables display the university averages for the departments that participated; a brief computation sketch follows the tables. • Each category scored has a mean and standard deviation.

  14. Texas A&M University Overall Writing Scores (Scale of 1-3) 2009-2010

  15. Texas A&M University Overall Writing Scores (Scale of 1-3) 2008-2009
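  A brief sketch of how per-category means and standard deviations like those in the tables above could be tabulated (the category names and scores are placeholders, not the project's rubric or results):

  import pandas as pd

  # Placeholder final scores per paper and rubric category (scale 1-3).
  scores = pd.DataFrame({
      "category": ["Criterion 1", "Criterion 2", "Criterion 3", "Criterion 4"] * 3,
      "score":    [3, 2, 2, 3, 2, 2, 3, 2, 3, 3, 2, 3],
  })

  summary = scores.groupby("category")["score"].agg(["mean", "std"]).round(2)
  print(summary)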

  16. Strengths of the Project • Faculty engagement and participation • Cross-disciplinary approach • Helps faculty define student writing quality • Helps faculty calibrate expectations for the quality of student writing

  17. Challenges for the Project • Getting the papers! • Representative sample • Calibrating faculty • Long day of scoring—stay nourished!

  18. Discussion and Future of the Project • As a component of Vision 2020, the Academic Master Plan highlights effective communication as a necessary student ability. • Participating departments can take the information given from this project to better understand the performance of their students. • Steps have also been taken to assess areas of potential improvement and enhancement of this project. • Consider VALUE Rubrics from AAC&U

  19. February 20-22, 2011, College Station, Texas • http://assessment.tamu.edu/conference • Plenary Speakers: Dr. Carol Geary Schneider and Dr. Peter Ewell • Call for Proposals now open!

  20. One Minute Evaluation • What was the most valuable thing you learned? • What is one question that you still have? • What do you think is the next step that your program needs to take?
