
Different Perspectives on the Assessment Mandate: The Results of a Survey


Presentation Transcript


  1. Different Perspectives on the Assessment Mandate: The Results of a Survey Neil Pagano Associate Dean Columbia College Chicago npagano@colum.edu 2007 NASPA Assessment & Retention Conference

  2. 139th Belmont Stakes

  3. Question Posted to Assess Listserv (2004)
  “Is there any evidence that a higher education assessment/evaluation of student learning program has indeed produced positive (or negative) change in the quantity or quality of what students actually learn? There seems to be a lot of anecdotal information about how assessment/evaluation programs were created and implemented, but not any actual empirical support. Considering the logistical and personnel-related ramifications of such an undertaking, any success in getting a program off the ground and moving is certainly noteworthy and to be commended. However, I am trying to prepare a report on the actual effectiveness of assessment/evaluation programs. Is there a program that is doing what it is purported to do: improving student learning? If so, is there any (weak, so-so, or solid) empirical evidence to this effect? Any good studies?”

  4. Two Responses:
  • “Assessment done well is effective and assessment done poorly is not effective.”
  • Three possible explanations:
    • Assessment is still relatively new
    • Assessment is “decentralized or course-embedded”
    • “Faculty already do a darned good job teaching, and their assessment results simply document that.”

  5. Prior Research
  • Peterson, Einarson, Augustine, & Vaughan (1999), Institutional Support for Student Assessment: Methodology and Results of a National Survey
  • Survey - ISSA: Purposes, Methods, Structures, & Impact
  • Preparing for self-study or accreditation (1st in importance)
  • Improving the achievement of undergraduate students (2nd in importance)

  6. Conclusions…
  “Institutions do not routinely use student assessment data in internal decision-making or monitor its impact on important areas of institutional and student performance. Given the extensive claims made for the value of students’ assessment and the substantial human and financial resources invested in student assessment activities, institutions need to give greater priority to examining how student assessment data is used, and how it impacts the performance of individual students and the institution itself.”

  7. Follow-Up Study
  • Peterson, Vaughan, & Perorazio (2002), Student Assessment in Higher Education: A Comparative Study of Seven Institutions
  • “Exemplary” institutions for “benchmarking”
  • Ten domains, including Initiating Conditions, Institutional Approach, Culture, and Utilization
  • Only one institution (Wake Forest University) used assessment results “extensively”

  8. Research Questions
  • What are the reasons for undertaking assessment?
  • What assessment methods are used and which are valued?
  • How effective have these assessment efforts been?
  • What variables (institution type, control, respondent position) affect responses to Questions 1, 2, and 3?

  9. Survey: Four Sections
  • Purpose
  • Methods Used
  • Methods Valued
  • Effect of Assessment Efforts

  10. Survey Distribution and Responses
  • Two Listservs: Assess (University of Kentucky) and Communities of Practice
  • Snowball Sampling for Further Coverage
  • 331 Total Completed Responses

  11. Limitations
  • Sampling Method Not Random
    • Purposive method likely to recruit the “choir”
  • Mixture of Respondents
    • Some from the same institution
  • Basic Statistical Analysis – ANOVA, t-Tests, and Chi-Square (a minimal illustration follows below)
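
To make the analysis plan above concrete, here is a minimal sketch of a one-way ANOVA and an independent-samples t-test, written in Python with SciPy (the presentation does not name a tool, so the choice is an assumption). The group names and Likert-style ratings are invented for illustration; they are not the survey's data.

from scipy import stats

# Hypothetical 1-5 agreement ratings with an item such as "our assessment
# efforts have been effective", grouped by respondent position.
# All numbers below are invented for illustration only.
faculty = [3, 4, 2, 3, 3, 4, 2, 3]
assessment_leaders = [4, 5, 4, 4, 3, 5, 4, 4]
administrators = [4, 4, 3, 5, 4, 4, 3, 4]

# One-way ANOVA: do mean ratings differ across the three positions?
f_stat, p_anova = stats.f_oneway(faculty, assessment_leaders, administrators)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Independent-samples t-test: faculty vs. administrators only.
t_stat, p_t = stats.ttest_ind(faculty, administrators)
print(f"t-test: t = {t_stat:.2f}, p = {p_t:.3f}")

A post-hoc comparison would normally follow a significant omnibus ANOVA; that step is omitted here for brevity.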

  12. Survey Respondents by Institutional Type and Position

  13. Purposes of Assessment by Institution Type - ANOVA

  14. Purposes of Assessment by Position - ANOVA

  15. Purposes: A Comparison to the ISSA

  16. Assessment Methods Used
  Scale: 1 = not used; 2 = used in some areas; 3 = used in most areas; 4 = used in all areas

  17. Assessment Methods Used by Institution - ANOVA

  18. Assessment Methods Used by Institution - ANOVA (cont.)

  19. Assessment Methods Valued

  20. Methods Used vs. Methods Valued

  21. Methods Valued by Institution Type - Chi-Square
  • Associate Institutions placed less value on:
    • Alumni Surveys (p = .008)
    • Capstone Courses (p = .002)
  • Baccalaureate Institutions placed less value on Employer Surveys (p = .008)

  22. Methods Valued by Position - Chi-Square
  • Faculty placed relatively less value on 9 of the 12 methods. Statistically significant differences in:
    • Departmental Exams (p = .006)
    • Student Papers (p = .001)
    • Student Portfolios (p = .016)
    • Capstone Courses (p < .001)
    • Commercial Exams (p = .046)
    • Student Interviews/Focus Groups (p = .023)
  • Faculty placed more value on Anecdotal Evidence (p = .132, not statistically significant)
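
As a companion to the comparisons above, here is a minimal sketch of a chi-square test of independence, again in Python with SciPy. The 2x2 table of respondents who do or do not value a given method, split into faculty versus other positions, uses invented counts; it is not the survey's data.

from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = position group,
# columns = (values the method, does not value it). Counts are invented.
observed = [
    [45, 80],  # faculty
    [60, 30],  # assessment leaders and administrators
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

With counts arranged this way, the test asks whether valuing the method is independent of position; a small p-value, like those reported on the slide, would indicate it is not.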

  23. Perspectives on the Effects of the Assessment Mandate: 4 Survey Items
  1. Our institutional assessment efforts have been effective.
  2. Our institutional assessment efforts have identified areas where we need to make curricular/programmatic changes.
  3. We have made curricular/programmatic changes as a result of our assessment.
  4. It is important that every institution have an assessment plan.
  • No differences by Institutional Type or Control

  24. Perspectives on the Effect of Assessment by Position - ANOVA

  25. Closing Comments
  • Institution Type Matters
    • Different institutions have different priorities and purposes
  • Position Matters
    • Faculty, Assessment Leaders, and Administrators differ on purposes, methods valued, and the ultimate effect of the mandate

  26. Closing Comments
  • Accreditation is an Important Lever
    • Effects of revised expectations
  • Need to Know More
    • US Higher Ed Post-Spellings Commission
    • What is “Assessment”?

  27. Different Perspectives on the Assessment Mandate: The Results of a Survey Neil Pagano Associate Dean Columbia College Chicago npagano@colum.edu 2007 NASPA Assessment & Retention Conference
