Comparison of Student Evaluations of Teaching Between Online and Face-to-Face Courses Dr. Hank Kelly hkelly@ohiochristian.edu (740) 420-5924
Research Question What are the differences in student evaluations of teaching (SET) between online and face-to-face (F2F) courses as evidenced by a thematic analysis of open-ended questions?
Purpose Determine if a SET bias exists (i.e., delivery method affects evaluation independent of instructional effectiveness)
Need • SET need to be reliable, valid, and accurate because they are frequently used for high-stakes summative evaluation decisions about faculty: promotion, tenure, merit pay
Need • Literature indicates possible SET bias against online instruction compared to F2F instruction • Lower SET for online compared to F2F instruction but no differences in learning • No difference in SET between online and F2F instruction but higher learning in online class • Lower SET for online education (but no assessment of learning)
Need • If such a bias exists, SET of online courses cannot be equitably compared with those of F2F courses • Online education is increasing • Forces driving higher education to consider quality and satisfaction • Increased competition • Growing dissatisfaction with higher education in America and demand for accountability
Setting • Comprehensive university; offers bachelor’s, master’s, doctorate, and first professional degrees; only 7% undergraduate enrollment • Located in the southern US • Enrollment of 3,443 • Majority of students are women (56%), enrolled part time (58%), and aged 25 and above (88%)
Methodology Content analysis, using qualitative research techniques, of responses to open-ended questions in SET of online and F2F courses • Thematic analysis according to two sets of categories: appraisal and topical (Braskamp et al., 1981; Rovai et al., 2006) • Evaluation of frequency counts • Detailed description
Appraisal Categories • Praise • Constructive criticism • Negative criticism
Topical Themes & Categories • Instructor theme: Attitude, Rapport, Person, Knowledgeable, Stimulation, Ability, Preparedness, Helpfulness, Teacher overall • Course theme: Organization, Content, Materials, Workload, Lecture, Discussion, Assignments, Overall course • Grading theme: Grading fairness, Grading timeliness, Feedback quantity/quality
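The coding scheme above feeds the frequency counts mentioned under Methodology. Below is a minimal Python sketch of that tally step, assuming coded segments are available as simple records; the Segment class, its field names, and the tally helper are illustrative assumptions, not the study's actual coding software.

```python
# Coding scheme from the slide above; the Segment record and tally helper
# are illustrative assumptions for how frequency counts could be produced.
from collections import Counter
from dataclasses import dataclass

TOPICAL_SCHEME = {
    "Instructor": ["Attitude", "Rapport", "Person", "Knowledgeable", "Stimulation",
                   "Ability", "Preparedness", "Helpfulness", "Teacher overall"],
    "Course": ["Organization", "Content", "Materials", "Workload", "Lecture",
               "Discussion", "Assignments", "Overall course"],
    "Grading": ["Grading fairness", "Grading timeliness", "Feedback quantity/quality"],
}

APPRAISAL_CATEGORIES = ["Praise", "Constructive criticism", "Negative criticism"]

@dataclass
class Segment:
    """One coded text segment from an open-ended SET response."""
    delivery: str   # "F2F" or "Online"
    appraisal: str  # one of APPRAISAL_CATEGORIES
    theme: str      # a key of TOPICAL_SCHEME
    category: str   # a member of TOPICAL_SCHEME[theme]

def tally(segments, attr):
    """Frequency counts of coded segments by (delivery method, coding attribute)."""
    return Counter((s.delivery, getattr(s, attr)) for s in segments)

# Example (hypothetical data):
# theme_counts = tally(coded_segments, "theme")  # feeds the theme-by-delivery table
```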
Population, Sample, & Participation • Courses taught during academic year 2004-05 • 41 instructors throughout the university, each teaching one online and one F2F section of the same course • 866 students enrolled in the 82 class sections: 43% F2F, 57% online • 534 completed SET: 62% overall response rate (60% F2F, 63% online) • 1,742 distinct text segments: 43% F2F, 57% online
Findings - Thematic Analysis • Praise: 52% of text segments • Constructive criticism: 29% • Negative criticism: 19% • No significant difference between appraisal categories by delivery method, Pearson χ²(2, N=1,742) = 1.49, p = .47
Findings - Thematic Analysis • Course theme: 80% of text segments • Instructor theme: 15% • Grading theme: 5% • Significant difference between topical themes by delivery method, χ²(2, N=1,742) = 11.06, p < .01 • Significant difference between topical categories by delivery method, χ²(19, N=1,742) = 38.31, p < .01
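For readers who want to run this kind of test, the sketch below performs a Pearson chi-square test of independence with scipy.stats.chi2_contingency. The cell counts are placeholders chosen only to be consistent with the reported marginals (52/29/19% appraisal split, 43/57% delivery split); they are not the study's actual cell counts, so the printed statistic will not match the reported 1.49.

```python
# Pearson chi-square test of independence, as used for the appraisal, theme,
# and category comparisons. Counts are placeholders consistent with the
# reported marginal percentages, NOT the study's actual cells.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: delivery method (F2F, Online); columns: Praise, Constructive, Negative.
appraisal_table = np.array([
    [390, 215, 144],   # F2F (placeholder counts)
    [516, 290, 187],   # Online (placeholder counts)
])

chi2, p, dof, _expected = chi2_contingency(appraisal_table)
print(f"Pearson chi-square({dof}, N={appraisal_table.sum()}) = {chi2:.2f}, p = {p:.2f}")

# The same call handles the 2 x 3 theme table and the 2 x 20 category table;
# only the shape of the observed-count matrix changes.
```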
Findings - Thematic Analysis • Greater proportion of text segments (F2F): • In instructor theme • Containing praise in instructor theme • In person and knowledgeable categories • Containing praise in person category • Greater proportion of text segments (Online): • In course theme • Containing praise in course theme • In materials category • Containing praise in organization and materials categories
Findings - % of Text Segments by Theme and Delivery Method [Chart comparing the percentage of text segments in each topical theme by delivery method; F = F2F, O = Online]
Findings - % of Text Segments by Category and Delivery Method [Chart comparing the percentage of text segments in each topical category by delivery method; F = F2F, O = Online]
Findings - MANOVA Multivariate analysis of variance of responses to closed-ended questions • For comparison: quantitative analysis is common in the literature • Dependent variables (for each class): • Mean overall evaluation of instructor • Mean overall evaluation of course • Independent variable: delivery method (2 levels: online and F2F)
Findings - MANOVA • No significant difference between overall evaluations of instructor and course by delivery method, Pillai’s Trace = .03, F(2, 79) = 1.15, p = .32 • MANOVA appropriate because of intercorrelation between dependent variables (r = .80)
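The following is a minimal sketch of this kind of MANOVA, assuming the class-level means sit in a pandas DataFrame. The column and file names (instructor_eval, course_eval, delivery, class_level_set_means.csv) are illustrative; MANOVA.from_formula and mv_test() are the real statsmodels interface, and the printed table includes Pillai's trace alongside the other multivariate test statistics.

```python
# MANOVA of the two class-level evaluation means on delivery method.
# Column and file names are illustrative assumptions.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

def run_manova(df: pd.DataFrame):
    """df: one row per class section, with mean overall instructor and course
    evaluations plus the delivery method ('F2F' or 'Online')."""
    mv = MANOVA.from_formula("instructor_eval + course_eval ~ delivery", data=df)
    result = mv.mv_test()
    print(result)          # includes Pillai's trace, Wilks' lambda, F, p
    return result

# Example (hypothetical file):
# run_manova(pd.read_csv("class_level_set_means.csv"))
```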
Study Limitations • Self-reported data, subject to reporting bias • Subject to nonresponse bias • No external validation of instructional effectiveness such as student learning; only perceived learning reported • Primarily graduate students • Only online distance education students • Only one institution studied
Implications for Future Research • Interview participants • Additional classes, instructors, years, institutions • Differences in frequency of responses by factors (e.g., adjunct vs. full-time faculty, male vs. female student, student age, academic discipline) • Online learning process/techniques to create optimum online learning experience
Implications for Practice • Online students value organization: course design should provide a clear guide through learning activities (Palloff & Pratt, 2001) • Online students value instructional materials: select them carefully; online instructors are no longer considered the “repository of knowledge” (Sherry & Wilson, 1997)
Implications for Practice • Differences between delivery methods imply faculty role changes (Coppola et al., 2002), changes to instructional design and delivery, and a need for faculty training and instructional support (Howell, Saba, Lindsay, & Williams, 2004) • Establish clear grading criteria/standards (rubrics); apply them equally to everyone (Walvoord & Anderson, 1998)
Implications for Practice • SET is reliable, valid, and useful for improving instructional effectiveness • Responses to open-ended questions address wider range of instructional dimensions than closed-ended questions • Open-ended questions should try to elicit responses in all three topical themes: instructor, course, and grading
More Details Available • Kelly, H. F. (2007). A comparison of student evaluations of teaching between online and face-to-face courses. ProQuest Digital Dissertations (AAT 3213072). • Kelly, H. F., Ponton, M. K., & Rovai, A. P. (2007). A comparison of student evaluations of teaching between online and face-to-face courses. The Internet and Higher Education, 10(2), 89-101.