What determines student satisfaction with university subjects? A choice-based approach Twan Huybers, Jordan Louviere and Towhidul Islam Seminar, Institute for Choice, UniSA, North Sydney, 23 June 2014
Overview
1. Introduction (student perceptions of teaching)
2. Study details (study design, data)
3. Findings (ratings instrument, choice experiment)
4. Conclusion
1. Introduction
Higher education practice: widespread use of student perceptions of subjects/teaching
Scholarly research, (contentious) issues: formative vs. summative use; effects of grades, class size, etc.; teaching "effectiveness"/"quality"; value of student opinions
Student satisfaction
• Student as a customer?
• Satisfaction vs. effectiveness
• Overall summary item in evaluation instruments (incl. the CEQ)
Contribution of the study is methodological:
• Use of a discrete choice experiment (DCE) vs. the ratings method (response styles)
• Use of a DCE in student evaluation:
• NOT an alternative to classroom evaluation exercises (although best-worst scaling (BWS) Case 1 could be)
• Instead, the DCE as a complementary approach
2. Study details
Evaluation items used in the study:
• Wording of 10 items derived from descriptions in 14 Australian university student evaluation instruments
• Covers the subject and the teaching of the subject
• Possible confounds in descriptions: teaching and learning, methods and activities
• Reflects evaluation practice
• Same items for the ratings instrument and the DCE
Two survey parts:
• Evaluation instrument (rating scales) ("instrument"); and
• Evaluation experiment (choices in a DCE) ("experiment")
We controlled for the order of appearance of the instrument and the experiment, and for respondent focus: 4 versions of the survey in the study design
PureProfile panel: 320 respondents randomly assigned to the 4 study versions, December 2010
Participant screening:
• student at an Australian-based university during the previous semester
• completed at least two university subjects (classes) during that semester (to allow comparison between at least two subjects in the instrument)
Instrument:
• Names of all subjects taken in the previous semester
• Most satisfactory ("best") and least satisfactory ("worst") subject nominated
• Each attribute of the "best" and "worst" subjects rated on a five-point scale from -2 to +2 (strongly disagree, disagree, neither disagree nor agree, agree, strongly agree)
Experiment:
• Pairs of hypothetical subjects described by rating scale categories as attribute levels (range -2 to +2); ratings framed as the respondent's own ratings
• Each participant evaluated 20 pairs:
• 8 pairs: orthogonal main-effects plan (OMEP) from 4^10 (8 blocks from the 64 runs)
• 12 pairs: OMEP from 2^10 (all 12 runs)
• 4-level OMEP: -2, -1, +1 and +2
• 2-level OMEP: -2, +2
• Subject A had constant, "neutral" rating descriptions; Subject B ratings as per the experimental design (see the sketch below)
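As an illustration of how such a paired design could be assembled, the sketch below builds the 12-run, 2-level plan with pyDOE2's pbdesign (a tooling assumption; the slides do not say how the OMEPs were generated) and pairs each run against the constant neutral Subject A:

```python
# Hypothetical sketch of the 12-pair, 2-level design. pyDOE2 is an
# assumption; the authors' actual design-generation software is not stated.
import numpy as np
from pyDOE2 import pbdesign  # Plackett-Burman: 12 runs for 10 two-level factors

N_ITEMS = 10

# 12-run orthogonal main-effects plan in -1/+1 coding, rescaled to the
# extreme rating categories -2/+2 used as attribute levels.
design_2level = pbdesign(N_ITEMS) * 2            # shape (12, 10), entries in {-2, +2}

# Subject A is held constant at the "neutral" rating (0) on every item;
# Subject B takes the levels from the experimental design.
subject_A = np.zeros_like(design_2level)

pairs = [(subject_A[r], design_2level[r]) for r in range(design_2level.shape[0])]
print(f"{len(pairs)} choice pairs; first pair:\nA: {pairs[0][0]}\nB: {pairs[0][1]}")
```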
3. Findings
ANOVA: equal means for the three study versions, so data pooled
Binary logistic regression: best (1) and worst (0) subjects as the DV, item ratings as the IVs (see the sketch below)
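A minimal sketch of this regression, assuming simulated placeholder data and statsmodels (not the authors' actual code or dataset):

```python
# Best (1) vs. worst (0) subject as DV, the ten item ratings as IVs.
# Data here are simulated stand-ins; variable names are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_respondents, n_items = 308, 10

# Stack each respondent's "best" and "worst" subject ratings (-2..+2).
ratings = rng.integers(-2, 3, size=(2 * n_respondents, n_items)).astype(float)
best = np.repeat([1, 0], n_respondents)          # 1 = best, 0 = worst subject

X = sm.add_constant(ratings)
fit = sm.Logit(best, X).fit(disp=False)
print(fit.params)                                # one coefficient per evaluation item
```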
Instrument, best vs. worst subject:
• Four items discriminate; one item has a counter-intuitive sign
• High correlation between ratings (for best, worst and best-minus-worst)
Experiment:
• Responses from 12 individuals deleted (always chose A or always chose B)
• Mean choice proportion for each choice option in each pair, for each of the three study versions (for the common set of 12 pairs): high correlation with the sample proportions (≈0.94) → study versions pooled (see the sketch below)
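The pooling check can be sketched as below; the data are simulated stand-ins, and only the mechanics (per-version choice proportions correlated with the pooled sample proportions) mirror the slides:

```python
# For the 12 common pairs, compare each study version's mean choice
# proportions against the pooled sample proportions. Simulated data;
# the real study reported correlations around 0.94.
import numpy as np

rng = np.random.default_rng(1)
n_versions, n_pairs = 3, 12
# choices[v, i, p] = 1 if respondent i in version v chose Subject B in pair p
choices = rng.integers(0, 2, size=(n_versions, 100, n_pairs))

pooled = choices.mean(axis=(0, 1))               # sample proportion per pair
for v in range(n_versions):
    version_prop = choices[v].mean(axis=0)
    r = np.corrcoef(version_prop, pooled)[0, 1]
    print(f"version {v}: r = {r:.2f}")           # high r supports pooling
```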
Conditional binary logit estimation:
• First: 4-level linear vs. 4-level non-linear (effects coded); LR test: no statistical difference, so the 2-level and 4-level designs pooled
• Conditional logit for all 20 pairs of 228 respondents (see the sketch below)
• Model fit and prediction accuracy (in-sample, out-of-sample): comparing, for each choice option in each pair, the mean choice proportion with the predicted choice probability
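Because Subject A is held at a fixed neutral profile, the pair-wise conditional logit collapses to a binary logit on the attribute differences B minus A. The sketch below illustrates this on simulated choices; names and parameter values are illustrative only:

```python
# Conditional (pair) logit as a binary logit on attribute differences.
# Simulated data; 'true_beta' is an arbitrary illustrative vector.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_resp, n_pairs, n_items = 228, 20, 10

levels_B = rng.choice([-2, -1, 1, 2], size=(n_pairs, n_items)).astype(float)
levels_A = np.zeros_like(levels_B)                # neutral Subject A
diff = np.tile(levels_B - levels_A, (n_resp, 1))  # stack the pairs over respondents

true_beta = rng.uniform(0.1, 0.5, n_items)
utility = diff @ true_beta
chose_B = (rng.random(len(utility)) < 1 / (1 + np.exp(-utility))).astype(int)

fit = sm.Logit(chose_B, diff).fit(disp=False)     # no constant: pure difference model
print(fit.params)                                 # per-item satisfaction weights
```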
All item parameter estimates discriminate with respect to satisfaction
Most important to student satisfaction: 'the subject was challenging and interesting', closely followed by 'the teacher communicated and explained clearly in face-to-face, online, written and other formats'
Some results similar to Denson et al. (2010) (final 'overall satisfaction' item in a SET instrument as the DV, explained by subject ratings), in particular: the "challenging and interesting nature of a subject" (most important) and the "opportunities for active student participation" item (least important)
Instrument vs. experiment (approximation): R² of parameter estimates = 0.18 (see the sketch below)
Overall: the experiment better distinguishes the relative contribution of the items, i.e. better "diagnostic power"
Note: higher number of observations in the experiment
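The comparison itself is just a regression of one coefficient vector on the other; a sketch with placeholder coefficients (the study's actual estimates are not reproduced here):

```python
# Regress the ten experiment coefficients on the ten instrument coefficients
# and read off R^2 (the study reports 0.18). Placeholder values only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
beta_instrument = rng.normal(size=10)            # from the ratings logit
beta_experiment = rng.normal(size=10)            # from the conditional logit

slope, intercept, r, p, se = stats.linregress(beta_instrument, beta_experiment)
print(f"R^2 = {r**2:.2f}")                       # low R^2 -> the two methods diverge
```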
Scale-Adjusted Latent Class Models (SALCM):
• Identify preference heterogeneity (covariates) and variance heterogeneity simultaneously
• BIC used for model selection
SALCM for the 12 common pairs (2-level):
• One preference class
• Two scale classes: male students more variable in their choices than females (see the scale check sketched below)
SALCM for the master set of 64 pairs (4-level): similar results
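The SALCM was estimated with latent-class software, which is beyond a short sketch. As a rough stand-in, the code below runs a Swait-Louviere-style relative scale check: fit the difference logit separately by gender and regress one coefficient vector on the other through the origin; a slope below 1 for the male vector is consistent with noisier (lower-scale) male choices. All data are simulated.

```python
# NOT the SALCM itself: a quick relative-scale diagnostic on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, k = 1000, 10
X = rng.choice([-2.0, 2.0], size=(n, k))
beta = rng.uniform(0.2, 0.6, k)
male = rng.integers(0, 2, n).astype(bool)
scale = np.where(male, 0.6, 1.0)                 # males noisier -> smaller scale
y = (rng.random(n) < 1 / (1 + np.exp(-scale * (X @ beta)))).astype(int)

b_f = sm.Logit(y[~male], X[~male]).fit(disp=False).params
b_m = sm.Logit(y[male], X[male]).fit(disp=False).params

# Regression through the origin of male betas on female betas.
rel_scale = np.linalg.lstsq(b_f[:, None], b_m, rcond=None)[0][0]
print(f"relative male/female scale ≈ {rel_scale:.2f}")  # < 1: males more variable
```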
[Figure: SALCM, 12 common pairs — choice proportions vs. choice probabilities]
[Figure: SALCM, master set of pairs — choice proportions vs. choice probabilities]
Individual-level model using WLS:
• Empirical distribution of individual-level item parameter estimates
• Using the 12 pairs from the common design
• Small number of observations per respondent for estimation
• Quite a few negative parameter estimates (see the sketch below)
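A rough per-respondent sketch (ordinary least squares on each respondent's 12 choices rather than the paper's WLS weighting, and with simulated data) shows why the estimates are noisy: 12 observations against 10 parameters leaves little room.

```python
# Per-respondent linear probability fit of choices on attribute differences.
# Simplified stand-in for the paper's WLS; simulated data throughout.
import numpy as np

rng = np.random.default_rng(5)
n_resp, n_pairs, n_items = 228, 12, 10
diff = rng.choice([-2.0, 2.0], size=(n_pairs, n_items))      # B minus neutral A
choices = rng.integers(0, 2, size=(n_resp, n_pairs)).astype(float)

individual_betas = np.array([
    np.linalg.lstsq(diff, choices[i], rcond=None)[0] for i in range(n_resp)
])
print(individual_betas.shape)            # (228, 10): one vector per respondent
print((individual_betas < 0).mean())     # share of negative estimates
```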
Close correspondence between the results of the conditional logit, the SALCM on the 64 pairs (4-level), the WLS and, slightly less so, the SALCM on the 12 common pairs (2-level)
4. Conclusion
Ratings instrument and choice experiment used to establish the individual contributions of subject aspects to satisfaction
• The experiment provided greater discriminatory power
• 'Challenging and interesting' and 'teacher communication' were major drivers of satisfaction
• 'Feedback' and 'student participation' were among the least important ones
Methodological contribution to the higher education literature:
• Novel application of the DCE to student evaluation
• Combine quantitative results with qualitative feedback
Limitations/further research:
• Relatively small sample size
• Potential confounding in items
• Application at the university program level