
Statistical Analysis of Peer Opinions in US Engineering Graduate Programs Rankings

This research examines the impact of peer assessments in US engineering graduate program rankings, aiming to guide universities in improving peer impressions. Statistical modeling is used to explore the determinants of peer rankings, drawing on data from the 2018 USNews evaluations. The methodological approach combines an uncensored regression with random parameters that capture unobserved heterogeneity, and the estimation results identify the variables that affect peer assessment scores.


Presentation Transcript


  1. Statistical Assessment of Peer Opinions in Higher Education Rankings: The Case of U.S. Engineering Graduate Programs • Journal of Applied Research in Higher Education • Amir Ghaisi, Grigorios Fountas, Panos Anastasopoulos, Fred Mannering

  2. Importance of University Rankings • Students seek admission to highly ranked universities • Universities use rankings to attract the best students and faculty, instill alumni pride, improve fundraising, etc. • University funding from governmental and private entities can be influenced by rankings • Because of the above, universities develop policies to support and improve their rankings

  3. USNews rankings • Easily the most widely followed university rankings in the U.S. • Rankings are based on quantitative data (research expenditures, number of faculty, test scores of students, etc.), peer and recruiter opinions, and predetermined weightings • In USNews rankings of engineering graduate programs, peer opinions account for 25% of the total ranking score

  4. Peer assessments • Individuals from universities (typically engineering deans) are asked to rate the quality of specific programs on a scale from 1 (marginal) to 5 (distinguished) • Peer scores are a function of experiences with faculty and graduates of the schools being evaluated, the school’s overall reputation, etc. • Peer impressions may also be influenced by exposure to past rankings and by the factual information used to support those past rankings

  5. Objective of this Research: • Provide insight into the determinants of peer rankings in higher education by developing a statistical model of the average peer assessment scores of U.S. colleges of engineering from the 2018 engineering graduate program evaluations provided in USNews • Findings will help guide university policies that may be used to improve peer impressions

  6. Methodological Approach • Engineering graduate programs are peer-rated on a scale of 1 (marginal) to 5 (distinguished), and USNews provides the resulting average peer assessment scores in tenths (such as 2.6, 2.7, etc.) • Thus the peer score is reported as an average that is a continuous variable bounded by 1 and 5 • This suggests a lower- and upper-censored Tobit regression would be appropriate, but statistical estimations indicated that an uncensored regression was sufficient since few observations lie near the censoring boundaries (a quick check of this kind is sketched below)
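
A minimal sketch of such a boundary check, assuming hypothetical score values rather than the actual USNews data:

```python
# Count observations at or near the censoring limits of 1 and 5; if few sit
# near either bound, censoring is unlikely to bias an uncensored regression.
# The scores below are hypothetical placeholders.
import numpy as np

scores = np.array([2.6, 3.1, 4.9, 1.8, 3.4, 2.2, 4.1, 2.9])
near_lower = (scores <= 1.1).sum()
near_upper = (scores >= 4.9).sum()
print(f"{near_lower} near lower bound, {near_upper} near upper bound "
      f"out of {scores.size} programs")
```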

  7. Methodological Approach (cont.) Using a standard uncensored regression: yn = βXn + εn • where yn is the dependent variable (average graduate program peer assessment score for university n), • β is a vector of estimable parameters, • Xn is a vector of explanatory variables for university n, • εn is a normally and independently distributed error term with zero mean and constant variance σ2
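
A minimal sketch of fitting this uncensored regression with statsmodels; the column names and the handful of data rows are hypothetical stand-ins for the USNews variables:

```python
# Ordinary least squares for y_n = beta * X_n + e_n on placeholder data.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "avg_peer_score": [2.6, 3.4, 4.1, 2.2, 3.0],  # y_n, bounded in [1, 5]
    "math_gre":       [160, 164, 166, 158, 162],   # hypothetical covariates
    "pct_accepted":   [45, 30, 22, 55, 38],
})
X = sm.add_constant(df[["math_gre", "pct_accepted"]])  # adds the intercept
ols = sm.OLS(df["avg_peer_score"], X).fit()
print(ols.params)  # estimated beta vector
```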

  8. Unobserved Heterogeneity • To account for heterogeneity (unobserved factors that may vary across universities), we allow for the possibility that each university may have its own parameter for one or more of the explanatory variables • Estimable parameters are thus written as: βn = β + φn • where β is the mean of the parameter estimate and φn is a randomly distributed term (for example, a normally distributed term with mean zero and variance σ2)

  9. Unobserved Heterogeneity (cont.) • We also allow for the possibility that the mean and variance of the parameter are functions of explanatory variables (Mannering, 2018) • Models are estimated by simulated maximum likelihood, with 1,000 Halton draws used for numerical integration • Observation-specific mean parameters are estimated using the simulated Bayesian approach proposed by Greene (2004)
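
To make the estimation machinery concrete, here is a minimal sketch of simulated maximum likelihood for a one-variable random-parameters regression using Halton draws; the synthetic data, starting values, and optimizer choice are illustrative assumptions, not the paper's actual setup:

```python
# Simulated maximum likelihood for y_n = beta_n * x_n + e_n, where
# beta_n = beta + sd * phi_n and phi_n is standard normal. All data below
# are synthetic; the paper's model has many covariates, not one.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, qmc

rng = np.random.default_rng(0)
n = 139                                      # universities, as in the paper
x = rng.normal(size=n)
beta_n = 0.5 + 0.2 * rng.normal(size=n)      # true random parameter
y = beta_n * x + 0.1 * rng.normal(size=n)    # true sigma = 0.1

# 1,000 Halton draws mapped to standard normals for the random parameter
draws = norm.ppf(qmc.Halton(d=1, seed=0).random(1000)).ravel()

def neg_sim_loglik(theta):
    beta, log_sd, log_sigma = theta          # log scale keeps sd, sigma > 0
    sd, sigma = np.exp(log_sd), np.exp(log_sigma)
    # Residual for every (observation, draw) pair, then average the normal
    # density over draws to simulate each observation's likelihood
    resid = y[:, None] - (beta + sd * draws[None, :]) * x[:, None]
    sim_lik = norm.pdf(resid, scale=sigma).mean(axis=1)
    return -np.log(sim_lik + 1e-300).sum()

start = np.array([0.0, np.log(0.1), np.log(0.1)])
res = minimize(neg_sim_loglik, start, method="Nelder-Mead")
print("beta:", res.x[0], "sd, sigma:", np.exp(res.x[1:]))
```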

  10. Data • Peer assessment scores are gathered from USNews’ 2018 rating of U.S. engineering graduate programs • USNews-related data provided with the 2018 graduate program rankings, including: - Engineering graduate student enrollment; - Total number of engineering faculty; - Number of faculty in the National Academy of Engineering; - Average math Graduate Record Exam score of students; - Percent of engineering applications accepted; - Number of doctoral students graduated in the past year; - Research expenditures per faculty member

  11. Data (cont.) • Additional data: - University membership in the American Association of Universities (AAU); - Whether the university is public or private; - Number of National Merit Scholars admitted; - Incoming students’ average SAT scores; - Number of post-doctoral appointees; - Citation information for engineering faculty and for university faculty overall, from Google Scholar: total citations, number of documents, h-index, number of papers cited 10 times or more, etc.

  12. Estimation Results • 139 universities • 13 statistically significant variables (10 with parameters fixed across observations and 3 with parameters that vary across observations) • The random parameters linear regression produced an R-squared value of 0.993 and an adjusted R-squared value of 0.992 (which accounts for the number of estimated parameters in the model)
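
For reference, the adjusted R-squared penalizes the plain R-squared for the number of estimated parameters $p$ (not stated on this slide), via the standard formula:

$$\bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}, \qquad n = 139.$$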

  13. Observed vs. Model-Predicted

  14. Statistically significant variables and their effect on mean peer assessment scores: College-related attributes

  15. Graduate Record Examination (GRE) scores of incoming graduate students • Higher scores increase the average peer evaluation • An 8-point increase in the math GRE scores of graduate students would result in roughly a 0.1-point increase in a college of engineering’s average peer assessment
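
As a back-calculation (mine, not stated on the slide), the implied marginal effect is $0.1 / 8 = 0.0125$ peer-score points per math GRE point; the same arithmetic applies to the per-unit effects on the next few slides (e.g., $0.1 / 34 \approx 0.0029$ per additional doctoral graduate).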

  16. Graduate Admissions • Universities that admit more than 31% of their graduate applicants tend to have average peer assessment scores 0.104 points lower than universities that admit 31% or less

  17. Number of doctoral students graduated • The parameter estimate for this variable suggests that graduating 34 more doctoral students would result in a 0.1-point increase in the average peer assessment score

  18. Number of faculty • The total number of faculty was also found to positively influence average peer assessment scores, with a larger faculty more likely to result in a higher score • For public universities, the model’s parameter estimate suggests that nearly 60 faculty would need to be added for a 0.1-point increase in the average peer assessment score • For private universities, about 35 faculty would need to be added for the same 0.1-point increase

  19. Faculty-size threshold • The effect of having fewer than 200 faculty varies across universities • Having fewer than 200 faculty has a negative effect for roughly 68% of universities and a positive effect for 32% • This dichotomy likely reflects the fact that the faculty effect is not just about size, but about quality as well

  20. Graduate student enrollment per engineering faculty member • This ratio was found to have a negative effect on average peer assessment scores • The variable produced a statistically significant random parameter • While negative for all universities, the variation in values across universities suggests that unobserved factors relating to student and faculty quality affect the influence this ratio has on average peer assessment


  22. National Academy Members • The percentage of NAE members positively influences peer scores, but the effect is highly variable across universities • The effect ranges from a 0.1-point increase in the average peer assessment score for a 1.5% increase in NAE membership, down to virtually no effect • The NAE effect is likely highly variable because NAE faculty are at different stages of their careers and because the academic credentials of NAE members vary

  23. Faculty h-index • The Google Scholar h-index of the 10th most cited engineering faculty member has a positive effect on average peer assessment scores • By definition, an author’s h-index value indicates that the author has published h papers, each of which has been cited at least h times • While many colleges of engineering in the U.S. have been slow to adopt citation information, such information has a statistically significant effect on average peer assessment scores
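
Since the h-index definition above is algorithmic, here is a minimal sketch of its computation; the citation counts are made-up illustration values:

```python
# h-index as defined above: the largest h such that the author has h papers
# each cited at least h times. Citation counts below are hypothetical.
def h_index(citations):
    ranked = sorted(citations, reverse=True)   # most-cited paper first
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([50, 18, 7, 5, 4, 2]))  # -> 4 (four papers with >= 4 citations)
```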

  24. Statistically significant variables and their effect on mean peer assessment scores: Overall university-related attributes

  25. American Association of Universities (AAU) membership • AAU membership improves a university’s peer assessment score by 0.138, holding all else constant • AAU membership requires that the university achieve a wide variety and high level of research and scholarly output

  26. Highly-cited university faculty • Universities with 5 or more faculty, university-wide, who each have more than 50,000 Google Scholar citations enjoy higher peer rankings (average peer assessment scores 0.159 higher) than universities without as many such faculty • This reflects both highly-cited faculty and the university’s position in highly citable fields

  27. Number of post-doctoral researchers • The number of postdoctoral researchers employed annually at the university as a whole (averaged over the past 15 years) positively influences the average peer assessment score of the college of engineering • The number of post-doctoral appointees is a factor in AAU membership and in other measures of university reputation

  28. SAT scores of incoming students • The past 10-year average of math plus verbal (1600 maximum) Scholastic Aptitude Test (SAT) scores of students entering the university positively influences the average peer assessment scores of engineering programs • A 150-point increase in the average SAT score of all university students results in a 0.1-point increase in the average peer assessment score of the university’s college of engineering

  29. Summary and Conclusions • This paper explored the factors influencing USNews average peer assessment scores of U.S. engineering programs by estimating a random parameters linear regression on data extracted from a number of sources • Estimation results show that both college- and university-level factors influence peer assessment scores, and that the influence of some of these factors varies across universities

  30. Summary and Conclusions (cont.) • Research funding itself was found to be statistically insignificant (although it is needed to generate many of the variables found to be significant) • This is a wake-up call to the many universities that blindly pursue research dollars without carefully considering the scholarly productivity those dollars can potentially produce

  31. Summary and Conclusions (cont.) • U.S. engineering colleges have tended to use promotional brochures, website enhancements, and other means to influence how peers assess them • But our statistical analysis suggests that peer assessments are rooted much more deeply in the measurable accomplishments of the college’s faculty, the quality of its students, and the quantifiable achievements of the university as a whole
