Graduate Program Assessment: A Pilot Study Using a Common Activity and Combined Rubric
Rana Khan, Ph.D., Director, Biotechnology Program
Datta Kaur Khalsa, Ph.D., Director of Assessment, Education Department
Kathryn Klose, Ph.D., Associate Chair & Director, Finance Management and Accounting
Yan Cooksey, Ph.D., Director, Learning Outcomes Assessment, Dean’s Office
Sloan Conference, Oct 11, 2012
UMUC’s LEVELS OF ASSESSMENT
UMUC GRADUATE SCHOOL SLEs
CURRENT APPROACH: 3-3-3 MODEL
• 3 rounds, over 3 years, at 3 stages
• 5 SLEs: COMM, THIN, INFO, TECH, KNOW
ASSESSING THE ASSESSMENT: 3-3-3 Model
COMBINED ACTIVITY/RUBRIC (C2) MODEL
• Common activity
  • Topic for all disciplines – “Challenges facing leaders”
• Combined rubric (see the sketch after this slide)
  • 4 SLEs (all except KNOW)
  • SLE criteria from existing rubrics – eliminate overlap
  • 4-pt scale (Exemplary, Competent, Marginal & Unsatisfactory)
• Training raters and norming
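The combined (C2) rubric pools criteria from the existing SLE rubrics onto a single 4-point scale. Below is a minimal, hypothetical Python sketch of how such a rubric and one rater's scores could be represented. The SLE codes and scale labels come from the slide; the criterion names, function, and sample scores are placeholders, not the program's actual rubric.

```python
# Hypothetical sketch of a combined rubric on the slide's 4-point scale.
from statistics import mean

SCALE = {4: "Exemplary", 3: "Competent", 2: "Marginal", 1: "Unsatisfactory"}

# Combined rubric: one set of criteria per SLE (KNOW is excluded in the C2 model).
# Criterion names below are placeholders only.
COMBINED_RUBRIC = {
    "COMM": ["placeholder criterion A", "placeholder criterion B"],
    "THIN": ["placeholder criterion C"],
    "INFO": ["placeholder criterion D"],
    "TECH": ["placeholder criterion E"],
}

def score_paper(ratings: dict) -> dict:
    """Average the 1-4 criterion scores one rater assigned within each SLE."""
    return {sle: mean(scores) for sle, scores in ratings.items()}

if __name__ == "__main__":
    # One rater's hypothetical scores for one paper.
    ratings = {"COMM": [4, 3], "THIN": [3], "INFO": [2], "TECH": [3]}
    for sle, avg in score_paper(ratings).items():
        print(f"{sle}: {avg:.1f} ({SCALE[round(avg)]})")
```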
3-3-3 VS COMBINED ACTIVITY/RUBRIC (C2) MODEL
DESIGN OF A PILOT STUDY
• Purpose:
  • To simplify the current assessment process
  • To increase the reliability and validity of the process
• Methods:
  • Courses were identified
  • Faculty were chosen as raters
  • Norming sessions were conducted
  • Papers were collected and assessed
  • The Intra-Class Correlation Coefficient (ICC) was calculated
SPRING 2012 PILOT NORMING
PHASE I PILOT RESULTS
• Intra-class Correlation Coefficient (ICC)
  • Estimates inter-rater reliability
  • One-way random effects ANOVA model
  • Benchmarks: >0.75 excellent; 0.40 to 0.75 fair to good/moderate; <0.40 poor
Source: Fleiss (1986) on ICC values in clinical and social science research
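To make the reliability measure concrete, here is a minimal sketch, using only NumPy, of the one-way random-effects ICC and the Fleiss (1986) benchmarks quoted above. The single-measure form is shown as an assumption; the function name and the sample scores are illustrative, not the pilot's data or code.

```python
import numpy as np

def icc_one_way(ratings: np.ndarray) -> float:
    """ICC(1): one-way random-effects, single-measure reliability.

    ratings: 2-D array, rows = papers (subjects), columns = raters.
    """
    n, k = ratings.shape                       # n papers, k raters each
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)

    # Between-subjects and within-subjects mean squares (one-way ANOVA).
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))

    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def interpret(icc: float) -> str:
    """Fleiss (1986) benchmarks quoted on the slide."""
    if icc > 0.75:
        return "excellent"
    if icc >= 0.40:
        return "fair to good / moderate"
    return "poor"

if __name__ == "__main__":
    # Hypothetical scores: 6 papers, 3 raters, 4-point scale (1 = Unsatisfactory, 4 = Exemplary).
    scores = np.array([[4, 3, 4],
                       [2, 2, 3],
                       [1, 2, 1],
                       [3, 3, 4],
                       [4, 4, 4],
                       [2, 1, 2]])
    icc = icc_one_way(scores)
    print(f"ICC = {icc:.2f} ({interpret(icc)})")
```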
PILOT INTENTIONS
• Consistency in interpretation of the rubric
• Consistency in use of the rubric
• Address variability of data collection
• Limit extra load on faculty
LESSONS LEARNED
• Review alignment
• Consolidate rubric further
• Tech management criteria
• Norming practice
FUTURE DIRECTION - PHASE II: “Refined Rubric and Random Paper Grading Study”
• Same raters
• Same papers, but distributed randomly
• More norming practice with the refined rubric
• Increase evidence of combined rubric validity
REFERENCE
• Fleiss, J. L. (1986). Design and analysis of clinical experiments. New York, NY: John Wiley & Sons.
CONTACT
• Rana Khan: rana.khan@umuc.edu
• Datta Kaur Khalsa: dattakaur.khalsa@umuc.edu
• Kathryn Klose: kathryn.klose@umuc.edu
• Yan Cooksey: yan.cooksey@umuc.edu
ACKNOWLEDGEMENTS
• John Aje • Diane Bartoo • Nancy Glenn • Kathy Marconi • Dan McCollum • Garth McKenzie • Pat Spencer • Rudy Watson • Bruce Katz • Dawn Rodriguez • Carol De’Arment • Pat Miller • Lisa Parsons • Katie Crockett • Anthony Cristillo