This presentation provides an overview of the steps involved in reviewing and improving a Quality Assurance System (QAS): reviewing assessment instruments, determining data reliability, examining validity evidence, and seeking external and expert review.
Steps to Review and Improve Your Quality Assurance System Christina O’Connor and Kristen Smith The University of North Carolina at Greensboro
What is a Quality Assurance System (QAS)? • Systematic collection of data • Includes multiple measures • Measures are relevant, verifiable, representative, cumulative, and actionable • Provides empirical evidence (Council for the Accreditation of Educator Preparation, 2013)
Why is the QAS important? • The QAS is where you collect evidence for all standards • Gaps in the QAS can negatively impact evidence of meeting other standards
Think-Pair-Share • Think about potential barriers to a high quality QAS • Discuss with an elbow partner • Share with the group
Reviewing your QAS • Look for indicators of quality as defined by CAEP • Is your process systematic? • Are there multiple measures in place for each component? • Are the measures relevant, verifiable, representative, cumulative and actionable? • Is there empirical evidence?
Four Steps • Step 1- Review assessment instruments • Step 2- Determine reliability of data • Step 3- Examine validity evidence • Step 4- External and expert review
Action Plan • Work in pairs or small groups • Create an action plan for how you will implement these steps at your institution to review your QAS • Identify Key Collaborators
Step 1- Review assessment instruments
• Why?
  • If instruments are not sound, the data they provide may not be high quality
  • "Bad" instrumentation can compromise the integrity of the QAS
• Who?
  • Insider/outsider perspectives
  • Teacher educator preparation specialist
  • Assessment specialist or expert
Step 1- Review assessment instruments
• What? (Rudner, 1994)
  • Instrument intended use or purpose
  • Instrument construction (e.g., item types, formats, organization)
  • Content validity
  • Instrument administration & data collection procedures
  • Results reporting and/or data sharing
  • Data use
(A brief sketch for tracking these criteria follows this slide.)
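One lightweight way to act on Rudner's (1994) criteria is to record each reviewer's notes against every criterion for each instrument in the QAS, so unreviewed areas are visible at a glance. The minimal Python sketch below is illustrative only; the instrument name, reviewer role, and findings are hypothetical, and the criteria list simply restates the bullets above.

from dataclasses import dataclass, field

RUDNER_CRITERIA = [
    "intended use or purpose",
    "construction (item types, formats, organization)",
    "content validity",
    "administration and data collection procedures",
    "results reporting and data sharing",
    "data use",
]

@dataclass
class InstrumentReview:
    instrument: str       # e.g., a key assessment in the QAS (hypothetical)
    reviewer: str         # insider or outsider perspective
    findings: dict = field(default_factory=dict)  # criterion -> reviewer note

    def unreviewed(self):
        """Rudner criteria that still lack a reviewer note."""
        return [c for c in RUDNER_CRITERIA if c not in self.findings]

# Hypothetical example: a student teaching evaluation reviewed by an assessment specialist
review = InstrumentReview("Student Teaching Evaluation", "assessment specialist")
review.findings["intended use or purpose"] = "Summative decision at program completion"
print(review.unreviewed())   # criteria still to be examined

A simple record like this can also double as documentation when evidence of the review process is assembled for accreditation.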
Action Plan • Work in pairs or small groups • Create an action plan for how you will implement Step 1 at your institution to review your QAS • Identify Key Collaborators
Step 2- Determine reliability of data
• Why?
  • If data are unreliable, should we be using them to make decisions about our programs or our candidates?
  • Reliability of data is a necessary precursor to validity (Crocker & Algina, 1986)
  • CAEP standard 5.2
  • Although CAEP does not require reliability evidence for scores on proprietary instruments, reliability is sample specific, so EPPs should still examine the reliability of their data (Traub & Rowley, 1991)
Step 2- Determine reliability of data
• Who?
  • Assessment or measurement expert
  • Assessment, measurement, education research, or statistics graduate students
• What?
  • Using software like SPSS, SAS, or R (see the sketch after this slide)
  • Cronbach's alpha reliability coefficient for internal consistency
  • Inter-rater reliability for when a candidate is evaluated by two raters
    • Exact agreement %
    • Intraclass correlation coefficient (ICC)
    • G-theory
Instruments are neither valid nor reliable - DATA ARE!
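For EPPs that prefer open-source tools over SPSS or SAS, the following minimal Python sketch shows how two of the statistics named above, Cronbach's alpha and exact agreement, could be computed from scored data. The candidate scores, item names, and rater columns are hypothetical placeholders, not instruments from this presentation.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency estimate from item-level scores (rows = candidates)."""
    items = items.dropna()
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def exact_agreement(rater_a: pd.Series, rater_b: pd.Series) -> float:
    """Percent of candidates for whom two raters assigned identical scores."""
    return float((rater_a == rater_b).mean() * 100)

# Hypothetical item-level scores for five candidates on a four-item rubric
items = pd.DataFrame({
    "item_1": [3, 4, 2, 4, 3],
    "item_2": [3, 4, 3, 4, 2],
    "item_3": [2, 4, 2, 3, 3],
    "item_4": [3, 3, 2, 4, 3],
})
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

# Hypothetical scores from two raters evaluating the same five candidates
ratings = pd.DataFrame({"rater_1": [3, 4, 2, 4, 3], "rater_2": [3, 4, 3, 4, 3]})
print(f"Exact agreement: {exact_agreement(ratings['rater_1'], ratings['rater_2']):.1f}%")

In practice the hard-coded data would be replaced with the EPP's own candidate-level scores; the same formulas apply regardless of the software used.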
Action Plan • Work in pairs or small groups • Create an action plan for how you will implement Step 2 at your institution to review your QAS • Identify Key Collaborators
Step 3- Examine validity evidence
• Why?
  • If we lack validity evidence, how do we know we are making accurate inferences about our programs or our candidates?
  • Even if data are reliable, they may not necessarily be valid
  • Validity evidence helps support the accuracy of the conclusions that you draw from scores on the assessment instrument
Step 3- Examine validity evidence
• Who?
  • Assessment or measurement expert
  • Assessment, measurement, education research, or statistics graduate students
  • Faculty or other Subject Matter Experts (SMEs)
Step 3- Examine validity evidence
• What?
  • Numerous "types" of validity evidence should be considered, from content to predictive validity (Cronbach & Meehl, 1955)
  • Content validity
    • Lawshe Method
  • Convergent/divergent validity
    • Associations or correlations
  • Predictive validity
    • Utility of assessment scores to predict a dependent variable of interest
(A computational sketch follows this slide.)
Instruments are neither valid nor reliable - DATA ARE!
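As an illustration of the Lawshe method and of convergent/predictive correlations, here is a minimal Python sketch. The panel size, rubric item names, and score values are hypothetical; it is meant only as a starting point for working with SMEs and program data, not a prescribed procedure.

import pandas as pd

def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe CVR = (n_e - N/2) / (N/2); ranges from -1 to +1, higher = more SMEs rate the item essential."""
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# Hypothetical SME panel of 10 faculty rating each rubric item as "essential" or not
panel_size = 10
sme_ratings = pd.DataFrame({
    "item": ["planning", "instruction", "assessment", "professionalism"],
    "rated_essential": [9, 10, 7, 5],
})
sme_ratings["cvr"] = sme_ratings["rated_essential"].apply(
    lambda n: content_validity_ratio(n, panel_size)
)
print(sme_ratings)

# Convergent/predictive evidence: correlate an internal assessment total with an
# external criterion (hypothetical scores for eight program completers)
scores = pd.DataFrame({
    "internal_total": [78, 85, 90, 72, 88, 95, 80, 84],
    "external_criterion": [40, 44, 47, 38, 45, 49, 41, 42],
})
r = scores["internal_total"].corr(scores["external_criterion"])
print(f"Correlation with external criterion: r = {r:.2f}")

Items with low or negative CVR values are candidates for revision with the SME panel, while the correlation offers one piece of convergent or predictive evidence to interpret alongside content-related evidence.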
Action Plan • Work in pairs or small groups • Create an action plan for how you will implement Step 3 at your institution to review your QAS • Identify Key Collaborators
Step 4- External and expert review
• Why?
  • The more frequently programs consult with assessment experts, the higher the quality of their assessment processes tends to be (Fulcher & Bashkov, 2012)
  • Aligns with CAEP standard 5.5
• Who?
  • Assessment specialist or expert
  • Individual external to your EPP
Step 4- External and expert review
• What?
  • Share your assessment processes and results with an assessment expert and an external reviewer in the same way they would be shared with your faculty
  • The assessment expert and external reviewer can objectively evaluate the assessment results and the system for sharing results with program faculty
  • Reviewers should be able to create actionable steps for program improvement based on the results that were shared with them
Action Plan • Work in pairs or small groups • Create an action plan for how you will implement Step 4 at your institution to review your QAS • Identify Key Collaborators
Wrap Up • Identify one person you will share this action plan with when you get back to your institution
Contact Us
Christina O’Connor, PhD
Director of Professional Education Preparation, Policy, and Accountability, UNCG School of Education
ckoconno@uncg.edu

Kristen Smith, PhD
Director of Assessment, UNCG School of Education
k_smith8@uncg.edu
References
Banta, T. W., & Blaich, C. (2011). Closing the assessment loop. Change: The Magazine of Higher Learning, 43(1), 22-27. doi:10.1080/00091383.2011.538642
Council for the Accreditation of Educator Preparation. (2013). 2013 CAEP standards. Retrieved from http://caepnet.files.wordpress.com/2013/05/annualreport_final.pdf
Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. Orlando, FL: Holt, Rinehart and Winston.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.
Fulcher, K. H., & Bashkov, B. M. (2012, November-December). Do we practice what we preach? The accountability of an assessment office. Assessment Update, 24(6), 5-7, 14.
Fulcher, K. H., & Orem, C. D. (2010). Evolving from quantity to quality: A new yardstick for assessment. Research and Practice in Assessment, 5, 13-17.
Messick, S. J. (1989). Meaning and values in test validation: The science and ethics of assessment. Educational Researcher, 18, 5-11.
Rodgers, M., Grays, M. P., Fulcher, K. H., & Jurich, D. P. (2013). Improving academic program assessment: A mixed methods study. Innovative Higher Education, 38(5), 383-395.
Rudner, L. M. (1994). Questions to ask when evaluating tests. Practical Assessment, Research & Evaluation, 4(2). Retrieved August 9, 2005, from http://PAREonline.net/getvn.asp?v=4&n=2
Traub, R. E., & Rowley, G. L. (1991). Understanding reliability. Educational Measurement: Issues and Practice, 10, 37-45.