A Review of the Use of Knowledge Mapping for Assessment Purposes Gregory K. W. K. Chung Eva L. Baker California Educational Research Association Annual Meeting Rancho Mirage, CA – December 4, 2008
Overview of Talk • Research questions • Methodology • Reliability • Validity • Conclusion
Knowledge Maps • Node-link representation (nodes = concepts, links = relationships) • Typically used for instructional purposes • Sometimes used for assessment • Can be scored automatically (see the sketch below) • [Example map: sunlight —leads to—> surface warming]
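A knowledge map reduces to a set of propositions (concept —link—> concept triples), which is what makes automated scoring tractable. Below is a minimal sketch in Python of that representation, using the sunlight example from this slide; the Proposition type and all names are illustrative assumptions, not taken from the reviewed studies.

```python
# Minimal sketch: a knowledge map as a set of propositions
# (concept --labeled link--> concept). Illustrative only; not the
# data structure used in any of the reviewed studies.
from typing import NamedTuple

class Proposition(NamedTuple):
    source: str  # concept node
    link: str    # labeled relationship
    target: str  # concept node

# The example map from this slide: sunlight leads to surface warming.
example_map = {
    Proposition("sunlight", "leads to", "surface warming"),
}
```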
Research Questions • What are the scoring methods for knowledge maps? • What is the reliability and validity evidence? • What are the feasibility issues?
Methodology • Prior reviews • Ruiz-Primo, M. A., & Shavelson, R. J. (1996). Problems and issues in the use of concept maps in science assessment. Journal of Research in Science Teaching, 33, 569–600. • Chung, G. K. W. K., Baker, E. L., Brill, D. G., Sinha, R., Saadat, F., & Bewley, W. L. (2003). Automated assessment of domain knowledge with online knowledge mapping. Proceedings of the I/ITSEC, 25, 1168–1179.
Methodology • Criteria for inclusion in review • Empirical studies reported after Ruiz-Primo and Shavelson (1996) • Studies reporting technical information (reliability or validity evidence) • CRESST technical reports and CRESST-supported dissertations
Referent-Based / Semantic • Referent-Free / Semantic • Propositions in map: • banking crisis —contributed to—> Depression • Hoover —part of—> Depression • unemployment —contributed to—> Depression • New Deal —response to—> Depression • etc.
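With a fixed set of concepts and links, referent-based semantic scoring can be sketched as matching a student's propositions against an expert referent map. The sketch below reuses the hypothetical Proposition type above and this slide's Depression propositions; the simple proportion-of-overlap metric is an assumption, since the reviewed studies may weight links or award partial credit.

```python
def referent_based_score(student_map: set, expert_map: set) -> float:
    """Proportion of expert propositions present in the student map.

    A simple overlap metric (assumption); the reviewed studies may
    weight links or allow partial credit.
    """
    if not expert_map:
        return 0.0
    return len(student_map & expert_map) / len(expert_map)

# Expert referent map built from this slide's propositions.
expert_map = {
    Proposition("banking crisis", "contributed to", "Depression"),
    Proposition("unemployment", "contributed to", "Depression"),
    Proposition("New Deal", "response to", "Depression"),
}
student_map = {
    Proposition("banking crisis", "contributed to", "Depression"),
    Proposition("New Deal", "caused", "Depression"),  # wrong link: no credit
}
print(referent_based_score(student_map, expert_map))  # 0.33...
```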
Sample • 38 studies • 23 affiliated with CRESST (UCLA, Stanford) • 15 from other universities • Studies reported • Scoring method • Reliability or validity coefficients
Reliability • Rating of maps (by human or computer) • High reliability—raters can be trained to evaluate knowledge maps • alpha: .6 to .9 • g-coefficient: .8 to .9 • Constraining the task yields the highest reliability • Use a fixed set of concepts and links • Expert-map referent yields the highest reliability
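The alpha values above are internal-consistency coefficients across trained raters. Here is a short sketch of how Cronbach's alpha could be computed over a maps × raters score matrix; this is the standard formula, not code from any of the reviewed studies, and the ratings are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (maps x raters) matrix of scores."""
    k = scores.shape[1]                      # number of raters
    rater_vars = scores.var(axis=0, ddof=1)  # variance per rater
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - rater_vars.sum() / total_var)

# Hypothetical ratings of five maps by three trained raters.
ratings = np.array([[4, 5, 4],
                    [2, 2, 3],
                    [5, 5, 5],
                    [3, 4, 3],
                    [1, 2, 2]])
print(round(cronbach_alpha(ratings), 2))  # 0.96 -- consistent raters
```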
Validity • Correlation with other measures of similar content • “Less conceptual” r: .4 to .5 • “More conceptual” r: .4 to .7 • Sensitive to knowledge differences • Experts >> Novices • Pre-instruction < Post-instruction
Feasibility • Human rating is generally tedious, labor-intensive, and time-consuming • Only option for unconstrained tasks • Automated scoring feasible • High reliability, immediate feedback • Requires a constrained task
Conclusion • In general, knowledge maps are feasible, reliable, and sensitive to knowledge differences and instructional effects • Task format influences reliability • Feasibility an issue—human rating of knowledge maps can be labor-intensive and tedious • Use of an expert-criterion map yields the highest reliability • Automated scoring tractable with predefined concepts and links