
Learning Factors Analysis – A General Method for Cognitive Model Evaluation and Improvement

This paper presents a general method for evaluating and improving cognitive models through learning curve analysis. The method involves identifying blips in the learning curve, proposing alternative cognitive models, and evaluating each model quantitatively.



Presentation Transcript


  1. Learning Factors Analysis – A General Method for Cognitive Model Evaluation and Improvement Hao Cen, Kenneth Koedinger, Brian Junker Human-Computer Interaction Institute Carnegie Mellon University

  2. Learning curve analysis by hand & eye … • Steps in programming problems where the function (“method”) has two parameters (Corbett, Anderson, O’Brien, 1995)

  3. Can learning curve analysis be automated? • Learning curve analysis • Identify blips by hand & eye • Manually create a new model • Qualitative judgment • Need to automatically: • Identify blips by system • Propose alternative cognitive models • Evaluate each model quantitatively

  4. Overview • A Geometry Cognitive Model and Log Data • Learning Factors Analysis algorithm • Experiments and Results

  5. Domain of current study • Domain of study: the area unit of the geometry tutor • Cognitive model: 15 skills • Circle-area • Circle-circumference • Circle-diameter • Circle-radius • Compose-by-addition • Compose-by-multiplication • Parallelogram-area • Parallelogram-side • Pentagon-area • Pentagon-side • Trapezoid-area • Trapezoid-base • Trapezoid-height • Triangle-area • Triangle-side

  6. Log Data -- Skills in the Base Model

  7. Overview • Cognitive Models & Cognitive Tutors • Literature Reviews on Model Improvement • A Geometry Cognitive Model and Log Data • Learning Factors Analysis algorithm • Experiments and Results

  8. Learning Factors Analysis • Logistic regression with model scoring, to fit statistical models to student log data • An A* search algorithm with “smart” operators for proposing new cognitive models based on the factors • A set of “factors” that make a problem-solving step more difficult for a student

  9. The Statistical Model • Probability of getting a step correct (p) is modeled by logistic regression: log(p / (1 − p)) = Σi θi Xi + Σj βj Yj + Σj γj Tj Yj • Xi = 1 if student i performed this step; θi = overall “smarts” of that student • Yj = 1 if skill j is needed for this step; βj = easiness of that skill • Tj = number of opportunities to practice skill j; γj = amount gained for each opportunity • Use logistic regression because the response is discrete (correct or not): probability (p) is transformed by “log odds”, “stretched out” with an “s curve” so it does not bump up against 0 or 1 (related to “Item Response Theory”, behind standardized tests …)
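
A minimal numeric sketch of this additive model (illustrative names only; θ is the per-student intercept, β the per-skill easiness, γ the per-skill learning rate, and T the opportunity count, following the definitions above):

```python
import numpy as np

def afm_log_odds(theta_i, beta, gamma, opportunities, skills_needed):
    """Additive-factor log-odds for one student on one step.

    theta_i       : overall "smarts" of student i
    beta, gamma   : per-skill easiness and learning rate (arrays over skills)
    opportunities : T_j, prior practice opportunities per skill
    skills_needed : Y_j, 0/1 indicator of which skills this step requires
    """
    return theta_i + np.sum(skills_needed * (beta + gamma * opportunities))

def p_correct(theta_i, beta, gamma, opportunities, skills_needed):
    # Logistic ("s curve") transform keeps the probability between 0 and 1.
    z = afm_log_odds(theta_i, beta, gamma, opportunities, skills_needed)
    return 1.0 / (1.0 + np.exp(-z))

# Example: one student, two skills, a step that requires only skill 0.
print(p_correct(theta_i=0.3,
                beta=np.array([-0.5, 1.0]),
                gamma=np.array([0.2, 0.1]),
                opportunities=np.array([3, 0]),
                skills_needed=np.array([1, 0])))   # ~0.60
```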

  10. Difficulty Factors • Difficulty factor -- a property of the problem that causes student difficulties • Like the first vs. second parameter in the LISP example above • Four factors in this study • Embed: alone, embed • Backward: forward, backward • Repeat: initial, repeat • FigurePart: area, area-difference, area-combination, diameter, circumference, radius, side, segment, base, height, apothem • Embed factor: whether the figure is embedded in another figure or stands by itself (alone) • Example for skill Circle-area -- Q: Given AB = 2, find the circle area in the context of the problem goal to calculate the shaded area [figure with points A and B omitted]

  11. Combinatorial Search • Goal: do model selection within the logistic regression model space • Steps: • Start from an initial “node” (model) in the search graph • Iteratively create new child nodes by splitting a model using covariates or “factors” • Employ a heuristic (e.g. fit to the learning curve) to rank each node • Expand the best-ranked new node and repeat from step 2
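
A schematic version of this search loop, assuming hypothetical `fit_and_score` (fits the logistic regression and returns a heuristic score such as AIC or BIC, lower is better) and `split_model` (the factor-split operator described on the later slides) helpers:

```python
import heapq
import itertools

def best_first_search(initial_model, factors, fit_and_score, split_model,
                      max_expansions=100):
    """Expand the best-scoring cognitive model first (lower score = better)."""
    counter = itertools.count()              # tie-breaker so heapq never compares models
    start_score = fit_and_score(initial_model)
    frontier = [(start_score, next(counter), initial_model)]
    best_score, best_model = start_score, initial_model

    for _ in range(max_expansions):
        if not frontier:
            break
        score, _, model = heapq.heappop(frontier)
        if score < best_score:
            best_score, best_model = score, model
        # Generate children by splitting each skill on each difficulty factor
        # (for brevity, no duplicate detection is done here).
        for skill in model.skills:
            for factor in factors:
                child = split_model(model, skill, factor)
                if child is not None:
                    heapq.heappush(frontier,
                                   (fit_and_score(child), next(counter), child))
    return best_model, best_score
```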

  12. System: Best-first Search • An informed graph search algorithm guided by a heuristic • Heuristics – AIC, BIC • Start from an existing model


  18. The Split • Binary split -- splits a skill into two skills: one with the factor value & one without it • Example: after splitting Circle-area by Embed
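
One way to picture the binary split operator, as a hypothetical sketch over steps labeled with skills and factor values:

```python
def binary_split(skill_steps, skill, factor, factor_value):
    """Split `skill` into two skills: steps where `factor` == factor_value
    get a new skill label; all other steps keep the original label."""
    new_label = f"{skill}-{factor_value}"          # e.g. "Circle-area-embed"
    split_steps = {}
    for step, factors in skill_steps[skill]:
        label = new_label if factors.get(factor) == factor_value else skill
        split_steps.setdefault(label, []).append((step, factors))
    return split_steps

# Example: Circle-area split by the Embed factor.
steps = {"Circle-area": [("step1", {"Embed": "embed"}),
                         ("step2", {"Embed": "alone"})]}
print(binary_split(steps, "Circle-area", "Embed", "embed"))
# {'Circle-area-embed': [('step1', ...)], 'Circle-area': [('step2', ...)]}
```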

  19. The Heuristics • A good model captures sufficient variation in the data but is not overly complicated • Balance between model fit & complexity, minimizing prediction risk (Wasserman 2005) • AIC and BIC used as heuristics in the search • Two estimators of prediction risk • Balance between fit & parsimony: select models that fit well without being too complex • AIC = -2*log-likelihood + 2*number of parameters • BIC = -2*log-likelihood + number of parameters * log(number of observations)
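
The two heuristics, computed directly from a fitted model's log-likelihood (a simple sketch of the formulas above):

```python
import math

def aic(log_likelihood, n_params):
    # Penalizes each extra parameter by a constant 2.
    return -2 * log_likelihood + 2 * n_params

def bic(log_likelihood, n_params, n_observations):
    # Penalizes each extra parameter by log(n), so it prefers simpler models
    # than AIC once the data set is large.
    return -2 * log_likelihood + n_params * math.log(n_observations)

# Example: scoring the same fit both ways.
print(aic(-2450.0, 20), bic(-2450.0, 20, 5000))
```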

  20. Overview • Cognitive Models & Cognitive Tutors • Literature Reviews on Model Improvement • A Geometry Cognitive Model and Log Data • Learning Factors Analysis algorithm • Experiments and Results

  21. Experiment 1 • Q: How can we describe learning behavior in terms of an existing cognitive model? • A: Fit the logistic regression model in the equation above (slide 9) & get the coefficients

  22. Experiment 1 – Results • Higher intercept of a skill -> easier skill • Higher slope of a skill -> students learn it faster • Higher intercept of a student -> the student initially knew more • The AIC, BIC & MAD statistics provide alternative ways to evaluate models (MAD = Mean Absolute Deviation)
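
MAD here is taken as the mean absolute deviation between the model's predicted success probabilities and the observed correct/incorrect outcomes; a minimal sketch, assuming that definition:

```python
import numpy as np

def mean_absolute_deviation(predicted_p, observed):
    """Average absolute gap between predicted probability and actual 0/1 result."""
    return np.mean(np.abs(np.asarray(predicted_p) - np.asarray(observed)))

print(mean_absolute_deviation([0.9, 0.4, 0.7], [1, 0, 1]))  # ~0.267
```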

  23. Experiment 2 • Q: How can we improve a cognitive model? • A: Run LFA on data including factors & search through model space

  24. Experiment 2 – Results with BIC • Splitting Compose-by-multiplication into two skills – CMarea and CMsegment – distinguishing the geometric quantity being multiplied

  25. Experiment 3 • Q: Are some skills better merged than kept separate? Can LFA recover elements of the original model if we search from a merged model, given the difficulty factors? • A: Run LFA on the data with a merged model, and search through the model space

  26. Experiment 3 – Merged Model • Merge some skills in the original model to remove some distinctions, and add those distinctions as difficulty factors to consider • The merged model has 8 skills: • Circle-area, Circle-radius => Circle • Circle-circumference, Circle-diameter => Circle-CD • Parallelogram-area, Parallelogram-side => Parallelogram • Pentagon-area, Pentagon-side => Pentagon • Trapezoid-area, Trapezoid-base, Trapezoid-height => Trapezoid • Triangle-area, Triangle-side => Triangle • Compose-by-addition • Compose-by-multiplication • Add difficulty factor “direction”: forward vs. backward

  27. Experiment 3 – Results

  28. Experiment 3 – Results • Recovered three skills (Circle, Parallelogram, Triangle) => distinctions made in the original model are necessary • Partially recovered two skills (Triangle, Trapezoid) => some original distinctions necessary, some are not • Did not recover one skill (Circle-CD) => original distinction may not be necessary • Recovered one skill (Pentagon) in a different way => Original distinction may not be as significant as distinction caused by another factor

  29. Beyond Experiments 1-3 • Q: Can we use LFA to improve tutor curriculum by identifying over-taught or under-taught rules? • Thus adjust their contribution to curriculum length without compromising student performance • A: Combine results from experiments 1-3

  30. Beyond Experiments 1-3 -- Results • Parallelogram-side is over-taught: high intercept (2.06), low slope (-.01); initial success probability is .94, and the average number of practices per student is 15 • Trapezoid-height is under-taught: low intercept (-1.55), positive slope (.27); final success probability is .69, far from the level of mastery, and the average number of practices per student is 4 • Suggestions for curriculum improvement • Reducing the amount of practice for Parallelogram-side should save student time without compromising their performance • More practice on Trapezoid-height is needed for students to reach mastery

  31. Beyond Experiments 1-3 -- Results • What about Compose-by-multiplication? With a final success probability of .92, students seem to have mastered Compose-by-multiplication.

  32. Beyond Experiments 1-3 -- Results • However, after the split, CMarea does well, with a final success probability of .96, but CMsegment has a final success probability of only .60 and an average amount of practice of less than 2 • Suggestion for curriculum improvement: increase the amount of practice for CMsegment

  33. Conclusions and Future Work • Learning Factors Analysis combines statistics, human expertise, & combinatorial search to evaluate & improve a cognitive model • System able to evaluate a model in seconds & search 100s of models in 4-5 hours • Model statistics are meaningful • Improved models are interpretable & suggest tutor improvement • Planning to use LFA for datasets from other tutors to test potential for model & tutor improvement

  34. Acknowledgements • This research is sponsored by a National Science Foundation grant to the Pittsburgh Science of Learning Center. We thank Joseph Beck, Albert Corbett, and Ruth Wylie for their comments.

  35. END

  36. To do • Reduce DFA-LFA.ppt, get from ERM lecture • Go over 2nd exercise on creating learning curves (from web site) in this talk & finish in 2nd session? • Print paper …. • Other • Mail LOI feedback to Bett, add Kurt’s refs
