Discovering Optimal Training Policies: A New Experimental Paradigm
Robert V. Lindsey, Michael C. Mozer (Institute of Cognitive Science, Department of Computer Science, University of Colorado, Boulder)
Harold Pashler (Department of Psychology, UC San Diego)
Common Experimental Paradigm in Human Learning Research
• Propose several instructional conditions to compare based on intuition or theory
  • E.g., spacing of study sessions in fact learning
    • Equal: 1 – 1 – 1
    • Increasing: 1 – 2 – 4
• Run many participants in each condition
• Perform statistical analyses to establish a reliable difference between conditions
What Most Researchers Interested in Improving Instruction Really Want to Do
• Find the best training policy (study schedule)
• Abscissa: space of all training policies
• Performance function defined over policy space
Approach
• Perform single-participant experiments at selected points in policy space (o)
• Use function approximation techniques to estimate the shape of the performance function
• Given the current estimate, select promising policies to evaluate next
  • Promising = has potential to be the optimum policy
[Figure: performance-function estimates via Gaussian process regression vs. linear regression]
Gaussian Process Regression (see the sketch below)
• Assumes only that functions are smooth
• Uses data efficiently
• Accommodates noisy data
• Produces estimates of both function shape and uncertainty
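As a concrete illustration, here is a minimal sketch of GP regression over a one-dimensional policy space using scikit-learn. The kernel choice, length scale, and noise level are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch: GP regression over a 1-D policy space. The kernel and
# noise settings are assumptions for illustration, not the paper's model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Observed (policy, score) pairs: one noisy participant per policy point.
X = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])   # policy parameter
y = np.array([14, 18, 21, 19, 15], dtype=float)     # # correct of 25

# The RBF kernel encodes the smoothness assumption; WhiteKernel models
# interparticipant noise in the observed scores.
kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=4.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Posterior mean and uncertainty across the whole policy space.
grid = np.linspace(0, 1, 100).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
```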
Embellishments on Off-the-Shelf GP Regression
• Active selection heuristic: upper confidence bound (sketched below)
• GP embedded in a generative task model
  • GP represents skill level (-∞, +∞)
  • Mapped to population mean accuracy on test (0, 1)
  • Mapped to an individual's mean accuracy, allowing for interparticipant variability
  • Mapped to # correct responses via binomial sampling
• Hierarchical Bayesian approach to parameter selection
  • Interparticipant variability
  • GP smoothness (covariance function)
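Continuing the GP sketch above, the following illustrates the upper-confidence-bound selection rule and the generative mapping from latent skill to observed scores. The exploration weight `kappa` and the logistic link are assumed for illustration; the paper's generative model has the same structure but different details.

```python
# Continuing the GP sketch above (mean, std, grid from the GP fit).
import numpy as np

# UCB active selection: evaluate next the policy whose optimistic estimate
# (posterior mean + kappa * posterior std) is highest. kappa is an assumed
# exploration weight, not a value reported in the paper.
kappa = 2.0
ucb = mean + kappa * std
next_policy = grid[np.argmax(ucb)]

# Generative task model, schematically: latent skill in (-inf, +inf) is
# mapped through a logistic link to mean accuracy in (0, 1), and an
# individual's observed score is a binomial draw from that accuracy.
def skill_to_accuracy(skill):
    return 1.0 / (1.0 + np.exp(-skill))

rng = np.random.default_rng(0)
observed_correct = rng.binomial(n=25, p=skill_to_accuracy(0.8))
```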
GLOPNOR = Graspability
• Ease of picking up and manipulating an object with one hand
• Based on norms from Salmon, McMullen, & Filliter (2010)
Two-Dimensional Policy Space
• Fading policy
• Repetition/alternation policy
[Figure: policy space, with fading policy and repetition/alternation policy as the two axes]
Experiment
• Training
  • 25-trial sequence generated by the chosen policy (see the sketch after this slide)
  • Balanced positive/negative
• Testing
  • 24 test trials, ordered randomly, balanced
  • No feedback, forced choice
• Amazon Mechanical Turk
  • $0.25 / participant
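To make the two policy dimensions concrete, here is a purely hypothetical sketch of how a fading parameter and a repetition/alternation parameter could generate a 25-trial training sequence. This parameterization is invented for illustration; the paper defines its policy space differently in detail.

```python
# Hypothetical sketch: generate a 25-trial sequence from two policy
# parameters. This parameterization is invented for illustration and is
# not the paper's exact scheme.
import numpy as np

def make_sequence(fading, rep_to_alt, n_trials=25, seed=0):
    """fading in [0, 1]: 0 = all easy items, 1 = ramp from easy to hard.
    rep_to_alt in [0, 1]: probability of switching category between
    trials rises over training (repetitions early, alternations late)."""
    rng = np.random.default_rng(seed)
    trials = []
    category = int(rng.integers(2))     # 0 = low, 1 = high graspability
    for t in range(n_trials):
        frac = t / (n_trials - 1)
        difficulty = fading * frac      # how close the item is to the boundary
        p_switch = rep_to_alt * frac    # alternate more as training proceeds
        if rng.random() < p_switch:
            category = 1 - category
        trials.append((category, difficulty))
    return trials
```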
Results
[Figure: observed scores (# correct of 25) across the sampled policy space]
Best Policy
• Fade from easy to semi-difficult
• Repetitions initially, alternations later
Final Evaluation
[Figure: test accuracy by condition: 65.7% (N=49), 60.9% (N=53), 68.6% (N=48), 66.6% (N=50)]
Novel Experimental Paradigm
• Instead of running a few conditions, each with many participants, run many conditions, each with a different participant
• Although individual participants provide a very noisy estimate of the population mean, optimization techniques allow us to recover the shape of the performance function over the policy space
What Next?
• Plea for more interesting policy spaces!
• Other optimization problems
  • Abstract concepts from examples
    • E.g., irony, recyclability, retribution
  • Motivation
    • Manipulations: rewards/points, trial pace, task difficulty, time pressure
    • Measure: voluntary time on task
Machine Learning to Boost Human Learning
Robert Lindsey*, Jeff Shroyer*, Hal Pashler+, Mike Mozer*
*University of Colorado at Boulder, +University of California, San Diego
Challenge of Exploiting Spaced Review
• The optimal spacing of study depends on
  • characteristics of the individual student
  • characteristics of the specific item (e.g., vocabulary word) being learned
  • the exact study history (timing and retrieval success)
Our Approach
Data from a population of students studying a set of items
→ psychological model of human memory + collaborative filtering
→ prediction of when a specific student should study a particular item (a schematic sketch follows)
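The following is a minimal sketch of the kind of forgetting-based scheduler this approach implies: predicted recall declines with time since study, with student- and item-specific parameters that, in the full system, are fit collaboratively across the population. The power-law forgetting form and the threshold rule are common modeling assumptions, not necessarily the exact model used.

```python
# Illustrative sketch of a forgetting-based review scheduler. The power-law
# forgetting curve and the threshold rule are standard modeling assumptions;
# the actual system couples a psychological memory model with collaborative
# filtering to estimate the student- and item-specific parameters.
import numpy as np

def p_recall(hours_since_study, student_ability, item_difficulty, decay=0.3):
    """Predicted recall probability: higher ability and lower difficulty
    raise the curve; recall decays as a power law of elapsed time."""
    strength = np.exp(student_ability - item_difficulty)
    return strength / (strength + (1.0 + hours_since_study) ** decay)

def pick_review_item(items, now, threshold=0.7):
    """Review the item whose predicted recall has fallen furthest below
    threshold -- i.e., the one most in danger of being forgotten."""
    scores = [p_recall(now - it["last_study"], it["ability"], it["difficulty"])
              for it in items]
    return int(np.argmin(scores)) if min(scores) < threshold else None
```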
Experiment in Fall 2012
• Second-year Spanish at a Denver-area middle school
• 180 students (6 class periods)
• New vocabulary introduced each week for 10 weeks
• COLT used 3 times a week for 30 min
  • Sessions 1 & 2: study new vocabulary to criterion; remainder of time spent on review
  • Session 3: quiz on new vocabulary; remainder of time spent on review
Comparison of Three Review Schedulers Within Student
• Massed review (current educational practice)
• Generic spaced review
• Personalized spaced review using machine learning models
Bottom Line
• 17% boost in retention of cumulative course content one month after the end of the semester
• …if students spend the same amount of time using our machine-learning-based review software instead of cramming for the current week's exam
BRAIN Initiative
One goal of combining cognitive modeling and machine learning: help people learn and perform more efficiently
• Learning new concepts
  • Choice and ordering of examples
• Improving long-term retention
  • Personalized selection of material for review
• Assisting visual search (e.g., medical, satellite image analysis)
  • Image enhancement
• Training complex visual tasks (e.g., fingerprint analysis)
  • Highlighting to guide attention
• Diagnosing and remediating cognitive deficits
  • Via modeling individual differences