This project aims to predict and explain individual performance across tasks by developing detailed computational models grounded in empirical observations. By combining the best features of cognitive modeling, it seeks to understand multi-tasking situations and to predict complex performance accurately.
Predicting and Explaining Individual Performance in Complex Tasks
Marsha Lovett, Lynne Reder, Christian Lebiere, John Rehling, Baris Demiral
This project is sponsored by the Department of the Navy, Office of Naval Research
Multi-Tasking • A single person can perform multiple tasks. A single model should be able to capture performance on those multiple tasks. • A single person brings to bear the same fundamental processing capacities to perform all those tasks. A single model should be able to predict that person’s performance across tasks from his/her capacities.
A way to keep the multiple-constraint advantage offered by unified theories of cognition while making their development tractable is to do Individual Data Modeling. That is, to gather a large number of empirical/experimental observations on a single subject (or a few subjects analysed individually) using a variety of tasks that exercise multiple abilities (e.g., perception, memory, problem solving), and then to use these data to develop a detailed computational model of the subject that is able to learn while performing the tasks. (Gobet & Ritter, 2000)
ZERO PARAMETER PREDICTIONS!
Basic Goals of Project • Combine best features of cognitive modeling • Study performance in a dynamic, multi-tasking situation (albeit less complex than real world) • Explain not only aggregate behavior but variation (using individual difference variables) • Predict (not fit/postdict) complex performance • Use cognitive architecture and fixed parameters • Employ off-the-shelf models whenever possible • Plug in individual difference params for each person
How to predict task performance • Estimate each individual’s processing parameters • Measure individuals’ performance on “standard” tasks • Using models of these tasks, estimate participant’s corresponding architectural parameters (e.g., working memory capacity, perceptual/motor speed) • Build/refine model of target task • Select global parameters for model of target task (e.g., from previously collected data) • Plug into model of target task each individual’s parameters to predict his/her target task performance
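As a toy illustration of these two steps, the following Python sketch (hypothetical throughout: the functions, parameter grid, and numbers are invented for exposition; the project's actual models are ACT-R models) fits W on a standard memory task and then reuses it, together with a perceptual/motor factor PM, to generate a zero-parameter prediction for a target task:

```python
# Toy sketch of the two-step approach; all functions and values are illustrative.
import numpy as np

def toy_mods_model(W, set_sizes):
    """Stand-in for the MODS memory-task model: higher source activation W
    gives higher recall accuracy, degraded by memory load (set size)."""
    return 1.0 / (1.0 + np.exp(-(2.0 * W - 0.4 * set_sizes)))

def estimate_W(observed_accuracy, set_sizes, grid=np.linspace(0.4, 1.6, 61)):
    """Step 1: fit the standard-task model to one subject by varying only W."""
    errors = [np.mean((toy_mods_model(W, set_sizes) - observed_accuracy) ** 2)
              for W in grid]
    return grid[int(np.argmin(errors))]

def predict_target_task(W, PM, base_times):
    """Step 2: plug the individual's W and perceptual/motor factor PM into a
    (toy) target-task model, with all other parameters fixed at global values."""
    retrieval_time = 0.5 / W                      # faster retrieval with higher W
    return PM * np.asarray(base_times) + retrieval_time

set_sizes = np.array([3, 4, 5, 6])
observed = np.array([0.95, 0.88, 0.74, 0.61])     # one subject's accuracy by load
W_hat = estimate_W(observed, set_sizes)
print(predict_target_task(W_hat, PM=1.1, base_times=[2.0, 1.5, 1.2, 1.0]))
```

The point of the sketch is the division of labor: W is the only free parameter when fitting the standard task, and the target-task run then takes each individual's W and PM with every remaining parameter held at its global value.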
Example: Memory Task Performance • Fit model of task A to estimate individuals’ parameters
Zero-Parameter Predictions • Plug those parameters into model of task B (Lovett, Daily, & Reder, 2000)
Challenges of Complex Tasks • Modeling the target task is harder • More than one individual difference variable likely impacting target task • Possibility of knowledge/strategy differences
What about knowledge differences? • Develop tasks that reduce their relevance • Train participants on specific procedures • Measure skill/knowledge differences in another task and incorporate them in model • Use model to predict variation in relative use of strategies by way of estimates of individuals’ processing capacities
Individual Differences in ACT-R • Most ACT-R models don’t account for impact of individual differences on performance, but the potential is there • There are many parameters with particular interpretations related to individual difference variables • Most ACT-R modelers set parameters to universal or global values, i.e., defaults or values that fit aggregate data
ACT-R & Individual Differences • Each individual has his/her own parameter values: P1, P2, P3, …; M1, M2, M3, …; W1, W2, W3, …
Overview of Talk • Review tasks we are studying • Illustrate methodology • Highlight key results • Visual search vs. memory strategies trade off in final performance => complex task modeling offers best constraint with fine-grained analysis
P/M Tasks • In our earlier studies, initial training phase of target task was used to collect data on individuals’ perceptual/motor speed. • e.g., Time to find object “A7” and click on it • In later studies, separate task used to measure perceptual and motor speed.
How to predict task performance • Estimate each individual’s processing parameters • Measure individuals’ performance on MODS, PercMotor • Using models of these tasks, estimate participant’s corresponding architectural parameters (e.g., working memory capacity, perceptual/motor speed) • Build/refine model of target task • Select global parameters for model of target task (e.g., from previously collected data) • Plug into model of target task each individual’s parameters to predict his/her target task performance
W affects Performance • W is the ACT-R parameter for source activation, which impacts the degree to which activation of goal-related facts rises above the sea of other facts’ activations • Higher W => goal-related facts relatively more activated => faster and more accurately retrieved => better MODS performance
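For reference, W enters ACT-R's standard activation equation roughly as follows (a sketch following the ACT-R conventions used by Lovett, Daily, & Reder, 2000, where n is the number of elements in the current goal):

$$A_i = B_i + \sum_j W_j\, S_{ji}, \qquad W_j = \frac{W}{n}$$

where $A_i$ is the activation of chunk $i$, $B_i$ its base-level activation, and $S_{ji}$ the strength of association from goal element $j$ to chunk $i$. Raising W boosts goal-related chunks relative to the background of unrelated facts, which is what drives faster, more accurate retrieval in MODS.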
Estimating W • Model of MODS task is fit to individual’s MODS performance by varying W • Best fitting value of W is taken as estimate
Estimating PM • For simplicity, we estimated a combined PM parameter directly from each individual’s perceptual/motor task performance. • This PM parameter was then used to scale the timing of the target task’s perceptual-motor productions.
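A minimal sketch of that scaling, assuming PM is a multiplicative factor with 1.0 meaning average speed (the action names and default times below are illustrative, not ACT-R's actual values):

```python
# Illustrative only: scale the target task's perceptual-motor step times by PM,
# leaving retrieval latencies to be governed by W.
DEFAULT_ACTION_TIMES = {"shift_attention": 0.185, "move_mouse": 0.40, "click": 0.15}

def scaled_action_time(action, PM):
    """PM < 1.0 means faster than average; PM > 1.0 means slower."""
    return PM * DEFAULT_ACTION_TIMES[action]
```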
Joint Distribution of W and P/M • W and P/M are tapping distinct characteristics
ACT-R & Individual Differences • Each individual has his/her own parameter values: P1, P2, P3, …; M1, M2, M3, …; W1, W2, W3, …
Specifics of our Approach • Estimate each individual’s processing parameters • Measure individuals’ performance on modified digit span, spatial span, perceptual/motor speed • Using models of these tasks, estimate participant’s W, P, M • Build/refine model of air traffic control task (AMBR) • Select global parameters for AMBR model • Plug in individuals’ parameters to predict performance across different AMBR scenarios
AMBR: Air Traffic Control Task • Complex and dynamic task • Spatial and verbal aspects • Multi-tasking • Testbed for cognitive modeling architectures
AMBR Task (AC = aircraft, ATC = air traffic controller) • As ATC, you communicate with AC and other ATCs to handle all AC in your airspace • Six commands with different triggers: • First ACCEPT, then WELCOME incoming AC (these two separated by a short interval) • First TRANSFER, then order a CONTACT message from outgoing AC (these two separated by a short interval) • Decide to OK or REJECT requests for speed increase • When a command is not handled before an AC reaches the zone boundary, this is a HOLD (error)
Issuing an AMBR Command • Text message or radar cues a particular action • Click on Command Button • Click on Aircraft (in radar screen) • Click on Air Traffic Controller (if necessary) • Click on SEND Button
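The fixed click sequence can be summarized in a small sketch (a hypothetical representation for exposition, not the task software's interface code):

```python
def issue_command(command, aircraft, atc=None):
    """Return the ordered clicks for one AMBR command."""
    clicks = [command, aircraft]      # e.g., "ACCEPT", then AC "T6" on the radar
    if atc is not None:               # ATC click only for commands that need it
        clicks.append(atc)            # e.g., "EAST"
    clicks.append("SEND")
    return clicks

# issue_command("ACCEPT", "T6", atc="EAST") -> ['ACCEPT', 'T6', 'EAST', 'SEND']
```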
General Methods • Empirical Methods • Day 1: Collect MODS and P/M data and train on AMBR plus AMBR practice • Day 2: Review AMBR instructions, battery of AMBR scenarios • Modeling Methods • Use MODS & PM data to estimate W and PM for each subject • Plug individual W and PM values into AMBR model • Compare individuals’ AMBR performance with model predictions
Experiments 1 & 2 • AMBR Scenario Design • Experiment 1: alternating 5 easy, 5 hard • Experiment 2: 9 scenarios of varying difficulty • AMBR Dependent Measures • Total time to handle each command • Number of hold errors
Off-the-shelf ACT-R Model of AMBR • Scan for something to do: Radar, Left, Right, Bottom text windows • When an action cue is noticed, determine if it has been handled or not: scan/remember • If the cue has not been handled, click command, AC, [ATC], SEND • Resume scanning
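A rough Python rendering of that control structure (hypothetical; the actual model is a set of ACT-R productions, not this code):

```python
from collections import namedtuple

# Hypothetical stand-in for the model's scan-and-handle cycle.
Cue = namedtuple("Cue", "command aircraft atc")
WINDOWS = ["radar", "left_text", "right_text", "bottom_text"]

def run_cycle(pending_cues, handled):
    """pending_cues: dict mapping window -> list of Cue; returns the clicks issued."""
    clicks = []
    for window in WINDOWS:                         # scan windows in fixed order
        for cue in pending_cues.get(window, []):   # an action cue is noticed
            if cue in handled:                     # already handled? (scan/remember)
                continue
            clicks += [cue.command, cue.aircraft]  # click command, then AC
            if cue.atc:                            # [ATC] only when required
                clicks.append(cue.atc)
            clicks.append("SEND")
            handled.add(cue)
    return clicks                                  # then resume scanning

# run_cycle({"bottom_text": [Cue("ACCEPT", "T6", "EAST")]}, set())
# -> ['ACCEPT', 'T6', 'EAST', 'SEND']
```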
Model Predictions • Prediction of whether a subject commits an error in a scenario, based on scenario details and individual’s W & P/M
Individual Differences’ Impact on Hold Errors • Hold errors only weakly dependent on W, more strongly on P/M and scenario difficulty [plot: # Hold Errors vs. Parameter Value]
Scenario Difficulty [figure: difficulty by scenario]
Mean Errors by Scenario [figure: mean errors by scenario]
Be Careful What (Dependent Measure) You Model • Error data too coarse to constrain model • Even total RT-per-command data insufficient • Model predicts that scanning strategy plays a large role in performance • This is consistent with participant reports: participants may be doing any combination of visual search and memory retrieval
Observable Behaviors (stochastic variation at the single-action level is part of both subject and model behavior)
Subject: T 0.0 Cue: Accept T6?; T 3.6 ACCEPT button; T 5.9 AC “T6”; T 6.7 ATC “EAST”; T 7.7 SEND button
Model: T 0.0 Cue: Accept T6?; T 3.7 ACCEPT button; T 5.7 AC “T6”; T 7.0 ATC “EAST”; T 8.2 SEND button
The Details Are Inside
Model I/O: T 0.0 Cue: Accept T6?; T 3.7 ACCEPT button; T 5.7 AC “T6”; T 7.0 ATC “EAST”; T 8.2 SEND button
Model Trace: T 1.5 Notice cue; T 2.5 Subgoal task; T 3.7 Mouse click; T 3.8 Start AC search; T 4.9 Find AC; T 5.7 Mouse click; T 7.0 Mouse click; T 8.2 Mouse click
Conclusion thus far… • Visual search vs. memory strategies trade off in final performance => even when modeling a complex task, coarse dependent measures (accuracy, total RT) hide important details • Previous AMBR model fit group data well • Only by seeking extra constraint of modeling individual participants were important gaps in model fidelity revealed
Modifications for Experiment 3 • Use more fine-grained measures: action RT & clicks • Modify the ATC task to increase memory demand • More interesting for our purposes • More realistic • Lengthen scenarios so the same planes stay in play • Hide AC names until clicked, then reveal only after a delay • Use model to bracket appropriate difficulty level
Raw Characteristics of Data (Experiment 3) • Action RT 12.1 sec, Holds 3.3 per subject • Action RT correlates with W (r = -0.314) and PM (r = 0.485) • Holds correlate with W (r = -0.444) and PM (r = 0.508)
Model Modifications • Search can give not only the answer sought (a specific AC’s location) but also an additional rehearsal of that information • In slack times, a possible strategy is studying the radar screen to rehearse AC names (called “exploratory clicks”)
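One way to picture the rehearsal effect is through ACT-R-style base-level learning, where each successful search or exploratory click adds a presentation that strengthens the AC name's trace (a sketch with illustrative numbers, not the model's actual code):

```python
import math

def base_level_activation(presentation_times, now, d=0.5):
    """ACT-R base-level learning: B = ln(sum over presentations of (now - t)^-d).
    More (and more recent) rehearsals -> higher activation -> the model is more
    likely to remember the AC's location than to re-search the radar."""
    return math.log(sum((now - t) ** (-d) for t in presentation_times))

print(base_level_activation([1.0, 8.0, 15.0], now=20.0))   # three rehearsals
print(base_level_activation([1.0], now=20.0))              # only one rehearsal
```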
Model Predicts Hold Errors • Predicts errors per subject, r = 0.81 • Hold errors depend more on W than in the previous version of the task, but are still mostly driven by PM and scenario difficulty • Move to modeling more fine-grained aspects of data…
W, P/M affect RT click by click • Set W-P/M parameters in model corresponding to participants (e.g., hi-hi & lo-lo) • Run model to produce RT predictions click by click (for 2 commands: Accept and Contact) [panels: Hi-Hi Model & Subject; Lo-Lo Model & Subject]
Conclusion thus far • Modeling more fine-grained measures required task and model modifications, but this produced individual participant predictions that were very promising • Clicking on the correct AC the first time ranges from 69% to 96% • Akin to remember vs. scan strategies • Higher number => more (accurate) remembering • This detailed aspect of performance relates to W
Theoretical Interlude: Spatial vs. Verbal WM • Our working assumption parsimoniously posits a single source activation parameter, W • W modulates the degree to which goal-relevant facts are activated above the sea of unrelated facts • …regardless of spatial/verbal representation • This perspective still allows for spatial/verbal distinctions in performance but explains them as a function of differences in spatial/verbal skills, etc.