A Transition Differential Summation Model for Predicting Two-choice Distributions
Shaun Foley, Jon Wetzel; led by Brett Mensh
Transition Differential Summation
[Figure: example choice history 0 1 0 1 1 0 …, observed values .35 .28 .27 .38 .36 .35 .28 …, the transitions between successive choices (0->1, 1->0, 0->1, 1->1, 1->0), and the corresponding differentials -.07, -.01, -.01, .11, -.02]
1.) Load the global TDS vector, one entry per transition type (0->0, 0->1, 1->0, 1->1), scaled by oldscale.
2.) Scale and add recent history: each recent transition contributes d*rscale + rbase to its transition type.
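The slides name the steps but do not give explicit equations, so the Python sketch below is one plausible reading of steps 1 and 2: a global TDS vector holds one running value per transition type, it is scaled by oldscale, and each of the n most recent transitions adds d*rscale + rbase to the value of its transition type. All names other than the slide's own parameters (n, oldscale, rscale, rbase) are hypothetical.

```python
# Minimal sketch of steps 1-2 (an assumed reading of the slides).

TRANSITIONS = ["0->0", "0->1", "1->0", "1->1"]

def tds_values(global_tds, recent_choices, recent_deltas,
               n, oldscale, rscale, rbase):
    """Combine the scaled global TDS vector with the n most recent transitions.

    global_tds     -- dict mapping each transition type to its long-run value
    recent_choices -- list of 0/1 choices, oldest first (length >= n + 1)
    recent_deltas  -- reward differential d observed at each transition
    """
    # 1.) Load the global TDS vector, scaled by oldscale (past memory).
    values = {t: oldscale * global_tds[t] for t in TRANSITIONS}

    # 2.) Scale and add recent history: each of the last n transitions
    #     contributes d * rscale + rbase to its transition type.
    for prev, cur, d in zip(recent_choices[-n - 1:-1],
                            recent_choices[-n:],
                            recent_deltas[-n:]):
        values[f"{prev}->{cur}"] += d * rscale + rbase
    return values
```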
Transition Differential Summation (continued)
[Figure: same example choice history and differentials as above]
3.) Examine the relevant values A and B and take their difference X = B - A.
4.) Calculate Pr[choose 1](X). [Plot: Pr[choose 1] as a function of X, saturating at 1 and 0 beyond ±bigdiff.]
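Steps 3 and 4 appear only as a plot pinned at 0 and 1 beyond ±bigdiff, so the mapping below is an assumption: a linear ramp in X = B - A that saturates at ±bigdiff, where A and B are taken to be the combined values (from the sketch above) of the two transitions available from the most recent choice. Everything except bigdiff is hypothetical.

```python
# Sketch of steps 3-4, building on the values dict returned by tds_values().

def prob_choose_1(values, last_choice, bigdiff):
    """Pr[choose 1] from the difference of the two relevant TDS values."""
    # 3.) Examine relevant values: from the current choice, A is the value
    #     of moving to 0 and B the value of moving to 1; X = B - A.
    a = values[f"{last_choice}->0"]
    b = values[f"{last_choice}->1"]
    x = b - a

    # 4.) Map X to a probability; bigdiff sets how large a difference must
    #     be before the model commits fully (cautiousness).
    p = 0.5 + x / (2.0 * bigdiff)
    return min(1.0, max(0.0, p))
```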
Transition Differential Summation: Parameters
• r - initial exploration
• n - recent memory
• oldscale - past memory
• rscale - sensitivity to reward delta
• rbase - resistance to change
• bigdiff - cautiousness
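For reference, the six parameters can be bundled in a small container. The comments repeat the slide's glosses; the parenthetical notes and types reflect my reading of the earlier steps, and the container itself is purely illustrative.

```python
from dataclasses import dataclass

# Illustrative container for the six free parameters named on the slide.
@dataclass
class TDSParams:
    r: int            # initial exploration
    n: int            # recent memory (transitions treated as "recent")
    oldscale: float   # past memory (weight on the global TDS vector)
    rscale: float     # sensitivity to reward delta
    rbase: float      # resistance to change
    bigdiff: float    # cautiousness (saturation point of Pr[choose 1])

# Example values: the dataset 1 row from the training table below.
params = TDSParams(r=10, n=6, oldscale=0.0, rscale=0.26, rbase=0.07, bigdiff=0.35)
```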
Results: The Best Parameters
• Picked 170 parameter combinations.
• Trained the model on pilot 1 data for each of the 6 methods.
• Selected the top 5 parameter sets based on mean correctness.
• Used a hill-climbing algorithm to further improve the top 5 (see the sketch after this list).
Allocation is scored every 10 trials. Example: actual choice history …0 1 0 1 1 0 1 0 1…, predicted allocation .68, actual allocation .60, error .08.
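The slides describe this search procedure but not its code. The sketch below assumes "mean correctness" is one minus the mean block error, with allocation compared in blocks of 10 trials as in the example above; predict_allocation and perturb are hypothetical helpers standing in for the model's prediction and for a small random tweak to one parameter.

```python
# Sketch of the evaluation metric and hill-climbing step (assumed reading).

def mean_correctness(predict_allocation, choices, params, block=10):
    """1 - mean |predicted allocation - actual allocation| over 10-trial blocks."""
    errors = []
    for start in range(0, len(choices) - block + 1, block):
        actual = sum(choices[start:start + block]) / block
        predicted = predict_allocation(choices[:start], params, block)
        errors.append(abs(predicted - actual))
    return 1.0 - sum(errors) / len(errors)

def hill_climb(predict_allocation, choices, params, perturb, steps=200):
    """Greedy hill climbing from one of the top-5 parameter sets."""
    best = params
    best_score = mean_correctness(predict_allocation, choices, best)
    for _ in range(steps):
        candidate = perturb(best)      # small random tweak to one parameter
        score = mean_correctness(predict_allocation, choices, candidate)
        if score > best_score:         # keep only improving moves
            best, best_score = candidate, score
    return best, best_score
```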
Results: Final Training Parameters

Dataset  M.C.    r   n   oldscale  rscale  rbase  bigdiff
1        .8885   10  6   0.0       .26     .07    .35
1r       .8857   10  3   0.0       .92     .11    .20
2        .9266   10  4   0.0       .90     .17    .36
2r       .9447   10  6   0.0       .46     .17    .56
3        .8509   15  6   .01       .66     .06    .43
3r       .8577   15  4   .01       .96     .07    .35

(M.C. = mean correctness)
Results: Parameter Analysis, Method Trends (Method 1 vs. Method 3)
The model says method 3 is "harder":
• Requires more initial exploration (↑r)
• More caution in decision making (↑bigdiff)
• More sensitive to Δreward (↑rscale)
• Past choices matter more (↑oldscale)
Results: Parameter Analysis, Method Trends (Forward vs. Reverse)
The model says forward is "harder":
• Harder to become sure of decisions (↑bigdiff)
• More dynamic choices (↓rbase)
• More choices taken into account as recent (↑n)
• Larger increase in sensitivity to Δreward in M1→M3 than in M3→M1, which is consistent with going from easy to hard being easier than going from hard to easy.
Results: Best Parameters on Pilot 2
Conclusions
• The transition differential summation model performed acceptably.
• A small number of recent trials almost entirely determines actions.
• Changes in reward affect choice more than overall reward.