Attention • Outline: • Overview • bottom-up attention • top-down attention • physiology of attention and awareness • inattention and change blindness
Credits: major sources of material, including figures and slides were: • Itti and Koch. Computational Modeling of Visual Attention. Nature Reviews Neuroscience, 2001. • Sprague, Ballard, and Robinson. Modeling Attention with Embodied Visual Behaviors, 2005. • Fred Hamker. A dynamic model of how feature cues guide spatial attention. Vision Research, 2004. • Frank Tong. Primary Visual Cortex and Visual Awareness. Nature Reviews Neuroscience, 2003. • and various resources on the WWW
How to think about attention? • William James: “Everyone knows what attention is” • overt vs. covert attention • attention as a filter • attention as enhancing the signal produced by a stimulus • tuning the system to a specific stimulus attribute • attention as a spotlight • location-, feature-, object-, modality-, task-based • attention as binding together features • attention as something that speeds up processing • attention as distributed competition
Important Questions • what is affected by attention? • where in the brain do we see differences between attended/unattended conditions? • what controls attention? • how many things can you attend to? • is attention a useful notion at all? Or is it too blunt and unspecific?
Points to note: • saliency of a location depends on its surround • integration into a single saliency map (where?) • inhibition of return is important • how are things updated across eye movements? • purely bottom-up models provide a very poor fit to most experimental data
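These points describe the Itti & Koch bottom-up pipeline: center-surround saliency, integration into a single map, winner-take-all selection, and inhibition of return. A minimal sketch in Python, assuming toy feature channels; the Gaussian scales, inhibition radius, and decay rate are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(feature_map, sigma_c=1.0, sigma_s=5.0):
    # saliency of a location depends on its surround:
    # difference between a fine (center) and a coarse (surround) blur
    center = gaussian_filter(feature_map, sigma_c)
    surround = gaussian_filter(feature_map, sigma_s)
    return np.abs(center - surround)

def next_fixation(feature_maps, ior, ior_decay=0.9, ior_radius=3):
    # integrate the channels into a single saliency map, pick the
    # winner-take-all maximum, then apply inhibition of return
    saliency = sum(center_surround(f) for f in feature_maps)
    saliency = saliency * (1.0 - ior)   # dampen recently visited locations
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    ior *= ior_decay                    # old inhibition fades across fixations
    ior[max(0, y - ior_radius):y + ior_radius + 1,
        max(0, x - ior_radius):x + ior_radius + 1] = 1.0
    return (y, x), ior

# usage with three illustrative feature channels
maps = [np.random.rand(64, 64) for _ in range(3)]
ior = np.zeros((64, 64))
for _ in range(5):
    fix, ior = next_fixation(maps, ior)
    print("fixation at", fix)
```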
Looking to maximize visual reward • infants may be primarily driven by visual saliency • at about a year of age they start with gaze following: “looking where somebody else is looking” • foundational skill, important for learning language, ... • does not emerge normally in certain developmental disorders • theory: infants learn to exploit the caregiver’s direction of gaze as a cue to where interesting things are • G. Deák, R. Flom, and A. Pick (18- and 12-month-olds)
Carlson & Triesch (2003): • discrete regions of space (N = 10) • interesting object/event in one location, sometimes moving randomly • caregiver (CG) looks at the object with probability p_valid • Infant: • can look at CG or any region of space • only sees what is in the region it looks at • decides when and where to shift gaze
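A minimal sketch of this setting as a simulation environment, assuming Python; the default parameter values and the object-movement probability p_move are illustrative assumptions, not the paper’s values:

```python
import random

class GazeWorld:
    # minimal sketch of the Carlson & Triesch (2003) setting
    def __init__(self, n_regions=10, p_valid=0.75, p_move=0.1):
        self.n = n_regions
        self.p_valid = p_valid              # prob. CG looks at the object
        self.p_move = p_move                # assumed relocation probability
        self.object_loc = random.randrange(self.n)
        self.cg_gaze = self._cg_gaze()

    def _cg_gaze(self):
        # CG looks at the object with probability p_valid,
        # otherwise at a random region
        if random.random() < self.p_valid:
            return self.object_loc
        return random.randrange(self.n)

    def step(self):
        # the object sometimes moves randomly; the CG then re-orients
        if random.random() < self.p_move:
            self.object_loc = random.randrange(self.n)
            self.cg_gaze = self._cg_gaze()

    def observe(self, gaze):
        # the infant only sees what is in the region it looks at;
        # gaze is either "CG" or a region index
        object_in_view = (gaze == self.object_loc)
        cg_head_pose = self.cg_gaze if gaze == "CG" else None
        return object_in_view, cg_head_pose
```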
Overview of Infant Model • Infant model is a simple two-agent system (Findlay & Walker, 1999): • “when agent” decides when to shift gaze (inputs: fixation time, object in view, instantaneous habituating reward; output: shift gaze? yes/no) • “where agent” decides where to look (inputs: CG in view, CG head pose; output: new gaze location)
Infant Model Details • Habituation: the reward for looking at an object decreases over time (β: habituation rate, h_fix(0): habituation level at the beginning of fixation, t: time since start of fixation) • Agents learn with the tabular SARSA algorithm (Q: state-action value, α: learning rate, γ: discount factor): Q(s_t, a_t) ← Q(s_t, a_t) + α[r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t)], where the bracketed term is the TD error • Softmax action selection balances exploration and exploitation (τ > 0: temperature)
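A sketch of the learner, assuming Python: tabular SARSA with softmax action selection, implementing the update rule above. The exponential form of the habituation function is an assumption; the slide names β, h_fix(0), and t, but the formula itself did not survive:

```python
import math
import random
from collections import defaultdict

class SarsaAgent:
    # tabular SARSA with softmax action selection, as on the slide
    def __init__(self, actions, alpha=0.1, gamma=0.9, tau=0.2):
        self.Q = defaultdict(float)          # state-action values
        self.actions = actions
        self.alpha, self.gamma, self.tau = alpha, gamma, tau

    def act(self, s):
        # softmax balances exploration and exploitation (tau > 0)
        prefs = [math.exp(self.Q[(s, a)] / self.tau) for a in self.actions]
        total = sum(prefs)
        return random.choices(self.actions, weights=[p / total for p in prefs])[0]

    def update(self, s, a, r, s_next, a_next):
        # Q(s,a) <- Q(s,a) + alpha * [r + gamma * Q(s',a') - Q(s,a)]
        td_error = r + self.gamma * self.Q[(s_next, a_next)] - self.Q[(s, a)]
        self.Q[(s, a)] += self.alpha * td_error

def habituated_reward(h_fix0, beta, t):
    # assumed exponential decay of the fixation reward over time;
    # a plausible stand-in, not the paper's confirmed formula
    return h_fix0 * math.exp(-beta * t)
```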
Simulation Results • Caregiver Index (CGI): ratio of gaze shifts directed at the CG • Gaze Following Index (GFI): ratio of gaze shifts following the CG’s line of regard • [plot: CGI and GFI over learning time; error bars are standard deviations over 10 runs] • the basic setup is indeed sufficient for gaze following to emerge • the model first learns to look at the CG, then learns gaze following
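Both indices are simple ratios over logged gaze shifts; a hypothetical helper, assuming a log format where each entry records the shift target and the CG’s gaze at that moment:

```python
def gaze_indices(gaze_shifts, cg_gaze_log):
    # gaze_shifts: list of gaze-shift targets ("CG" or region index);
    # cg_gaze_log: region the CG was looking at when each shift occurred
    n = len(gaze_shifts)
    cgi = sum(g == "CG" for g in gaze_shifts) / n               # Caregiver Index
    gfi = sum(g == cg for g, cg in zip(gaze_shifts, cg_gaze_log)) / n  # Gaze Following Index
    return cgi, gfi
```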
Variation of Reward Structure • no learning if the things the CG looks at are not rewarding • no learning if the CG is aversive (autism?) • learning is poor if the CG is too rewarding (Williams syndrome?) • [plot: time until GFI > 0.3 for each reward structure]
Scheduling Visual Routines • Sprague, Ballard, and Robinson (2005): VR platform to study visual attention in complex behaviors where several goals have to be negotiated (“Walter”) • rewards are coupled to successful completion of behaviors
Maximum Q values and best actions (panels: obstacle avoidance, sidewalk following, litter pickup)
Growing uncertainty about state unless you look: • eye gaze is controlled by the behavior that experiences the biggest loss due to uncertain state information (see the sketch below)
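A toy sketch of this scheduling rule: each behavior’s state uncertainty grows while it is unattended, and gaze is granted to the behavior with the largest expected loss. The scalar uncertainty and the growth rates below are illustrative stand-ins for the paper’s Kalman-filter and Q-value machinery:

```python
class Behavior:
    # one visual routine of "Walter"
    def __init__(self, name, growth):
        self.name = name
        self.variance = 0.0     # uncertainty about this behavior's state
        self.growth = growth    # how fast uncertainty grows when unattended

    def expected_loss(self):
        # stand-in: Sprague et al. estimate the expected return lost by
        # acting on uncertain state from Q values over sampled states
        return self.variance

    def tick(self, attended):
        if attended:
            self.variance = 0.0           # a look resets the uncertainty
        else:
            self.variance += self.growth  # uncertainty grows unless you look

behaviors = [Behavior("obstacle avoidance", 0.3),
             Behavior("sidewalk following", 0.1),
             Behavior("litter pickup", 0.2)]

for step in range(6):
    # gaze goes to the behavior that stands to lose the most
    winner = max(behaviors, key=lambda b: b.expected_loss())
    for b in behaviors:
        b.tick(b is winner)
    print(step, "->", winner.name)
```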
Comparing Walter to human subjects in the same task: how often does a behavior control gaze in the “on sidewalk” context?
Comparing Walter to human subjects in the same task: which behavior controls eye gaze across different contexts?
Motter (1994) Modulation of V4 activity
Hamker (2004) Model
(a) switching from red to green; (b) spatial effects due to feedback from premotor areas
Model vs. Experiment (panels: experiment, model)
Detection of Stimuli and V1 Activity • Super, Spekreijse, and Lamme (2001): • monkey’s task: detect a texture-defined region and saccade to it • record from an orientation-selective cell in V1 • how is the cell’s response correlated with the monkey’s percept?
enhancement of the late (80–100 ms) response only if the target is actually detected by the monkey (“seen” vs. “not seen” trials)