KCP Lecture 2: Perception and Attention
Prof. dr. Jaap Murre
University of Maastricht / University of Amsterdam
jaap@murre.com
http://neuromod.org
Overview
• Neural networks for recognition
  • recognition as constraint satisfaction
  • … and as finding deep attractors
• Some basic findings in vision
• Perception, lateralization, and consciousness
Neural Networks: recognition, constraint satisfaction, attractor networks, and the Hebb learning rule
Recognition of a letter is a process of constraint satisfaction
[Figure: an interactive network with word nodes LAP, CAP, CAB connected to position-specific letter nodes L.., C.., .A., ..P, ..B]
i. Only one word can occur at a given position
ii. Only one letter can occur at a given position
iii. A letter-on-a-position activates a word
iv. A feature-on-a-position activates a letter
(each constraint is illustrated on the word/letter network above)
The final interpretation must satisfy many constraints. In the recognition of letters and words:
i. Only one word can occur at a given position
ii. Only one letter can occur at a given position
iii. A letter-on-a-position activates a word
iv. A feature-on-a-position activates a letter
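These constraints map directly onto signed connections. A minimal sketch in Python, with illustrative node names and weight values (assumptions, not taken from the lecture): inhibition within the word layer and within each letter position, excitation between a word and its consistent letters (a feature layer, for constraint iv, would attach below the letters in the same way).

```python
# Illustrative encoding of constraints i-iv as signed connections.
# Node names and weight values are assumptions chosen for the sketch.
words = ["LAP", "CAP", "CAB"]
letters = [("L", 0), ("C", 0), ("A", 1), ("P", 2), ("B", 2)]  # (letter, position)

nodes = words + [f"{l}@{p}" for l, p in letters]
idx = {name: i for i, name in enumerate(nodes)}
n = len(nodes)
W = [[0.0] * n for _ in range(n)]

def connect(a, b, w):
    W[idx[a]][idx[b]] = w
    W[idx[b]][idx[a]] = w  # symmetric, as in a Hopfield network

# i. Only one word per position: mutual inhibition between word nodes
for w1 in words:
    for w2 in words:
        if w1 < w2:
            connect(w1, w2, -1.0)

# ii. Only one letter per position: inhibition between letters sharing a position
for l1, p1 in letters:
    for l2, p2 in letters:
        if p1 == p2 and l1 < l2:
            connect(f"{l1}@{p1}", f"{l2}@{p2}", -1.0)

# iii. A letter-on-a-position excites every word containing it
# (constraint iv would connect features to letters the same way)
for word in words:
    for l, p in letters:
        if word[p] == l:
            connect(word, f"{l}@{p}", +1.0)

for name, row in zip(nodes, W):
    print(name, row)
```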
Given a net input net_j, find a_j so that -net_j·a_j is minimized:
• If net_j is positive, set a_j to 1
• If net_j is negative, set a_j to -1
• If net_j is zero, don’t care (leave a_j as is)
This activation rule ensures that the energy never increases; hence the energy will eventually reach a minimum value.
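A sketch of this rule in code, assuming ±1 activations, a symmetric weight matrix with zero diagonal, and the Hopfield energy E = -1/2 Σ_ij w_ij a_i a_j; the assertion checks that a single update never raises the energy.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(W, a):
    # Hopfield energy: E = -1/2 * sum_ij w_ij a_i a_j
    return -0.5 * a @ W @ a

def update(W, a, j):
    # Set a_j so that -net_j * a_j is minimized; leave it if net_j == 0
    net = W[j] @ a
    if net > 0:
        a[j] = 1
    elif net < 0:
        a[j] = -1

# Random symmetric weights with zero diagonal (an assumption for the demo)
n = 10
W = rng.normal(size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

a = rng.choice([-1, 1], size=n).astype(float)
for step in range(50):
    j = rng.integers(n)
    e_before = energy(W, a)
    update(W, a, j)
    assert energy(W, a) <= e_before + 1e-12  # the energy never increases
print("final energy:", energy(W, a))
```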
Attractor
• An attractor is a stationary network state (a configuration of activation values)
• It is a state in which the energy cannot be lowered any further by flipping just one activation value
• It may still be possible to reach a deeper attractor by flipping many nodes at once
• Conclusion: the Hopfield rule does not guarantee that an absolute energy minimum will be reached
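This can be checked by brute force on a tiny network. In the sketch below the weights are an assumption chosen for the demo: two patterns stored with the Hebb rule, the first weighted more strongly, so the weaker pattern survives as an attractor that is a local, but not the global, energy minimum.

```python
import itertools
import numpy as np

# Hebb-rule weights storing two patterns; the first is weighted more
# strongly (an assumption for this demo), so the second becomes a
# local, but not global, energy minimum.
p = np.array([1., 1., 1., 1.])
q = np.array([1., 1., -1., -1.])
W = 2 * np.outer(p, p) + np.outer(q, q)
np.fill_diagonal(W, 0.0)

def energy(a):
    return -0.5 * a @ W @ a

def single_flip_stable(a):
    # An attractor: no single flip lowers the energy
    e = energy(a)
    for j in range(len(a)):
        b = a.copy()
        b[j] = -b[j]
        if energy(b) < e:
            return False
    return True

states = [np.array(s, dtype=float) for s in itertools.product([-1, 1], repeat=4)]
e_min = min(energy(a) for a in states)
for a in states:
    if single_flip_stable(a):
        kind = "global" if np.isclose(energy(a), e_min) else "local"
        print(a, energy(a), kind)   # +-p come out global, +-q local
```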
[Figure: energy landscape with attractors at a local minimum and a deeper global minimum]
Example: the 8-queens problem
• Place 8 queens on a chess board such that no queen can take another
• This implies three constraints:
  • at most 1 queen per column
  • at most 1 queen per row
  • at most 1 queen on any diagonal
• This encoding of the constraints ensures that the attractors of the network correspond to valid (conflict-free) placements
The constraints are satisfied by inhibitory connections
[Figure: a board cell with inhibitory connections along its column, row, and both diagonals]
Problem: how do we ensure that exactly 8 nodes are 1?
• A term may be added to the activation rule to control for this
• Binary nodes with a bias may be used (see the sketch below)
• It is also possible to use continuous-valued nodes (e.g., between 0 and 1) with Hopfield networks
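A sketch combining the last two slides, with binary nodes, inhibitory connections between conflicting squares, and a positive bias. The weight and bias values (-2 and +1) are assumptions chosen so that a cell only switches on when it has no active conflicts; the attractors are then conflict-free placements, but often with fewer than 8 queens (local minima), hence the random restarts.

```python
import random

random.seed(1)
N = 8
cells = [(r, c) for r in range(N) for c in range(N)]

def conflict(a, b):
    # Same row, column, or diagonal: these pairs get inhibitory connections
    (r1, c1), (r2, c2) = a, b
    return a != b and (r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2))

# Inhibitory weight between conflicting cells; positive bias per cell
W = {(a, b): -2.0 for a in cells for b in cells if conflict(a, b)}
BIAS = 1.0

def settle(act):
    # Asynchronous updates until no single flip lowers the energy
    changed = True
    while changed:
        changed = False
        for cell in random.sample(cells, len(cells)):
            net = sum(W[(cell, other)] for other in cells
                      if act[other] and conflict(cell, other)) + BIAS
            new = 1 if net > 0 else 0
            if new != act[cell]:
                act[cell] = new
                changed = True
    return act

# Attractors are conflict-free but may hold fewer than 8 queens
# (local minima), so restart and keep the best placement found
best = []
for _ in range(2000):
    act = settle({cell: random.randint(0, 1) for cell in cells})
    queens = [c for c in cells if act[c]]
    if len(queens) > len(best):
        best = queens
    if len(best) == N:
        break
print(len(best), sorted(best))
```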
Vision: Change Blindness
The ‘what’ and ‘where’ pathways from the occipital cortex
[Figure: brain diagram labeling the ‘where’ (dorsal) and ‘what’ (ventral) pathways]
The code of the brain
• Extremely localized coding: 0000000000000000010000000000000000
• Semi-distributed or sparse coding: 0000100000100000010000000010000000
• Distributed coding: 1010111000101100110101000110111000
Sparse coding
• forms a good middle ground between fully distributed and extremely localized coding
• is biologically plausible
• is computationally sound, in that it allows very large numbers of representations with a small number of units
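A back-of-the-envelope comparison using the 34-unit strings from the previous slide: a localized code offers one representation per unit, a sparse code offers “n choose k” of them, and a fully distributed code offers every binary pattern.

```python
from math import comb

localized = "0000000000000000010000000000000000"
sparse    = "0000100000100000010000000010000000"

n = len(sparse)         # number of units
k = sparse.count("1")   # active units in the sparse code above
print("localized:", n)                       # one representation per unit
print(f"sparse ({k} of {n}):", comb(n, k))   # choose k active units
print("fully distributed:", 2 ** n)          # every binary pattern
```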
Anne Treisman’s model of feature perception and integration
• The different feature maps are sparsely activated
• Different maps are used, rather than a combined map
• Co-activation is used to code for conjunctions
• Perceptual confusion may arise
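A toy sketch of conjunction coding by co-activation (the map names and stimuli are assumptions, not Treisman’s actual model): with two simultaneous objects, co-activation alone no longer determines which color goes with which shape, which is one way perceptual confusion (illusory conjunctions) can arise.

```python
# Separate feature maps; a conjunction is coded only by which entries
# are co-active, not by a dedicated combined map.
color_map = {"red": 0, "green": 0}
shape_map = {"circle": 0, "square": 0}

# One object ("red circle"): co-activation is unambiguous
color_map["red"] = 1
shape_map["circle"] = 1

# A second simultaneous object ("green square")
color_map["green"] = 1
shape_map["square"] = 1

# Without binding features to locations, any active color may pair
# with any active shape, so illusory conjunctions become possible
active_colors = [c for c, a in color_map.items() if a]
active_shapes = [s for s, a in shape_map.items() if a]
readings = [(c, s) for c in active_colors for s in active_shapes]
print(readings)  # includes spurious ('red', 'square'), ('green', 'circle')
```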
Desimone’s study of V4 neurons (V4 is visual cortex just before inferotemporal cortex, IT)
Neurons in IT show evidence of ‘short-term memory’ for events
[Figure: human and monkey brains with the location of IT indicated]
• Delayed matching-to-sample task
• Many cells reduce their firing if the current stimulus matches the sample in memory
• Several (up to five) stimuli may intervene
• The more similar the current stimulus is to the stimulus in memory, the stronger the reduction in firing
The neural population response to a familiar stimulus first decreases after presentation of the ‘target’, then decreases further during the delay period, increases during early choice, and stabilizes about 100 ms before the saccade.
Reduced IT response and memory
• Priming causes a reduction of firing in IT
• This may reflect reduced competition
• The result is a sharpening of the population response
• This in turn leads to a sparser representation
Novelty filtering
• Desimone et al.: IT neurons function as ‘adaptive filters’. They give their best response to features to which they are sensitive but which they have not recently seen (cf. Barlow)
• This is a combination of familiarity and recency
• The reduction in firing occurs when the animal (or the neuron) becomes familiar with the stimulus
• This can be an effect of reduced competition
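A toy version of such an adaptive filter (the traces and constants are assumptions, not Desimone’s model): the response is largest for a stimulus that is neither familiar nor recent, and repetitions suppress it.

```python
# Toy novelty filter: the response drops with familiarity (a slow,
# accumulating trace) and recency (a fast-decaying trace).
# All constants are illustrative assumptions.
familiarity = {}   # slowly accumulating stimulus trace
recency = {}       # fast-decaying stimulus trace

def present(stim):
    for s in recency:
        recency[s] *= 0.5          # recency fades each time step
    f = familiarity.get(stim, 0.0)
    r = recency.get(stim, 0.0)
    response = 1.0 / (1.0 + f + r)  # best response: novel and not recent
    familiarity[stim] = f + 1.0
    recency[stim] = 1.0
    return response

for stim in ["A", "A", "B", "A", "C", "A"]:
    print(stim, round(present(stim), 2))  # repeats of A are suppressed
```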
There are several ways to investigate brain lateralization
• Split-brain patients
• Amytal testing
• Dichotic listening and other lateralized experimental procedures