Perception of Space
- Perception of 3D structure: static observer – stereo, motion parallax, geometric cues, etc.
- Representation of structure and location of objects: moving observer
- Need to interact with objects: aiming movements
- Need to get from one place to another in large-scale space: navigation
Moving observer: self-motion generates optic flow. The point of heading is indicated by the “focus of expansion” (FOE). Humans can locate the FOE within a few degrees (Warren). This is somewhat problematic if the subject is fixating off the FOE. Later: Simon Rushton – use of visual direction to control heading; Warren – both are used. Note Srinivasan – bees. Flow also influences walking speed. Flow is not necessary for estimating distance travelled: blind walking is very accurate (Loomis), using vestibular + proprioception/efference copy. These are usually highly correlated; a cue conflict induces recalibration, e.g. treadmill walking.
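Warren's few-degrees result has a simple computational counterpart: under pure observer translation, every flow vector points radially away from the FOE, so the FOE can be recovered as the least-squares intersection of the flow lines. A minimal sketch in Python/NumPy (hypothetical code for illustration, not Warren's method):

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate for a purely
    translational flow field. Each flow vector should be parallel to
    the displacement from the FOE: cross(v_i, f - p_i) = 0, giving one
    linear constraint per sample."""
    vx, vy = flows[:, 0], flows[:, 1]
    A = np.stack([-vy, vx], axis=1)              # -vy*fx + vx*fy = b
    b = -vy * points[:, 0] + vx * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow expanding from (3, -1), plus a little noise
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(200, 2))
flo = (pts - np.array([3.0, -1.0])) * 0.1 + rng.normal(0, 0.02, (200, 2))
print(estimate_foe(pts, flo))                    # ~ [ 3. -1.]
```

This estimate breaks down once eye rotation (fixating off the FOE) adds a rotational component to the flow, which is exactly why off-FOE fixation is problematic and why visual direction (Rushton) is a candidate alternative cue.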
Systematic distortions of perceived visual speed during walking enhance perceptual precision in the measurement of visual speed. Precision is more important than accuracy here (prism adaptation experiment – demo).
By slowing down the apparent rate of visual flow during self-motion, our visual system is able to perceive differences between actual and expected flow more precisely. This is useful in the control of action, e.g. intercepting a moving object while walking. Cf. Barlow – subtracting the mean improves discrimination. Previously, the apparent slowing of optic flow during walking had been interpreted as a suppression of flow to promote the perception of a stable world.
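A toy numerical version of Barlow's point (the noise model is assumed purely for illustration): if encoding noise grows with the encoded magnitude, then subtracting the expected flow before encoding makes a small object-generated difference far more discriminable.

```python
# Weber-like channel: noise grows with the encoded magnitude.
# (Hypothetical noise model, chosen only to illustrate the argument.)
def dprime(a, b, sigma=lambda x: 0.05 + 0.1 * abs(x)):
    """Discriminability of two encoded values."""
    return abs(a - b) / ((sigma(a) + sigma(b)) / 2)

expected, actual = 10.0, 10.5    # deg/s; the extra 0.5 comes from an object

print(dprime(expected, actual))            # raw encoding:      ~0.47
print(dprime(0.0, actual - expected))      # after subtraction: ~6.7
```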
Optic flow “parsing” – separating the flow generated by ego-motion from that generated by object motion.
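A minimal sketch of the computation, assuming the self-motion component is a pure radial expansion about a known FOE (a simplification for illustration, not a model of the visual system):

```python
import numpy as np

def parse_flow(points, retinal_flow, foe, expansion_rate):
    """Subtract the predicted self-motion (radial) component from the
    retinal flow; the residual is attributed to object motion."""
    self_motion_flow = (points - foe) * expansion_rate
    return retinal_flow - self_motion_flow

pts = np.array([[5.0, 2.0], [-4.0, 1.0]])
ego = pts * 0.1                               # expansion about the origin
obj = np.array([[0.3, 0.0], [0.0, 0.0]])      # only the first point moves
print(parse_flow(pts, ego + obj, np.array([0.0, 0.0]), 0.1))
# -> [[0.3 0. ]
#     [0.  0. ]]   object motion recovered from the combined flow
```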
Stationary observer: a cloud of limited-lifetime dots. Note this is a cue-conflict situation – there is no vestibular signal.
The effect of optic flow is mediated by global, not local, processing.
Simulate a ground plane plus sky. Background flow is discounted despite the lack of overlap. The magnitude of the effect in the Opposite condition is around 60%–70% of that in the Full condition.
Areas implicated? MSTd (note vestibular input). Also V7a, VIP, CSv. Note – Angelaki: Bayesian combination of visual and vestibular information for the evaluation of self-motion. Note – parsing occurs without vection.
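The Angelaki-style Bayesian combination is standardly modeled as reliability-weighted averaging of the two heading estimates; a sketch with made-up numbers:

```python
# Maximum-likelihood (reliability-weighted) cue combination: each cue
# is weighted by the inverse of its variance, and the combined
# estimate is more precise than either cue alone.
def combine(mu_vis, sigma_vis, mu_vest, sigma_vest):
    w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = (sigma_vis**2 * sigma_vest**2) / (sigma_vis**2 + sigma_vest**2)
    return mu, var**0.5

# Visual heading 4 deg (precise), vestibular 10 deg (noisy):
print(combine(4.0, 1.0, 10.0, 3.0))   # -> (4.6, ~0.95): pulled toward the
                                      #    reliable cue; lower sd than either
```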
Figure 1. Optic flow field and decomposition into self-motion and object-motion components. Fajen & Matthis: what is the contribution of non-visual information about self-motion to flow parsing? Proprioceptive, efferent commands, inertial (vestibular) cues. Fajen BR, Matthis JS (2013) Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion. PLoS ONE 8(2): e55446. doi:10.1371/journal.pone.0055446 http://www.plosone.org/article/info:doi/10.1371/journal.pone.0055446
Use an HMD in VR during normal walking. The subject judges whether he/she could pass through the aperture formed by two converging objects if walking as fast as possible.
On catch trials, manipulate the gain between actual walking speed and the speed of the flow field. This dissociates non-visual from visual cues; the retinal motion generated by the moving objects remains the same. Result: gain influences the judgment, but not as much as expected from the visual manipulation – only 20% of the predicted effect. Therefore a non-visual influence on self-motion perception contributes to flow parsing.
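A back-of-envelope reading of the 20% figure (illustrative numbers, not the paper's data): if flow speed = gain × walking speed, a purely visual parser should misattribute the entire gain-induced change to self-motion, whereas the observed shift was only about a fifth of that.

```python
gain = 1.5                  # catch trial: flow plays 50% faster than the walk
walking_speed = 1.2         # m/s, non-visual estimate of self-motion
visual_speed = gain * walking_speed              # 1.8 m/s seen, 1.2 m/s felt

predicted_shift = visual_speed - walking_speed   # 0.6 m/s if parsing used
observed_shift = 0.20 * predicted_shift          # vision alone; only ~0.12
print(predicted_shift, observed_shift)           # m/s worth was observed
```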
However, is flow parsing necessary for the interception of moving objects during self-motion? One can instead use a constant-bearing-angle strategy (see the sketch below). Unresolved. Calibration and prediction is another possibility.
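A minimal simulation of the constant-bearing-angle strategy (made-up speeds and geometry): the walker steers to null any drift of the target's bearing away from its initial value, and needs only the target's visual direction over time – no parsed flow field.

```python
import numpy as np

target = np.array([10.0, 5.0]); target_v = np.array([-0.5, 0.0])  # m, m/s
walker = np.array([0.0, 0.0]);  speed, dt, k = 1.0, 0.02, 4.0

def bearing(frm, to):
    d = to - frm
    return np.arctan2(d[1], d[0])

ref = bearing(walker, target)        # the bearing to hold constant
heading = ref                        # start by walking straight at the target
closest = np.inf
for _ in range(600):                 # 12 s of simulated walking
    heading += k * (bearing(walker, target) - ref) * dt   # null bearing drift
    walker += speed * np.array([np.cos(heading), np.sin(heading)]) * dt
    target += target_v * dt
    closest = min(closest, np.linalg.norm(target - walker))
print(f"closest approach: {closest:.2f} m")   # near zero: the paths converge
```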
Multiple object tracking http://ruccs.rutgers.edu/finstlab/MOT-movies/MOT-Occ-baseline.mov
Wolbers et al., NN 2008: Spatial Updating. Because egocentric object locations constantly change as we move through an environment, only continuous updating enables us to act effectively on objects or to avoid getting lost. This process has been termed spatial updating, and it is of major importance whenever objects go out of view, e.g. when people walk without vision in the dark. Successful spatial updating requires an observer to perceive the initial spatial positions of external objects and to create a corresponding internal representation. Subdivisions of the posterior parietal cortex code for spatial location in multiple body-based reference frames. Such locational cues form the basis of an egocentric map of the surrounding space that critically depends on the precuneus and connected inferior and superior parietal areas. It is unknown how the human brain continuously integrates the wealth of incoming information during complete body displacements.
Only the precuneus and the left dorsal precentral gyrus showed a combination of both main effects in the delay phase (middle): not only were BOLD responses elevated during updating as compared with static trials, but they also showed a linear increase with the number of objects. This indicates that both regions are sensitive to working memory load and to the presence of optic flow, suggesting a prominent role in spatial updating.
Humans can update up to four spatial positions during simulated self-motion. Pointing errors and reaction times increased with increasing working memory load and were elevated when self-motion cues were present. Activation in the precuneus and the dorsal precentral gyrus closely followed both experimental manipulations, suggesting their importance for the updating process. The dorsal precentral gyrus was involved only when a pointing response was required. Support for the existence of transient spatial maps in medial parietal cortex. Visual spatial updating is linked to the interplay of self-motion processing with the construction of updated representations in the precuneus and the context-dependent planning of potential motor actions in the dorsal precentral gyrus. When navigating in familiar environments or over longer durations, humans predominantly monitor changes in orientation and position using path integration and later reconstruct object locations from enduring allocentric representations; medial prefrontal cortex and hippocampus are involved in visual path integration (position-only updating). In contrast, spatial updating over short time scales in novel environments operates on transient, egocentric representations, in which the relationship between each object and the observer must be constantly updated as the observer moves (see the sketch below).
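What this constant egocentric updating amounts to computationally, as a minimal sketch (hypothetical code, not a claim about how the precuneus implements it): every translation and rotation of the observer must be applied in reverse to each stored body-centred object vector, including vectors for objects that are currently out of view.

```python
import numpy as np

def update_egocentric(objects, step, turn):
    """objects: (N, 2) body-centred object positions (x = ahead, y = left).
    step: forward translation this frame (m); turn: heading change (rad).
    Returns the positions re-expressed in the new body frame."""
    objects = objects - np.array([step, 0.0])   # the world slides backward
    c, s = np.cos(-turn), np.sin(-turn)         # and rotates opposite to
    R = np.array([[c, -s], [s, c]])             # the observer's turn
    return objects @ R.T

objs = np.array([[2.0, 1.0], [4.0, -3.0]])      # two remembered objects
print(update_egocentric(objs, step=1.0, turn=np.pi / 2).round(2))
# -> [[ 1. -1.]
#     [-3. -3.]]  after stepping 1 m forward and turning 90 deg left
```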
Electrocortical stimulation in the precuneus can induce the sensation of translational self-motion, and BOLD responses in this structure correlate with the subjective experience of self-motion. Updating of the stored egocentric object vectors is mediated by dense connections between area MST and the precuneus, providing the latter with crucial information about translational self-motion. The precuneus may contain a human homolog of the monkey parietal reach region. The storing and updating of egocentric representations of space, independent of potential actions, constitutes the most parsimonious interpretation of the activation in the precuneus. Dorsal premotor cortex: whenever subjects responded by pointing, the egocentric spatial map in the precuneus was transformed into corresponding vectors for pointing movements in PMd, which could be accomplished via direct connections between the two regions.
1. Top-down and bottom-up signals of attention control are not totally separate, and my question is where they are integrated. A paper by Thompson et al. (2005) shows that FEF has a salience map that topologically integrates those signals, as revealed by error signals. Do you know of any other regions that have similar or different mechanisms for integrating bottom-up and top-down signals?
2. When we discussed the difficulty of attaching labels or names to smells, I was thinking about the creation of language and how limited it is in terms of naming or describing smells. My thought was that this could possibly be attributed to 1) the wiring of language-related regions to regions that process visual features of objects throughout the development of language, and 2) the relatively few connections between olfactory structures in the brain and language areas, compared to those between visual and language areas. In the textbook the possible reasons described are 1) the processing of odors skips the thalamus, which is relevant for language processing, and 2) competition between odor and language processing for cognitive resources. If there is competition for resources, why would the study that used odor cues to reactivate declarative memory during sleep work (the one you mentioned in class)? Declarative memory must be encoded while the odors are presented during learning. And odor has also been used as a cue for memory of word lists (e.g. Herz, 1997).
You mentioned that humans have about 25 receptors for bitter taste, as opposed to only 3 receptors for sweet taste. Your/the book's explanation for this is essentially evolutionary: humans needed to protect themselves from bitter things that might harm them, like poisons. Is this the kind of explanation that scientists give to many things that remain unexplained ("just-so stories")? What is the difference in the coding of taste/smell from audition, vision, proprioception, and touch?