Reference Frames and Internal Models

Two main questions:
• How does the nervous system encode the position and velocity of the ball and the hand?
• How does the nervous system predict the movement of the ball and the forces required to catch it?

Two protocols: Virtual Grasping and Virtual Interception
Reference Frames for Visuomotor Coordination

How does the brain encode the position of objects in space?

• Intrinsic sensor coordinates
  • retinal
  • vestibular
  • joints and muscles
• Multisensory coordinate frames
  • Egocentric
    • viewer-centered
    • arm-centered
    • hand-centered
  • Allocentric
    • gravity
    • horizon
    • objects
Gravity and Visual Orientation

The orientation of visual objects is remembered with respect to gravity. Astronauts may therefore have difficulties knowing the orientation of objects around them.
Gravity Affects Limb Proprioception

Humans more easily remember the orientation of their limb with respect to the horizontal plane. Astronauts may therefore have difficulties knowing where their limbs are in space.
Gravity and Prediction

[Figure: anticipatory EMG activity from −300 ms to impact (0 ms), compared between 1g and 0g conditions]

The accelerating effects of gravity are expected even when gravity is not there. Astronauts may therefore have difficulties interacting with moving objects in space.
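The size of the expected prediction error can be illustrated with simple kinematics. Below is a minimal sketch (the travel distance and launch speed are illustrative assumptions, not values from the experiment): in weightlessness a projected ball moves at constant velocity, but an internal model that assumes 1g acceleration predicts a much earlier arrival, so anticipatory responses are triggered too soon.

```python
G = 9.81  # m/s^2, Earth gravity assumed by the internal model

def actual_arrival_time_0g(d, v0):
    """In weightlessness the ball travels at constant velocity."""
    return d / v0

def predicted_arrival_time_1g(d, v0):
    """An internal 1g model expects the ball to accelerate downward:
    d = v0*t + 0.5*G*t^2, solved for the positive root t."""
    return (-v0 + (v0**2 + 2 * G * d) ** 0.5) / G

# Illustrative values: ball launched toward the hand over 1.6 m at 2 m/s.
d, v0 = 1.6, 2.0
t_actual = actual_arrival_time_0g(d, v0)
t_model = predicted_arrival_time_1g(d, v0)
print(f"actual arrival (0g):        {t_actual * 1000:.0f} ms")
print(f"predicted arrival (1g):     {t_model * 1000:.0f} ms")
print(f"anticipation error:         {(t_actual - t_model) * 1000:.0f} ms")
```

With these numbers the 1g model anticipates impact several hundred milliseconds early, on the same order as the EMG timing shifts the protocol is designed to detect.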
Two experimental protocols:

• Virtual Interception
  • Intercept an object moving through space with the hand.
  • The moving object is projected from above or below.
  • Use different postures to examine what information determines the "up" and "down" for the timing of the response.
Two experimental protocols:

• Virtual Grasping
  • Align a tool held in the hand with a complex visual object.
  • Align the tool with the remembered orientation of the object.
  • Introduce sensory conflicts during the memory delay period to test which sensory modalities influence the remembered orientation of the object.
Experiment Requirements

• Control the visual environment
  • Immersive display
  • Low-latency updates to provide realism
• Control haptic cues (pressure, contact, force)
  • Restrained positions giving stable contact with the station
  • Unconstrained (free-floating) positions to remove haptic cues
  • Simulation of the force of gravity on the body
• Induce sensory conflicts
  • Tracking system to control the gain between head movements and apparent motion in the visual scene
• Measure responses
  • ~1–2 mm resolution for head and hand movements
  • ~1 ms resolution for detection of movement onset
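The sensory-conflict requirement above amounts to rendering the scene with a gain on tracked head rotation, so vision reports more (or less) rotation than the vestibular system senses. A minimal sketch of that gain logic follows; the function name and angle convention are assumptions for illustration, not part of the actual experiment software.

```python
def conflicted_scene_yaw(head_yaw_deg, reference_yaw_deg=0.0, gain=1.0):
    """Render the scene as if the head had rotated `gain` times its
    actual rotation away from a reference pose. gain=1.0 is veridical;
    gain != 1.0 creates a visuo-vestibular conflict."""
    return reference_yaw_deg + gain * (head_yaw_deg - reference_yaw_deg)

# A 10-degree real head turn rendered with a gain of 1.3 appears as a
# 13-degree turn of the visual scene:
print(conflicted_scene_yaw(10.0, gain=1.3))
```

The same pattern applies to translation; the key design point is that the tracker-to-render pipeline exposes the gain as a controlled experimental parameter.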
Hardware Requirements

• Head-mounted visual display
  • Immersive, 3D
• Opto-inertial tracker
  • Stable measurements (unlike inertial sensing alone)
  • Insensitive to temporary occlusions (unlike optical tracking alone)
  • Predictive sensing to overcome sensor-to-video lag
• Restraint chair
  • Provides stable contact with the station
• Constant-force springs and vest
  • Simulate the force of gravity on the body
• Restraint pole
  • Allows quasi-free-floating to remove all haptic cues
• Paddle with InertiaCube, IRED markers, and accelerometer
  • Real-time tracking (InertiaCube and markers)
  • High-resolution timing (accelerometer)
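The "predictive sensing" item above can be sketched in a few lines: extrapolate the tracked pose forward by the known sensor-to-display latency so the rendered view matches where the head will be when the frame actually appears. Constant-velocity extrapolation is the simplest choice and is assumed here for illustration; production trackers typically use a Kalman-style filter instead.

```python
def predict_pose(pos_now, pos_prev, dt_samples, latency):
    """Estimate velocity from the last two tracker samples and project
    the pose forward by the display latency (all times in seconds)."""
    velocity = (pos_now - pos_prev) / dt_samples
    return pos_now + velocity * latency

# Head moving at 0.5 m/s, tracker sampling every 10 ms, 40 ms render
# latency: the pose is extrapolated 20 mm ahead, to ~0.125 m.
p = predict_pose(pos_now=0.105, pos_prev=0.100, dt_samples=0.010, latency=0.040)
print(p)
```

This trades a small amount of overshoot during rapid direction changes for a large reduction in perceived lag, which matters for the low-latency immersion requirement listed earlier.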