The Infant Learning Grasping and Affordances (ILGA) Model: Brief Overview and Progress
ILGA Planned Capabilities • After training, the model should be able to perform grasps of objects with the appropriate • Maximal finger aperture • Wrist rotation • The model should be able to generate precision pinches of smaller objects • Cells in AIP should learn to respond to combinations of object features that afford grasping and project strongly to F5 neurons encoding motor parameters appropriate for grasping them • Reinforcement learning of connection weights between PFC and other regions should allow learning context-specific actions
Summary of Relevant Literature • Dorsal stream subdivided into dorsal-medial stream for reaching and dorsal-ventral stream for grasping (Rizzolatti & Matelli, 2003) • dorsal-medial stream - superior parietal and intraparietal regions and the dorsal premotor cortex • dorsal-ventral stream - inferior parietal and intraparietal regions and the ventral premotor cortex
Summary of Relevant Literature – Reaching Stream • Spatial dimensions of potential reach targets such as direction and distance are processed independently in parallel (Battaglia-Mayer et al., 2003). • direction and distance reach errors dissociate (Soechting & Flanders, 1989; Gordon et al., 1994) • distance information decays faster than direction information in working memory (McIntyre et al., 1998). • Lateral Intraparietal Sulcus (LIP) • disparity signals modulate activity in area LIP (Gnadt & Beyer, 1998; Ferraina et al., 2002) • encodes three-dimensional distance in an egocentric reference frame (Gnadt & Mays, 1995; Ferraina & Genovesio, 2001) • but LIP is also typically associated with saccades! • V6a / Medial Intraparietal Sulcus (MIP) • mostly visual (Galletti et al., 1997), modulated by somatosensory stimulation (Breveglieri et al., 2002; Fattori et al., 2005) • many cells are modulated during contralateral arm movement (Galletti et al., 1997), and many respond only when the arm is directed toward a particular region of space (Galletti et al., 1997; Fattori et al., 2005) • lesions of the region result in misreaching with the contralateral arm (Battaglini et al., 2002, 2003) • F2 • two broad cell categories: movement-related cells, which discharge at movement onset, and signal- and set-related cells, which show anticipatory activity prior to movement (Kurata, 1994; Wise et al., 1997; Cisek & Kalaska, 2002) • rostro-caudal gradient of cell types: signal-related cells in the rostral portion of F2, set-related cells in an intermediate zone, and movement-related cells adjacent to F1 (Tanne et al., 1995; Johnson et al., 1996) • many cells in F2 are directionally tuned and their population activity encodes a vector representing the direction of arm movement to the end target (Weinrich & Wise, 1982; Caminiti et al., 1991); a population-vector sketch follows below
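The population-vector coding cited in the last bullet (Weinrich & Wise, 1982; Caminiti et al., 1991) can be illustrated with a short, hypothetical Python sketch: cosine-tuned units vote for their preferred directions, weighted by their firing rates. The cell count, tuning curve, and noise level are illustrative assumptions, not values taken from the ILGA code.

```python
import numpy as np

def population_vector(preferred_dirs, firing_rates):
    """Decode a 2D direction as the rate-weighted sum of each
    cell's preferred-direction unit vector."""
    vectors = np.stack([np.cos(preferred_dirs), np.sin(preferred_dirs)], axis=1)
    pooled = (firing_rates[:, None] * vectors).sum(axis=0)
    return np.arctan2(pooled[1], pooled[0])

# Hypothetical example: 60 cosine-tuned cells, true direction 45 degrees
rng = np.random.default_rng(0)
prefs = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
true_dir = np.pi / 4.0
rates = np.maximum(0.0, np.cos(prefs - true_dir)) + 0.05 * rng.random(60)
print(np.degrees(population_vector(prefs, rates)))  # close to 45
```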
Summary of Relevant Literature – Grasping Stream • Caudal Intraparietal Sulcus (cIPS) • receives input from V3a, whose neurons are sensitive to binocular disparity and have small, retinotopic receptive fields (Sakata et al., 2005), and projects primarily to the anterior intraparietal sulcus (AIP) (Nakamura et al., 2001) • two classes of neurons: surface orientation selective (SOS) and axis orientation selective (AOS) neurons • AOS cells are tuned to the 3D orientation of the longitudinal axis of long, thin stimuli (Sakata and Taira, 1994; Sakata et al., 1997; Sakata et al., 1999) • SOS cells are tuned to the surface orientation in depth of flat, broad objects (Sakata et al., 1997; Sakata et al., 1998; Sakata et al., 1999; Shikata et al., 1996) • Anterior Intraparietal Sulcus (AIP) • reciprocally connected with area F5 (Matelli et al., 1985) • motor dominant neurons (~40%) discharge equally well whether the grasping movement is made in the light or in the dark (Taira et al., 1990) • ~50% of neurons fired almost exclusively during one type of grip, with precision grip being the most represented grip type (Taira et al., 1990; Sakata and Kusunoki, 1992; Murata et al., 1993) • visual dominant neurons demonstrate object-specific activity (in practice they respond to several different objects, with varied activity levels) (Taira et al., 1990; Sakata and Kusunoki, 1992; Murata et al., 1993) • most neurons demonstrate phasic activity related to the phases of the motor behavior: set (key phase), preshape, enclose, hold (object phase), and ungrasp (Taira et al., 1990; Sakata and Kusunoki, 1992; Murata et al., 1993) • F5 • some cells respond to the observation of graspable objects (Rizzolatti et al., 1988; Murata et al., 1997) • cells are selective for different phases of a grasp but can be active over multiple contiguous phases (Rizzolatti et al., 1988) • different classes of neurons discharge during different hand movements (grasping, holding, tearing, manipulating) and can be selective for precision grip, finger prehension, or whole-hand prehension (Rizzolatti et al., 1988)
Current Code • All code is available via SVN at: • svn://neuroinformatics.usc.edu/ilga/branch • The code is implemented in NSL using SCS: • http://neuroinformatics.usc.edu/mediawiki/index.php/NSL • For most major modules there is an associated test model that exercises its functionality in as much isolation as possible
Current Code – 3d Simulation Interface Modules • Modules • SimWorld – handles interaction between the 3d world in Nsl3dCanvas and the model perspective • ModelPerspectiveView – represents the model's view of the world, handles interaction between SimWorld and model inputs • ArmHand – handles interaction between the model output and the simulated arm/hand object in the Nsl3dCanvas • GraspController – encloses different combinations of fingers • Models • TestArm – tests arm/hand PD controllers (a PD update sketch follows below) • TestReach – tests arm inverse kinematics and dynamic motor primitives • TestGrasp – tests grasp controller
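TestArm exercises the arm/hand PD controllers listed above. The actual controllers live in the NSL/SCS code; the Python fragment below is only a minimal sketch of a single-joint PD update with unit inertia, and the gains, time step, and dynamics are assumptions for illustration.

```python
def pd_step(q, q_dot, q_target, kp=20.0, kd=4.0, dt=0.001):
    """One proportional-derivative step for a single joint with unit
    inertia: torque tracks the position error and damps velocity."""
    torque = kp * (q_target - q) - kd * q_dot
    q_dot = q_dot + torque * dt      # unit-inertia dynamics (illustrative only)
    q = q + q_dot * dt
    return q, q_dot

# Hypothetical use: drive a joint from 0 toward 1.0 rad over 5 s
q, q_dot = 0.0, 0.0
for _ in range(5000):
    q, q_dot = pd_step(q, q_dot, q_target=1.0)
```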
Current Code – Neural Modules • Modules • SpikingNeuronBase – basic Izhikevich neural model (an update-equation sketch follows below) • SpikingNeuron – Izhikevich model integrated with a synaptic model • v1_1_1 – simple aggregated conductance synaptic model • v1_1_2 – more realistic synaptic model with individual synaptic conductances • FiringRateEstimator – calculates firing rate from spikes using a discrete or alpha-function kernel, or a leaky integrator • PoissonSpikeGenerator – generates spikes at a specified frequency according to a Poisson distribution • Test Models • TestSpikingNeuron • v1_1_1 – tests SpikingNeuron.v1_1_1 with random input • v1_1_2 – compares SpikingNeuron v1_1_1 and v1_1_2 with identical input • TestFiringRateEstimator – tests different firing rate estimation methods
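SpikingNeuronBase is described as a basic Izhikevich neural model. For reference, here is a minimal Python sketch of the standard Izhikevich (2003) update equations; the regular-spiking parameters and the Euler step are textbook defaults and are not taken from the ILGA implementation.

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step (dt in ms) of the Izhikevich (2003) model:
      dv/dt = 0.04*v^2 + 5*v + 140 - u + I
      du/dt = a*(b*v - u)
    with reset v <- c, u <- u + d when v reaches 30 mV."""
    fired = v >= 30.0
    if fired:
        v, u = c, u + d
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    return v, u, fired

# Hypothetical use: constant suprathreshold input for 1000 ms
v, u = -65.0, -65.0 * 0.2
spike_times = []
for t in range(1000):
    v, u, fired = izhikevich_step(v, u, I=10.0)
    if fired:
        spike_times.append(t)
```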
Current Code – Population Modules • Modules • SpikingDNF1D – One-dimensional winner-take-all neural field • SpikingDNF2D – Two-dimensional winner-take-all neural field • Population – generic population with a layer of pyramidal cells and one layer of inhibitory interneurons; connectivity can be set arbitrarily • FiringRateEstimator1D – vector of firing rate estimators • FiringRateEstimator2D – matrix of firing rate estimators • Pop1dDecoder – decodes one-dimensional population codes • Pop2dDecoder – decodes two-dimensional population codes • STDP1D – spike-timing-dependent plasticity modulated by dopamine, for a vector of neurons (a sketch of the learning rule follows below) • STDP1Dto2D – spike-timing-dependent plasticity modulated by dopamine, for a matrix of neurons receiving input from a vector of neurons • Models • TestSTDP • v1_1_1 – tests STDP1D • v1_1_2 – tests STDP1Dto2D
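STDP1D and STDP1Dto2D implement spike-timing-dependent plasticity modulated by dopamine. A common way to formulate this (eligibility-trace STDP in the style of Izhikevich, 2007) is sketched below in Python for a single synapse; the time constants, learning magnitudes, and trace mechanics are illustrative assumptions rather than the modules' actual rule.

```python
import math

def stdp_da_update(weight, eligibility, dt_spike, dopamine,
                   a_plus=0.01, a_minus=0.012, tau_stdp=20.0,
                   tau_elig=1000.0, dt=1.0):
    """Dopamine-gated STDP for one synapse (times in ms).
    dt_spike = t_post - t_pre for the latest pre/post pairing,
    or None if no new pairing occurred on this step."""
    if dt_spike is not None:
        if dt_spike >= 0.0:   # pre before post -> potentiation
            eligibility += a_plus * math.exp(-dt_spike / tau_stdp)
        else:                 # post before pre -> depression
            eligibility -= a_minus * math.exp(dt_spike / tau_stdp)
    # The eligibility trace decays slowly and is turned into a weight
    # change only when dopamine (the reward signal) is present.
    eligibility *= math.exp(-dt / tau_elig)
    weight += dopamine * eligibility * dt
    weight = min(max(weight, 0.0), 1.0)   # keep the weight bounded
    return weight, eligibility
```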
Current Code – Brain Region Modules • Modules • CIPS – caudal intraparietal sulcus • AOS – axis orientation selective cells – encode curvature, length, and orientation of cylinders • SOS – surface orientation selective cells – encode size and orientation of flat surfaces • LIP – lateral intraparietal sulcus – encodes distance to object center in a one-dimensional population code • VisualCortex • V6a – encodes object center direction in polar coordinates in a two-dimensional population code • V4 – encodes visual stimuli in a coarse population code with red-, green-, and blue-selective subpopulations • F6 – detects GO signals • Striatum – receives excitatory input, produces inhibitory output • GP – receives inhibitory input, produces tonic inhibition • PFC – encodes task/context • F2 – selects reach target • F5 – selects wrist orientation • MotorCortex – extracts motor parameters from premotor population activity for the next-state reach planner • NextStatePlanner – dynamic motor primitives for reaching (a minimal DMP sketch follows below) • Models • TestCIPS – tests the CIPS module, reproducing some of the Sakata group's data • TestV6A – tests the V6a module, reproducing some of the Galletti group's data • TestF2 – tests reach target learning in the F2 module (should reproduce Cisek & Kalaska's data) • TestF5 – tests wrist orientation learning in the F5 module (should reproduce the Parma group's data)
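NextStatePlanner is listed as providing dynamic motor primitives for reaching. For orientation, the sketch below shows the point-attractor core of a discrete dynamic movement primitive (after Ijspeert and colleagues) with the learned forcing term omitted; the gains and integration scheme are generic assumptions, not the ILGA planner itself.

```python
import numpy as np

def dmp_reach(x0, goal, duration=1.0, dt=0.01, alpha=25.0):
    """Point-attractor core of a discrete dynamic movement primitive
    (forcing term omitted): the state is critically damped toward the
    goal, producing a smooth reach trajectory."""
    beta = alpha / 4.0                     # critical damping
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    goal = np.array(goal, dtype=float)
    traj = [x.copy()]
    for _ in range(int(round(duration / dt))):
        v += dt * alpha * (beta * (goal - x) - v) / duration
        x += dt * v / duration
        traj.append(x.copy())
    return np.array(traj)

# Hypothetical example: plan a reach from the origin to (0.3, 0.2, 0.1) m
plan = dmp_reach([0.0, 0.0, 0.0], [0.3, 0.2, 0.1])
```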