Messy nice stuff & Nice messy stuff
Omri Barak
Collaborators: Larry Abbott, David Sussillo, Misha Tsodyks
Sloan-Swartz, July 12, 2011
Neural representation • Representation of task parameters by neural population. • We know that large populations of neurons are involved. • Yet we look for and are inspired by impressive single neurons. • Case study: Delayed vibrotactile discrimination (from Ranulfo Romo’s lab)
[Figure: trial timeline, stimulus vs. time (sec), with the two vibration frequencies f1 and f2 separated by a delay. Romo & Salinas, Nat Rev Neurosci, 2003]
[Figure: the same timeline with the decision stage, the monkey reports whether f1 > f2 (yes/no). Romo & Salinas, Nat Rev Neurosci, 2003]
Romo task • Encoding of analog variable • Memory of analog variable • Arithmetic operation “f1-f2”
[Figures: single-neuron recordings during the task. Romo, Brody, Hernández & Lemus, Nature 1999; Machens, Romo & Brody, Science 2005]
Striking tuning properties • These lead to "simple / low-dimensional" models. • "Typical" neurons are used to define model populations.
Existing models [Figures: model architectures from Miller et al. 2006; Barak et al. 2010; Machens et al. 2005. Not shown: Verguts; Deco; Singh & Eliasmith 2006; Miller et al. 2003]
But… Are all cells that good?
• 35% of the neurons flip the sign of their tuning between the stimulus and delay periods.
[Figure: firing rate (Hz) vs. time (sec) for 10, 22, and 34 Hz stimuli; scatter of delay tuning vs. stimulus tuning. Brody et al. 2003; Jun et al. 2010; Barak et al. 2010]
Echo state network (Jaeger 2001; Maass et al. 2002; Buonomano & Merzenich 1995)
Echo state network + noise
• N = 1000 / 2000 units
• K = 100 (connection sparseness)
• g = 1.5 (gain)
[Figure: the rate nonlinearity r as a function of x, and sample network activity]
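A minimal simulation sketch of such a network, using the parameters from the slide (N, K, g); the time constant, Euler step, and random seed are illustrative assumptions, not values from the talk:

```python
# Minimal echo-state network: sparse random recurrence with gain g.
# For g > 1 the autonomous network sits in the chaotic regime.
import numpy as np

N, K, g = 1000, 100, 1.5
dt, tau = 0.001, 0.01              # assumed integration step and time constant (s)

rng = np.random.default_rng(0)
# Each unit receives K random inputs with variance g^2 / K.
J = np.zeros((N, N))
for i in range(N):
    idx = rng.choice(N, K, replace=False)
    J[i, idx] = rng.normal(0.0, g / np.sqrt(K), K)

x = rng.normal(0.0, 0.5, N)        # membrane-like state variables
for t in range(5000):
    r = np.tanh(x)                 # firing rates r = tanh(x)
    x += dt / tau * (-x + J @ r)   # Euler step of dx/dt = -x + J r
```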
Implementing the Romo task
[Figure: the network receives f1 and f2 as inputs; a linear readout of the rates r is trained to report the decision f. Sussillo & Abbott 2009; Jaeger & Haas 2004]
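A sketch of recursive-least-squares training of the linear readout, in the spirit of the FORCE rule of Sussillo & Abbott 2009. Note the simplification: true FORCE trains online with output feedback into the network, whereas this sketch fits a recorded rate trace; the function names and the batch setup are assumptions.

```python
# RLS (FORCE-style) fit of a linear readout z = w . r to a target output.
import numpy as np

def force_train(r_trace, f_target, alpha=1.0):
    """r_trace: (T, N) rates over a trial; f_target: (T,) desired output."""
    T, N = r_trace.shape
    w = np.zeros(N)
    P = np.eye(N) / alpha              # running inverse correlation matrix
    for t in range(T):
        r = r_trace[t]
        k = P @ r
        c = 1.0 / (1.0 + r @ k)
        P -= c * np.outer(k, k)        # RLS update of P
        e = w @ r - f_target[t]        # readout error before the update
        w -= c * e * k                 # reduce the error at this time step
    return w
```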
[Figures: example input (f1, f2), the trained output across trials, and single-unit activity]
It works, but… • How does it work? • After training, the network is almost a black box. • What is the relation to experimental data?
Hypothesis • Consider the state of the network in 1000-D as the trial evolves
[Figure: schematic of the network state trajectory over the trial, from f1 through the delay to f2]
Hypothesis • Focus only on the end of the 2nd stimulus. • For each (f1, f2) pair, there is a point in 1000-D space. • So there is a 2-D manifold in the 1000-D space. • Can the dynamics (after learning) draw a separating line through this manifold? (See the projection sketch below.)
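One way to inspect this manifold is to project the end-of-f2 states onto their leading principal components. A sketch, assuming the states have been collected into a matrix with one row per (f1, f2) pair:

```python
# Project end-of-stimulus network states onto their top principal
# components to visualize the low-dimensional (f1, f2) manifold.
import numpy as np

def leading_pcs(states, n_components=2):
    """states: (n_pairs, N) network states, one row per (f1, f2) pair."""
    centered = states - states.mean(axis=0)
    # Rows of Vt are the principal axes of the centered data.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T    # (n_pairs, n_components)
```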
Dynamics or just a fancy readout? • The two responses differ in the network activity itself, not only through the particular readout we chose. [Figure: distance in state space between the two response trajectories]
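A sketch of that comparison, assuming rate trajectories have been recorded and sorted by response; a distance that grows and stays large means the decision lives in the dynamics, not only in the readout:

```python
# Euclidean distance between the mean trajectories of the two responses.
import numpy as np

def response_distance(trajs_yes, trajs_no):
    """Each argument: (n_trials, T, N) rate trajectories for one response."""
    mean_yes = trajs_yes.mean(axis=0)    # (T, N)
    mean_no = trajs_no.mean(axis=0)
    return np.linalg.norm(mean_yes - mean_no, axis=1)    # distance at each time
```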
Searching for a saddle in 1000-D
Vector function: dx/dt = F(x) = −x + J tanh(x)
Scalar function: q(x) = ½ |F(x)|²
Fixed points (including saddles) are minima of q with q = 0; minimize q from many points along the trajectory.
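A sketch of this search, with the Jacobian used to count unstable directions at each candidate point; the optimizer choice and scipy's default numerical gradient are assumptions:

```python
# Find fixed points / saddles by minimizing q(x) = |F(x)|^2 / 2;
# candidates with q ~ 0 are fixed points of the autonomous dynamics.
import numpy as np
from scipy.optimize import minimize

def find_fixed_point(J, x0):
    F = lambda x: -x + J @ np.tanh(x)        # dynamics with inputs off
    q = lambda x: 0.5 * np.sum(F(x) ** 2)    # scalar function to minimize
    res = minimize(q, x0, method="L-BFGS-B")
    return res.x, res.fun                    # candidate point and residual q

def n_unstable(J, x_fp):
    # Jacobian of F at x_fp is -I + J diag(1 - tanh(x)^2);
    # eigenvalues with positive real part are unstable directions.
    D = np.diag(1.0 - np.tanh(x_fp) ** 2)
    jac = -np.eye(len(x_fp)) + J @ D
    return int(np.sum(np.linalg.eigvals(jac).real > 0))
```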
[Figure: number of unstable eigenvalues and norm of the fixed point as a function of distance along the trajectory]
Slightly more realistic • Positive firing rates. • Avoid a fixed point between trials. • Introduce a reset signal. • Chaotic activity in the delay period. (A sketch of this variant follows.)
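A sketch of the modified update; the rectified-tanh nonlinearity and the shape of the reset input are illustrative assumptions:

```python
# One integration step of the "slightly more realistic" network:
# rectified rates keep firing positive, and a reset pulse between
# trials pulls the state away from any residual fixed point.
import numpy as np

def step(x, J, reset_vec, reset_on, dt=0.001, tau=0.01):
    r = np.maximum(np.tanh(x), 0.0)          # positive firing rates
    inp = reset_vec if reset_on else 0.0     # reset signal between trials
    return x + dt / tau * (-x + J @ r + inp)
```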
Nice persistent neurons [Figure: unit activity vs. time, showing persistent delay-period activity]
[Figure: f1 tuning and f2 tuning in the a1–a2 plane. Romo & Salinas 2003]
Problems / predictions • Reset signal • Generalization
Reset
• There is a reset (Barak et al. 2010; Churchland et al.).
• There is no reset, and performance shows it (Buonomano et al. 2007).
[Figure: correlation between trials with different frequencies vs. time (sec)]
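A sketch of the correlation analysis behind this figure, assuming state trajectories from two trials with different frequencies are available; correlations that return toward 1 after the trial indicate a reset, persistently low correlations do not:

```python
# Correlate population states across two trials, time point by time point.
import numpy as np

def across_trial_correlation(traj_a, traj_b):
    """traj_a, traj_b: (T, N) population states from two trials."""
    a = traj_a - traj_a.mean(axis=1, keepdims=True)
    b = traj_b - traj_b.mean(axis=1, keepdims=True)
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den                         # correlation at each time point
```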
Generalization
• Interpolation vs. extrapolation (see the test sketch below)
[Figure: (f1, f2) plane showing trained pairs and novel test pairs inside and outside the training range]
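A sketch of such a generalization test; the frequency values and the `run_trial` wrapper around the trained network are hypothetical:

```python
# Score the trained network on held-out (f1, f2) pairs inside the
# training range (interpolation) and outside it (extrapolation).
import numpy as np

train_f = np.arange(14, 31, 4)       # illustrative training frequencies (Hz)
interp_f = np.arange(16, 29, 4)      # novel pairs inside the training range
extrap_f = np.array([8, 10, 34, 38]) # pairs outside the training range

def accuracy(freqs, run_trial):
    """run_trial(f1, f2) -> network's boolean answer to 'f1 > f2?'."""
    correct = [run_trial(f1, f2) == (f1 > f2)
               for f1 in freqs for f2 in freqs if f1 != f2]
    return np.mean(correct)
```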
Extrapolation [Figure: human function-learning data, interpolation succeeds while extrapolation degrades. DeLosh et al. 1997]
Conclusions • Response properties of individual neurons can be misleading. • An echo state network can solve decision-making tasks. • Dynamical systems analysis can reveal the function of echo state networks. • We need to find a middle ground between single-neuron interpretability and black-box network models.