Bayesian Inference of Neural Activity and Connectivity from All-Optical Interrogation of a Neural Circuit Ionatan J. Kuperwajs Howard Hughes Medical Institute Janelia Research Campus, Turaga Lab
Talk Outline • Problem + Dataset, VI • Model Framework • Recognition + Generative Models • Additional Considerations • Results + Conclusions • Spike Inference • Connectivity • Next Steps • Model Improvements • Applicability
Problem + Project Goals
• All-optical techniques: simultaneous + in vivo
• 2-photon optogenetics → drive spikes in targeted neurons
• Calcium imaging → recording of fluorescence at cellular resolution
• Goal: infer the circuit map + individual neurons' spiking activity
• General methodology
• Global Bayesian inference strategy
• Jointly infer a distribution over spikes and unknown connections
• Why is this feasible now?
• GPU computing
• Automatic differentiation libraries (e.g., TensorFlow, Theano)
• Variational autoencoders: probabilistic Bayesian inference + deep learning
Packer Optogenetic Data
• Mouse data: awake, head-fixed, on a treadmill
• Excitatory neurons (layer 2/3 of V1)
• Spatial light modulator for 2-photon excitation of the C1V1 opsin in a subset of neurons
• Simultaneous imaging of neural activity by 2-photon calcium imaging of GCaMP6s
• Twice per second, stimulated five random cells and observed activity in the rest of the network
• Facilitates discovery of a large portion of the connections within the field of view
• 7,200 trials over one hour
• Recording of spontaneous data for 40 minutes
Variational Inference
• Bayes' Rule: P(z|x) = P(x|z) P(z) / P(x), where x is the data, z the latent variables, P(z|x) the posterior, P(z) the prior, P(x|z) the likelihood (generative model), and P(x, z) = P(x|z) P(z) the joint
• Model Evidence: log P(x) = log ∫ P(x|z) P(z) dz, the log likelihood with the latent variables marginalized out
• Computationally intractable; this has hindered the application of Bayesian techniques to large-scale data — VI solves this!
• Recognition Model: combine VI with a deep neural network to create a posterior Q(z|x) intended to approximate P(z|x)
• Maximize the evidence lower bound log P(x) ≥ E_Q[log P(x, z) − log Q(z|x)]; optimizing this bound improves the recognition model
• If the bound is tight, Q(z|x) = P(z|x)
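The bound-tightness claim above can be checked numerically. Below is a minimal NumPy sketch using a toy two-state latent model (not the talk's spiking model, which is an assumption made purely for illustration): the ELBO under any approximate posterior Q lower-bounds the log evidence, and becomes tight exactly when Q equals the true posterior.

```python
import numpy as np

def log_gauss(x, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

# Toy model: binary latent z, prior p(z=1) = 0.5, likelihood x|z ~ N(mu_z, 1).
prior = np.array([0.5, 0.5])
mus = np.array([0.0, 2.0])
x = 1.3

log_joint = np.log(prior) + log_gauss(x, mus, 1.0)       # log p(x, z) for z = 0, 1
log_evidence = np.logaddexp(log_joint[0], log_joint[1])  # log p(x), marginalized
posterior = np.exp(log_joint - log_evidence)             # exact p(z|x)

def elbo(q1):
    """E_Q[log p(x, z) - log Q(z)] for Q(z=1) = q1."""
    q = np.array([1 - q1, q1])
    return float(np.sum(q * (log_joint - np.log(q))))

# Any Q gives a lower bound; the exact posterior makes the bound tight.
assert elbo(0.3) <= log_evidence + 1e-9
assert abs(elbo(posterior[1]) - log_evidence) < 1e-9
```

In the real model the integral over z is intractable, so the ELBO is estimated by sampling from Q rather than summed exactly; the inequality is the same.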
Recognition Model Key Equations
• Factorize the approximate posterior over the spikes: Q(s|F) = ∏_{n,t} Bernoulli(s_n(t); v_n(t))
• v(t) depends on the fluorescence trace and the optogenetic input
• A neural network takes a window of traces from t − T to t + T and nonlinearly maps it: window → 20 features → 2 standard NN layers → single value
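The recognition network described above can be sketched as follows. The layer widths follow the slide (20 features, 2 standard layers, a single value passed through a sigmoid); the ReLU activations, window half-width, and random weights are illustrative assumptions — in the actual model the weights are learned by maximizing the ELBO.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5                      # half-window: trace samples from t - T to t + T (assumed)
window = 2 * T + 1

# Hypothetical weights; in the model these are trained jointly with the generative model.
W1 = rng.normal(0, 0.1, (20, window))   # trace window -> 20 features
W2 = rng.normal(0, 0.1, (20, 20))       # standard NN layer
w3 = rng.normal(0, 0.1, 20)             # 20 features -> single value

def relu(a):
    return np.maximum(a, 0.0)

def spike_prob(trace_window):
    """Map a window of fluorescence to a Bernoulli spike probability v(t)."""
    h = relu(W1 @ trace_window)
    h = relu(W2 @ h)
    return 1.0 / (1.0 + np.exp(-(w3 @ h)))  # sigmoid -> probability in (0, 1)

v = spike_prob(rng.normal(size=window))
assert 0.0 < v < 1.0
```

The same network (with shared weights) is applied at every time step and neuron, which is what makes the amortized posterior cheap to evaluate.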
Framework Overview of the Generative Model
• Part 1: Generate spikes at time step t
• At time step t − 1, the previous spikes, inputs from the rest of the brain, and external photostimulation combine to produce the current spikes at time step t
• Part 2: Generate calcium fluorescence signals from the spikes
• Spikes drive calcium dynamics, which produce the observed calcium fluorescence signal
Generative Model Key Equations
• Fluorescence signals at time t: F(t) ~ N(r(t), Σ), a reconstruction with Gaussian noise and a learned, diagonal covariance matrix Σ
• Reconstruction: r(t) = S (k ∗ s)(t) + b + c u(t), where s are the spikes, k is a convolution kernel (unique to each cell), S is a diagonal scaling matrix, b is an additive offset, and u(t) is the optogenetic stimulation
• Spiking depends on the input: s(t) ~ Bernoulli(σ(W (h ∗ s)(t) + A z(t) + B u(t) + d)), with GLM weights W, a normalized, exponentially decaying kernel h, external photostimulation u(t), and a learned offset d
• A dynamical system generates the low-dimensional input representing activity in the rest of the brain: latents z(t) evolve linearly under learned weight matrices
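Part 2 of the generative model (spikes → fluorescence) can be sketched like this. The kernel time constant, scale, offset, and noise level are illustrative assumptions, and the direct photostimulation artifact term is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, tau = 200, 10.0   # assumed trace length and decay time constant

# Spikes for one cell (random here; in the model they come from the GLM).
spikes = rng.binomial(1, 0.05, n_steps).astype(float)

# Cell-specific convolution kernel: instant rise, exponential decay.
kernel = np.exp(-np.arange(50) / tau)

scale, offset, noise_sd = 1.5, 0.2, 0.05   # assumed values for illustration
calcium = np.convolve(spikes, kernel)[:n_steps]
fluorescence = scale * calcium + offset + rng.normal(0, noise_sd, n_steps)

# Each spike adds a jump that then decays; noise is additive Gaussian.
assert fluorescence.shape == (n_steps,)
```

Learning the kernel per cell lets the model absorb indicator differences (e.g., the slow GCaMP6s decay) without changing the spiking model.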
Structured Priors on the GLM Weights
• Little data to support inferences on GLM weights: must specify a prior and an approximate posterior
• Dense Model
• Simpler model
• Optimized
• Little benefit over a model with only low-rank activity
• Sparse Model (spike-and-slab)
• Set p = 0.1 based on prior information
• Optimized
• Dramatic improvements over the dense GLM
• Biological connectivity is also sparse
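Sampling weights from the sparse (spike-and-slab) prior can be sketched as follows, with p = 0.1 as on the slide; the Gaussian slab width is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100            # assumed number of neurons

p = 0.1            # prior connection probability, set from prior information
slab_sd = 1.0      # assumed slab width

# Spike-and-slab: each weight is exactly zero with probability 1 - p,
# otherwise drawn from a Gaussian "slab".
mask = rng.random((n, n)) < p
W = np.where(mask, rng.normal(0, slab_sd, (n, n)), 0.0)

sparsity = (W != 0).mean()
assert abs(sparsity - p) < 0.05
```

The point mass at zero is what lets the posterior assign an explicit probability that a connection exists, separately from its strength — the "connection probability" and "connection strength" summaries shown later.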
Modeling Off-Target Photostimulation
• Photostimulation may directly excite off-target neurons
• Use a sum of five Gaussians with different scales to flexibly model distance-dependent stimulation
• This allows stimulation to take on an elliptic pattern with a shifted center
• After inference, this gives a spatial distribution similar to the perturbation-triggered activity
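The distance-dependent stimulation model can be sketched as a weighted sum of five Gaussians. The particular scales and weights below are assumptions (in the model they are fitted), and the full 2-D version — elliptic pattern with a shifted center — would replace the scalar distance with a quadratic form in the displacement vector.

```python
import numpy as np

def stim_strength(dist, scales, weights):
    """Distance-dependent photostimulation as a weighted sum of Gaussians."""
    dist = np.asarray(dist, dtype=float)
    return sum(w * np.exp(-dist**2 / (2 * s**2)) for w, s in zip(weights, scales))

# Five scales and weights, assumed here for illustration.
scales = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # e.g., microns
weights = np.array([1.0, 0.5, 0.25, 0.1, 0.05])

near = stim_strength(0.0, scales, weights)
far = stim_strength(200.0, scales, weights)
assert near > far >= 0.0
```

Mixing several scales gives a heavy-tailed falloff that a single Gaussian cannot express, which is why distant off-target excitation can still be captured.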
Spike Inference for Spontaneous and Perturbed Data
• Spontaneous data + perturbed data
• Noisy fluorescence traces
• Denoised reconstructions
• Inferred spike probabilities
• Stim-triggered averages
• Directly stimulated cells show a large increase and slow decay driven by spiking activity
• Modeled by the inferred spikes
Connectivity Predictions and Correlations
• Spike-and-slab probability distributions
• Connection strength
• Connection probability
• Correlation coefficients
• Raw calcium traces
• Reconstructed traces
Synaptic Connectivity Matrices
• Strength of connection
• Probability of connection
• Combined matrix
• Inferred connectivity of an individual neuron
• Find neurons with highly correlated input profiles (e.g., neurons 6 + 35)
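Finding neurons with highly correlated input profiles reduces to correlating rows of the inferred weight matrix. Below is a sketch with synthetic weights, constructed so that neurons 6 and 35 (the pair highlighted on the slide) have nearly identical input profiles; the matrix itself is fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40   # assumed network size

# Hypothetical inferred weight matrix; row i = input profile of neuron i.
W = rng.normal(0, 1, (n, n))
W[6] = W[35] + rng.normal(0, 0.1, n)   # make neurons 6 and 35 similar

# Correlate every pair of input profiles; exclude self-correlations.
C = np.corrcoef(W)
np.fill_diagonal(C, -np.inf)
assert C[35].argmax() == 6             # neuron 35's best match is neuron 6
```

On the real inferred matrix the same computation surfaces candidate pairs whose spiking should co-vary — the prediction tested on the next slide.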
Neurons with Similar Input Profiles Have Similar Spiking Patterns
• Neuron 63: raw traces, generated reconstruction
• Neurons 6 + 35: responses to optogenetic perturbations
Confirming Synapse Predictions + Need for Sparser Inferences
• Reconstructions
• Neuron 63 + neuron 35: raw traces, generated reconstructions
Preliminary Conclusions
• The data show evidence of sparse, as opposed to dense, connectivity
• Sparse GLM vs. dense GLM
• Inferred sparse weights are consistent with known properties of neural circuits
• Short-range connections are predominantly excitatory
• Weights are positively correlated with spontaneous correlations
• Perturbed data support stronger inferences than spontaneous data
• Spike-and-slab posterior over weights
• Perturbations increase the number of discovered connections
• Joint inference is better than a pipeline
• Allows low-rank activity, GLM connectivity, and external stimulation to influence spike inference
This is the first fully Bayesian model of calcium imaging designed for perturbation data that can extract posteriors over such a wide range of parameters with this efficiency.
Improving the Model
• Upsampling: increase the resolution of spike probabilities
• Calcium traces: linear interpolation, upsample factor of 2
• Perturbations: interpolate with zeros, upsample factor of 2
• Gumbel method: replace analytical sampling
• Sample from a Logistic distribution instead of a Normal/Gaussian, then propagate through a sigmoid
• Can implement this with a Relaxed Bernoulli
• Extends to Poisson distributions and nonlinear models relatively easily
• Doesn't extend to multi-sample objectives and VIMCO
• Expand convolutional NN nonlinearity: more complex MLP
• Decrease the number of features per layer
• Increase the number of hidden layers
• Nonlinear generative model
• Biophysical properties of calcium in neurons
• Linear: instant rise, exponential decay
• Nonlinear: incorporate intracellular dynamics
• Possible with the Gumbel method
• Ideally gives a better fit to the data (this dataset might be too noisy)
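The Gumbel/Concrete trick described above — sample from a Logistic distribution instead of a Gaussian and propagate through a sigmoid — can be sketched as a Relaxed Bernoulli sampler. The temperature values are illustrative; in practice the temperature is annealed or tuned.

```python
import numpy as np

rng = np.random.default_rng(4)

def relaxed_bernoulli(logit, temperature, size):
    """Concrete/Gumbel reparameterization of a Bernoulli: draw a Logistic(0, 1)
    sample, shift by the logit, and push through a temperature-scaled sigmoid.
    The result is a differentiable relaxation of a binary spike."""
    u = rng.random(size)
    logistic = np.log(u) - np.log1p(-u)          # Logistic(0, 1) sample
    return 1.0 / (1.0 + np.exp(-(logit + logistic) / temperature))

samples = relaxed_bernoulli(logit=1.0, temperature=0.5, size=100_000)
assert np.all((samples > 0) & (samples < 1))     # relaxed values stay in (0, 1)

# Low temperature concentrates mass near {0, 1}, approaching a hard Bernoulli.
sharp = relaxed_bernoulli(logit=0.0, temperature=0.05, size=100_000)
assert ((sharp < 0.1) | (sharp > 0.9)).mean() > 0.9
```

Because the sample is a deterministic, differentiable function of the noise and the logit, gradients flow through the sampling step — which is what makes nonlinear generative models tractable here.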
Adaptation to Other Datasets • Refactor model to plug-and-play different spike-inference and fitting methods • Optogenetic datasets: • Synaptic blockers experiment (Packer/Hausser lab) • 10 cell stimulation experiment (mouse S1 cortex) • Dickson lab (fly fruitless circuit) • Svoboda lab (mouse ALM cortex) • Purely observational datasets: • Mrsic-Flogel lab (mouse V1 + ground truth synapses)
References
• Aitchison, L., Russell, L., Packer, A., Yan, J., Castonguay, P., Häusser, M., & Turaga, S. Model-based Bayesian inference of neural activity and connectivity from all-optical interrogation of a neural circuit. NIPS (2017, under review).
• Jang, E., Gu, S., & Poole, B. Categorical Reparameterization with Gumbel-Softmax. https://arxiv.org/abs/1611.01144 (2017).
• Allaire, J.J., Eddelbuettel, D., Golding, N., & Tang, Y. tensorflow: R Interface to TensorFlow. https://github.com/rstudio/tensorflow (2016).
• Maddison, C.J., Mnih, A., & Teh, Y.W. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. https://arxiv.org/abs/1611.00712 (2017).
• Speiser, A., Yan, J., Archer, E., Buesing, L., Turaga, S., & Macke, J.H. Fast amortized inference of neural activity from calcium imaging data with variational autoencoders. NIPS (2017, under review).
• Packer, A.M., Russell, L.E., Dalgleish, H.W., & Häusser, M. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nature Methods 12(2), 140–146 (2015).
• Packer, A.M., Roska, B., & Häusser, M. Targeting neurons and photons for optogenetics. Nature Neuroscience 16, 805–815 (2013).
Acknowledgements Erik Snapp Srini Turaga HHMI, Janelia Jinyao Yan Laurence Aitchison JUS Program Adam Packer Thomas Mrsic-Flogel