
2 Spike Coding

Explore the relationship between spikes and information encoding/decoding using Bayesian brain models and adaptive spike coding methods. Understand how spike sequences convey information and the limitations of linear and cascade models. Discover how covariance methods can be applied to find multiple features and assess decoding accuracy.




Presentation Transcript


  1. [Bayesian Brain] 2 Spike Coding. Adrienne Fairhall. Summary by Kim, Hoon Hee (SNU-BI LAB)

  2. Spike Coding (Outline) • Spike information • Single spikes • Spike sequences • Spike encoding • Cascade models • Covariance method • Spike decoding • Adaptive spike coding

  3. Spikes: What kind of Code?

  4. Spikes: Timing and Information • Entropy • Mutual information • S: stimulus, R: response • Mutual information = total entropy minus noise entropy: I(S;R) = H(R) - H(R|S)
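The identity on this slide can be checked numerically. A minimal plug-in estimator, assuming a small discrete joint table P(s, r) (the table values below are made up for illustration, not from the lecture):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; zero-probability entries contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(p_joint):
    """I(S;R) = H(R) - H(R|S) from a joint table p_joint[s, r]."""
    p_joint = np.asarray(p_joint, dtype=float)
    p_s = p_joint.sum(axis=1)                      # marginal P(s)
    total = entropy(p_joint.sum(axis=0))           # total entropy H(R)
    noise = sum(p_s[i] * entropy(p_joint[i] / p_s[i])
                for i in range(len(p_s)) if p_s[i] > 0)  # noise entropy H(R|S)
    return total - noise

# Hypothetical joint distribution over 2 stimuli x 2 responses:
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
print(mutual_information(p))   # total entropy 1 bit minus noise entropy ~0.722 bits
```

Here the response tracks the stimulus 80% of the time, so the spike carries I = 1 - H(0.8) ≈ 0.28 bits about which stimulus occurred.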

  5. Spikes: Information in Single Spikes • Spike (r=1) • No spike (r=0) • Noise Entropy • Information • Information per spike
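Information per spike is commonly estimated from a time-varying firing rate r(t) as the time average of (r/r̄) log2(r/r̄), the single-spike estimator of Brenner et al.; a minimal sketch, assuming the rate is given in arbitrary discrete time bins:

```python
import numpy as np

def info_per_spike(rate):
    """Single-spike information in bits per spike from a binned rate r(t):
    I = < (r / rbar) * log2(r / rbar) >_t, averaged over time bins."""
    rate = np.asarray(rate, dtype=float)
    ratio = rate / rate.mean()
    safe = np.where(ratio > 0, ratio, 1.0)     # x*log2(x) -> 0 as x -> 0
    return float(np.mean(np.where(ratio > 0, ratio * np.log2(safe), 0.0)))

# A rate that is perfectly locked to half the bins carries 1 bit/spike;
# a constant rate carries no single-spike information.
print(info_per_spike([2, 0]))      # 1.0
print(info_per_spike([5, 5, 5]))   # 0.0
```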

  6. Spikes: Information in Spike Sequences (1) • A spike train and its representation in terms of binary “letters.” • N bins yield N-letter binary words w, with distributions P(w) and P(w|s(t))
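Tallying the empirical word distribution P(w) from a binarized spike train can be sketched as follows (the short train and the word length N = 2 are made up for illustration):

```python
import numpy as np
from collections import Counter

def word_distribution(spikes, N):
    """Slide an N-bin window along a binary spike train and tally the
    empirical distribution P(w) over N-letter binary words."""
    spikes = np.asarray(spikes, dtype=int)
    words = [tuple(spikes[i:i + N]) for i in range(len(spikes) - N + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

train = [0, 1, 0, 0, 1, 0, 1, 0]
P = word_distribution(train, 2)     # e.g. P[(0, 1)] = 3/7
```

The same function applied to stimulus-aligned segments gives the conditional distribution P(w|s(t)) needed for the noise entropy.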

  7. Spikes: Information in Spike Sequences (2) • Two parameters • dt: bin width • L = N*dt: total duration of the word • Finite sampling poses a problem for information-theoretic approaches: reliably estimating P(w) requires far more data as L grows • Information rate
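With those definitions, the information rate the slide refers to is conventionally written in terms of the word entropies (in the limit of long words):

```latex
R_{\text{info}} \;=\; \lim_{L \to \infty} \frac{1}{L}\left[ H_{\text{total}}(L) - H_{\text{noise}}(L) \right],
\qquad
H_{\text{total}} = -\sum_w P(w)\,\log_2 P(w),
\qquad
H_{\text{noise}} = \left\langle -\sum_w P(w \mid s(t))\,\log_2 P(w \mid s(t)) \right\rangle_t
```

The finite-sampling problem is that the number of possible words grows exponentially with N, so both entropies are biased when estimated from limited data.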

  8. Encoding and Decoding: Linear Decoding • Optimal linear kernel K(t) • Crs: response-stimulus cross-correlation • Css: stimulus autocorrelation • With a white-noise stimulus, Css is flat and K(t) reduces to a scaled spike-triggered average (STA)
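A minimal white-noise STA computation, using a made-up model cell (exponential kernel and sigmoid spike probability, both assumptions for illustration) so that the recovered STA can be checked against the known filter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model cell: Gaussian white-noise stimulus filtered by a known
# exponential kernel, then a sigmoid sets the per-bin spike probability.
T, lags = 50_000, 20
stim = rng.standard_normal(T)
true_kernel = np.exp(-np.arange(lags) / 5.0)
drive = np.convolve(stim, true_kernel)[:T]        # drive[t] = sum_k kernel[k]*stim[t-k]
p_spike = 1.0 / (1.0 + np.exp(-(drive - 2.0)))    # sigmoid nonlinearity
spikes = rng.random(T) < p_spike                  # Bernoulli spikes per bin

def spike_triggered_average(stim, spikes, lags):
    """STA[k]: mean stimulus value k bins before a spike."""
    idx = np.flatnonzero(spikes)
    idx = idx[idx >= lags]                        # keep spikes with a full history
    return np.array([stim[idx - k].mean() for k in range(lags)])

sta = spike_triggered_average(stim, spikes, lags)
# For white noise the STA is proportional to the underlying filter,
# so sta should closely match the shape of true_kernel.
```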

  9. Encoding and Decoding: Cascade Models • Cascade models: a linear filter followed by a static nonlinear decision function • Decision function, e.g., a sigmoid • Two principal weaknesses • It is limited to only one linear feature • As a predictor of neural output, it generates only a time-varying probability, or rate • Poisson spike train (every spike is independent)
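A linear-nonlinear-Poisson (LNP) cascade of this form can be sketched in a few lines; the boxcar kernel, exponential nonlinearity, and bin width below are illustrative assumptions, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(1)

def lnp_spikes(stim, kernel, nonlinearity, dt, rng):
    """Linear-Nonlinear-Poisson cascade: one linear filter, a static
    nonlinearity mapping filter output to a firing rate, then conditionally
    independent Poisson counts per bin (exhibiting both weaknesses noted
    above: a single feature, and rate-only output)."""
    drive = np.convolve(stim, kernel)[:len(stim)]   # L: linear filtering
    rate = nonlinearity(drive)                      # N: static nonlinearity (Hz)
    return rng.poisson(rate * dt)                   # P: independent Poisson spiking

stim = rng.standard_normal(10_000)                  # white-noise stimulus
kernel = np.ones(5) / 5                             # assumed 5-bin boxcar filter
counts = lnp_spikes(stim, kernel, lambda x: 10.0 * np.exp(x), 0.001, rng)
```

Because each bin is an independent Poisson draw given the rate, the model cannot capture spike-history effects such as refractoriness, which motivates the modified (integrate-and-fire) cascade on the next slide.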

  10. Encoding and Decoding: Cascade Models • Modified cascade model • Integrate-and-fire model

  11. Encoding and Decoding: Finding Multiple Features • Spike-triggered covariance matrix • Eigenvalue decomposition of the difference between the spike-triggered and prior covariances • Irrelevant dimensions: eigenvalues close to zero • Relevant dimensions: variance either less than or greater than the prior • Analogous to principal component analysis (PCA)
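A small synthetic example of the covariance method, using a hypothetical cell that responds to the squared projection onto a single feature f, so the STA vanishes by symmetry but the covariance analysis still recovers f:

```python
import numpy as np

rng = np.random.default_rng(2)

def spike_triggered_covariance(windows_all, windows_spike):
    """Eigendecompose the difference between the spike-triggered and prior
    stimulus covariances: eigenvalues near zero mark irrelevant dimensions;
    eigenvalues well above or below zero mark candidate relevant features."""
    dC = np.cov(windows_spike, rowvar=False) - np.cov(windows_all, rowvar=False)
    return np.linalg.eigh(dC)                       # eigenvalues in ascending order

# Made-up cell: spikes whenever the squared projection onto f is large,
# a symmetric nonlinearity that defeats the STA.
d = 8
f = np.zeros(d); f[3] = 1.0
X = rng.standard_normal((20_000, d))                # prior stimulus windows
spiked = (X @ f) ** 2 > 2.0
evals, evecs = spike_triggered_covariance(X, X[spiked])
top_mode = evecs[:, np.argmax(np.abs(evals))]       # should align with f
```

Here the spike-conditioned variance along f is larger than the prior, giving one large positive eigenvalue; a suppressive feature would instead show up as a large negative one.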

  12. Examples of the Application of Covariance Methods (1) • Neural model • Second filter • Two significant modes (negative) • The STA is a linear combination of f and f’. • Noise effect • Spike interdependence

  13. Examples of the Application of Covariance Methods (2) • Leaky integrate-and-fire neuron (LIF) • C: capacitance, R: resistance, Vc: threshold, V: membrane potential • Causal exponential kernel • Lower limit of integration
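A forward-Euler sketch of the LIF dynamics C dV/dt = -V/R + I(t) with reset at threshold; the parameter values here (tau = RC = 10 ms, 20 mV threshold) are illustrative, not taken from the chapter:

```python
import numpy as np

def lif_simulate(I, dt=1e-4, C=1e-9, R=1e7, V_th=0.02, V_reset=0.0):
    """Leaky integrate-and-fire: C dV/dt = -V/R + I(t), integrated by
    forward Euler; when V crosses the threshold V_th the neuron emits a
    spike and V is reset to V_reset."""
    V = V_reset
    spike_times = []
    for t, i_ext in enumerate(I):
        V += dt * (-V / R + i_ext) / C
        if V >= V_th:
            spike_times.append(t)
            V = V_reset
    return spike_times

# Constant suprathreshold current (I*R = 30 mV > 20 mV threshold) for 1 s
# of simulated time produces regular spiking; a subthreshold current
# (I*R = 10 mV) produces none.
spikes = lif_simulate(np.full(10_000, 3e-9))
```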

  14. Examples of the Application of Covariance Methods (3) • Reverse correlation • How changes in the neuron’s biophysics alter the recovered features • Nucleus magnocellularis (NM) • DTX effect

  15. Using Information to Assess Decoding • Decoding: to what extent has one captured what is relevant about the stimulus? • Use Bayes’ rule • N-dimensional model • Single-spike information • The 1D STA-based model recovers ~63% of the single-spike information; the 2D model recovers ~75%
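For a discrete stimulus set, the Bayes'-rule step reduces to normalizing likelihood times prior; a minimal sketch with made-up numbers (two stimuli, assumed likelihoods):

```python
import numpy as np

def posterior_given_spike(p_spike_given_s, p_s):
    """Bayes' rule: P(s | spike) = P(spike | s) P(s) / sum_s' P(spike | s') P(s')."""
    joint = np.asarray(p_spike_given_s, dtype=float) * np.asarray(p_s, dtype=float)
    return joint / joint.sum()

p_s = np.array([0.5, 0.5])               # prior over two hypothetical stimuli
p_spike_given_s = np.array([0.8, 0.2])   # assumed model likelihoods
post = posterior_given_spike(p_spike_given_s, p_s)   # [0.8, 0.2]
```

Comparing the information carried by the model posterior with the measured single-spike information is what yields recovery fractions like the ~63% and ~75% quoted on the slide.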

  16. Adaptive Spike Coding (1) • Adaptation (cat’s toepad) • Fly large monopolar cells

  17. Adaptive Spike Coding (2) • Although the firing rate is changing, we can use a variant of the information methods • White-noise stimulus • Input/output relations rescale with the stimulus standard deviation
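The rescaling claim can be illustrated with a toy adapting neuron whose response depends only on s/sigma (a model assumption for illustration, not the chapter's data): input/output curves measured at two contrasts coincide when the stimulus axis is expressed in units of sigma.

```python
import numpy as np

rng = np.random.default_rng(3)

def io_curve(stim, rate, edges):
    """Empirical input/output relation: mean response within each stimulus bin."""
    which = np.digitize(stim, edges)
    return np.array([rate[which == k].mean() for k in range(1, len(edges))])

# Toy adapting neuron: a fixed sigmoid applied to the rescaled stimulus.
g = lambda x: 1.0 / (1.0 + np.exp(-x))

curves = []
for sigma in (1.0, 4.0):                        # two stimulus contrasts
    s = sigma * rng.standard_normal(100_000)
    r = g(s / sigma)                            # adaptive rescaling (assumption)
    edges = sigma * np.linspace(-2.0, 2.0, 9)   # bins in units of sigma
    curves.append(io_curve(s, r, edges))
# Plotted against s / sigma, the two input/output curves lie on top of
# each other, the signature of adaptive rescaling.
```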
