The search for organizing principles of brain function
• Needed at multiple levels: synapse => cell => brain area (cortical maps) => hierarchy of areas
• Self-organization: Hebbian learning => feature-analyzing cells => cortical maps
• Information theory, a neural optimization principle, and applications
• Prediction, control, and the “local cortical circuit” (LCC)
Self-organization
• Pattern formation (Turing, 1952) from simple local rules (e.g., Hebb, 1949)
• Hebb rule: when the firing of cell A contributes to that of cell B, increase the efficiency (synaptic strength) with which A excites B to fire.
• An early puzzle: how does a layer of orientation-selective cells (Hubel & Wiesel, 1960s-70s) form?
• An early example of the power of Hebbian learning: the Hebb rule + short-range connections + locally correlated random electrical activity can yield orientation-selective cells & their patterning in a layer (RL). (A single-cell sketch follows below.)
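A minimal sketch (an illustration for this writeup, not code from the talk) of the single-cell case: the Hebb rule with weight normalization, driven by locally correlated random input, converges to the input's principal component, i.e., a feature-analyzing cell (cf. Oja, 1982). All parameter values here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, eta = 50, 0.01

# Short-range Gaussian kernel: smoothing white noise with it gives
# locally correlated random activity (neighboring inputs tend to co-fire).
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)

w = rng.normal(scale=0.1, size=n_inputs)   # synapses from input cells onto cell B
for _ in range(5000):
    x = np.convolve(rng.normal(size=n_inputs), kernel, mode="same")
    y = w @ x                  # linear response of cell B
    w += eta * y * x           # Hebb rule: co-activity strengthens each synapse
    w /= np.linalg.norm(w)     # normalization keeps the weights bounded

# w now approximates the leading eigenvector of the input correlation
# matrix: the cell has become selective for the dominant input feature.
```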
Self-organization in cortical models
• Orientation map (figure; R Linsker, 1986)
• Movie: J Sirosh, R Miikkulainen, & JA Bednar (UT Austin), 1996 [courtesy JA Bednar]: http://www.cs.utexas.edu/~nn/web-pubs/htmlbook96/sirosh/or_quad.mpg
Some higher-level properties that can result from Hebbian learning
• Feature-analyzing (selective) cells.
• “Infomax” principle (RL): create a layer of cells whose outputs convey maximum (Shannon) information about the layer's inputs, subject to biological constraints & costs (types of allowed processing, wiring length, energy cost, etc.). An optimal encoding principle.
• Various uses of infomax:
• Models of neural learning & development
• Qualitative (RL, others) and quantitative (Atick et al.) agreement with experiment
• Infomax-based ICA (independent component analysis) (Bell & Sejnowski, 1995): reconstructs N statistically independent sources, given N linear combinations of them (see the sketch after this list).
• Nonlinear infomax is one way to generate “sparse representations.” Sparse coding was used to reconstruct 3 speech sources given only the composite signal at each of 2 receivers (RL, 2001).
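To make the ICA bullet concrete, here is a minimal batch sketch (illustrative, not the authors' code) of Bell & Sejnowski's infomax rule in its natural-gradient form: maximizing the output entropy of a layer of logistic units un-mixes linear combinations of independent sources. The source statistics and step size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, eta = 2, 10_000, 0.001

S = rng.laplace(size=(n, T))      # independent super-Gaussian sources
A = rng.normal(size=(n, n))       # unknown mixing matrix
X = A @ S                         # observed linear mixtures

W = np.eye(n)                     # unmixing matrix to be learned
for _ in range(500):
    U = W @ X
    Y = 1.0 / (1.0 + np.exp(-U))  # logistic output nonlinearity
    # Natural-gradient infomax update (Bell & Sejnowski, 1995):
    # ascend the output entropy; the trailing W avoids a matrix inversion.
    W += eta * (np.eye(n) + (1.0 - 2.0 * Y) @ U.T / T) @ W

U = W @ X   # rows approximate the sources, up to scale & permutation
```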
Sparse representation of mixture of sources
[Figure: time-frequency (spectrogram) plot of the mixture; axes: time (horizontal), frequency (vertical)]
Labeling using a source signature
[Figure: time-frequency plot with bins labeled by source; axes: time, frequency]
Can obtain the source signature from:
- Relative transfer function (attenuation & phase shift at each frequency) from source to the two receivers (used here; sketched below).
- Other methods (none used here): pitch tracking; phoneme properties; de-mixing two overlapping sources using two received mixtures; etc.
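One way to compute such a signature, sketched under loose assumptions (DUET-style clustering of per-bin attenuation and delay; the names x1, x2, fs and all parameters are placeholders, and this is not necessarily the method used in the talk):

```python
import numpy as np
from scipy.signal import stft
from scipy.cluster.vq import kmeans2

def label_bins(x1, x2, fs, n_sources=3):
    """Label each time-frequency bin with a source index, based on the
    per-bin relative transfer function between the two receivers."""
    f, t, X1 = stft(x1, fs=fs, nperseg=1024)
    _, _, X2 = stft(x2, fs=fs, nperseg=1024)
    ratio = X2 / (X1 + 1e-12)       # relative transfer function per bin
    atten = np.abs(ratio)           # attenuation at each frequency
    # Phase shift converted to a delay (phase unwrapping ignored here):
    delay = np.angle(ratio) / (2 * np.pi * np.maximum(f[:, None], f[1]))
    feats = np.column_stack([atten.ravel(), delay.ravel()])
    # Speech is sparse in time-frequency, so most bins are dominated by a
    # single source; cluster the (attenuation, delay) signatures by source.
    _, labels = kmeans2(feats, n_sources, minit="++")
    return labels.reshape(X1.shape)
```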
Masking & reconstruction
[Figure: masked time-frequency plot of a single source; axes: time, frequency]
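Given per-bin labels like those above, masking and reconstruction reduce to zeroing the bins assigned to other sources and inverting the spectrogram. A minimal sketch (same placeholder names and STFT settings as the previous snippet):

```python
import numpy as np
from scipy.signal import stft, istft

def reconstruct_source(x1, fs, labels, source_id):
    """Apply a binary time-frequency mask for one source, then invert."""
    _, _, X1 = stft(x1, fs=fs, nperseg=1024)
    mask = (labels == source_id)              # keep only this source's bins
    _, x_hat = istft(X1 * mask, fs=fs, nperseg=1024)
    return x_hat                              # estimated source waveform
```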
Acoustic separation demo
• Mixture of 3 stereo speech sources
• Source 1: reconstruction & original
• Source 2: reconstruction & original
• Source 3: reconstruction & original
The “local cortical circuit” (LCC)
• Substantial uniformity of cell organization & connectivity across neocortical areas (Mountcastle)
• What are the core functions of the LCC “module”?
• A recurrent neural net that can combine “bottom-up” data and “top-down” expectations. LCC role in: forming generalizations? stabilizing feature analysis within each cortical processing area? Bayesian inference?
• It has long been clear that prediction, estimation, inference, & goal-directed motor control are important functions of mammalian brains.
• Recent work (RL): a neural-net algorithm for optimal Kalman estimation (prediction) and control. The algorithm implies a set of constraints on the network's circuitry & signal flows, and the resulting architecture turns out to be similar to the LCC. (The underlying Kalman step is sketched below.)
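For reference, the textbook Kalman predict/update cycle that such a network would need to implement (standard equations only; the talk's neural-net realization and its circuit constraints are not reproduced here):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One cycle of the standard Kalman filter for the linear-Gaussian
    model  x_next = F x + w (cov Q),  z = H x + v (cov R)."""
    # Predict: propagate the state estimate and its covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend prediction and measurement z via the Kalman gain K.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```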
Some other important unsolved problems
• “Fast learning”: animals vs. neural nets
• Learning causal relations: deterministic or statistical? Learning powerful invariances and the “right” representations. Is statistical learning over-emphasized?
• Principles governing the processing, segregation, & integration of information streams (e.g., color, form, “what” & “where”)?
• Common ground between perception & human concept formation: learning similarity metrics that are useful for forming generalizations & for behavior.
• How is information coded? (Firing rates, spike timing, place coding, synchrony & phase-locking, …?)
• What representations are really used by the brain? Some surprises, e.g., “change blindness” (R Rensink demo).
• The “binding problem”; self-awareness & consciousness
• Tools: how to probe circuit dynamics (of multiple interconnected cells) at fine spatial & temporal resolution?