
Neural Network with Memory and Cognitive Functions

Janusz A. Starzyk and Yue Li, School of Electrical Engineering and Computer Science, Ohio University, Athens, OH 45701, U.S.A.; David D. Vogel, Ross University School of Medicine, Commonwealth of Dominica.



  1. Neural Network with Memory and Cognitive Functions Janusz A. Starzyk and Yue Li, School of Electrical Engineering and Computer Science, Ohio University, Athens, OH 45701, U.S.A.; David D. Vogel, Ross University School of Medicine, Commonwealth of Dominica

  2. Organization • Motivation • R-nets • Statistical model of R-nets • Simulation results of statistical model • Simulation results of R-nets • Conclusions

  3. Motivation To design an intelligent machine capable of • Perception • Capacity to learn and memorize useful things • Spatio-temporal memories • Motor skills in relation to sensing and anticipation • Survival in a complex environment and adaptation • Value-driven behavior (formulation of goals and their implementation) • Abstract thinking and action planning • Ability to communicate • Intuition and creativity • Consciousness

  4. Problems of Classical AI • Lack of robustness and generalization • No real-time processing • Central processing of information by a single processor • No natural interface to environment • No self-organization • Need to write software

  5. Intelligent Behavior • Emergent from interaction with the environment • Based on a large number of sparsely connected neurons • Asynchronous • Interacts with the environment through a sensory-motor system • Value-driven • Adaptive • Coordinated through the agent's: multiple sensory-motor modalities; constraints from morphology and materials; generation of correlations through physical processes; basis for cross-modal associations. From http://tokyolectures.org/

  6. Sparse Connectivity The brain is sparsely connected (unlike most neural nets). A neuron in cortex may have on the order of 100,000 synapses. There are more than 10^10 neurons in the brain. Fractional connectivity is very low: 0.001%. Implications: • Connections are expensive biologically since they take up space, use energy, and are hard to wire up correctly. • Therefore, connections are valuable. • The pattern of connection is under tight control. • Short local connections are cheaper than long ones. Our model makes extensive use of local connections.
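A quick sanity check of the connectivity arithmetic quoted above (the counts are the slide's order-of-magnitude figures, not precise data):

```python
# Back-of-the-envelope check of cortical fractional connectivity.
synapses_per_neuron = 1e5    # ~100,000 synapses per cortical neuron
neurons_in_brain = 1e10      # more than 10^10 neurons in the brain

# Fraction of all neurons that any single neuron connects to.
fraction = synapses_per_neuron / neurons_in_brain
print(f"{fraction * 100}%")  # 0.001%
```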

  7. Neuron Structure and Self-Organizing Principles [figure: human brain at birth, at 6 years old, and at 14 years old]

  8. Cortical Minicolumns “The basic unit of cortical operation is the minicolumn… It contains of the order of 80–100 neurons except in the primate striate cortex, where the number is more than doubled. The minicolumn measures of the order of 40–50 µm in transverse diameter, separated from adjacent minicolumns by vertical, cell-sparse zones…” (V.B. Mountcastle (2003). Introduction [to a special issue of Cerebral Cortex on columns]. Cerebral Cortex, 13, p. 2) [figure: stain of cortex in planum temporale]

  9. R-nets • R-nets are defined as randomly connected artificial neural networks with primary and secondary neurons • They implement distributed memories able to recall input patterns • An R-net with 10^6 excitatory neurons and brain-like connectivity will store at least 2×10^8 bits of information • R-nets have been used to construct networks with higher cognitive functions • A statistical model of R-nets is presented and its results are compared with simulated R-nets.

  10. Properties of R-nets • Biological plausibility • Similar to Hebbian networks • Sparsely connected • Large storage capacities • Use inhibition to prevent neurons not associated with a recalled pattern from firing • Store sequential patterns and attributes From Marr, D. (1971). Simple memory: a theory for archicortex. Philosophical Transactions of the Royal Society of London, B, 262, 23–81.

  11. R-nets • During training, an input pattern is presented to the R-net by activating a selected cluster C of primary neurons. • All links between active primary neurons are trained. • During recall, a subset of one of the stored patterns is presented to the input, activating the corresponding primary neurons (the initial recall set). • In the example, the links from initial primary neuron 1 to destination primary neurons 1 and 2 are untrained, as is the link from initial primary neuron 2 to destination primary neuron 1. • The only trained link is from initial primary neuron 2 to destination primary neuron 2. • Both secondary neurons are strongly activated.
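The training step above can be sketched in a few lines. This is one plausible reading, not the paper's implementation: all names and sizes are illustrative, the wiring is random, and a primary-to-secondary link is treated as trained when it lies on a path between two active primary neurons (i.e. its secondary also hears from another active primary).

```python
import random

random.seed(0)

P, S = 200, 50     # toy counts of primary and secondary neurons (hypothetical)
K_P = 10           # projections from each primary neuron to secondary neurons

# Random bipartite wiring: each primary projects onto K_P distinct secondaries.
projections = {p: random.sample(range(S), K_P) for p in range(P)}

# Link weights, indexed by (primary, secondary): 1 = untrained, 10 = trained.
weights = {(p, s): 1 for p in range(P) for s in projections[p]}

def train(pattern):
    """Train every link lying between two active primary neurons."""
    fan_in = {}                          # secondary -> number of active inputs
    for p in pattern:
        for s in projections[p]:
            fan_in[s] = fan_in.get(s, 0) + 1
    for p in pattern:
        for s in projections[p]:
            if fan_in[s] >= 2:           # link sits between two active primaries
                weights[(p, s)] = 10

cluster_C = set(random.sample(range(P), 20))   # the training cluster C
train(cluster_C)
print(sum(w == 10 for w in weights.values()), "links trained")
```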

  12. R-nets • During recall, secondary neurons sum the weighted projections of the active primary neurons: a_{i,x} = Σ_e W_{i,e} · a_{e,x}, where a_{i,x} is the activity of the ith secondary neuron in the xth cycle; a_{e,x} is the current activity of the eth primary neuron, with possible values 0 or 1; and W_{i,e} is the weight of the projection of the eth primary neuron onto the ith secondary neuron, with possible values 1 (untrained) or 10 (trained).
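The weighted sum described here is easy to write out directly. The toy wiring and names below are invented for illustration; only the binary primary activities and the 1/10 untrained/trained weights come from the slide.

```python
# Toy wiring: secondary neuron 0 receives projections from primaries 0, 1, 2.
projections_to = {0: [0, 1, 2]}
weights = {(0, 0): 10, (1, 0): 10, (2, 0): 1}   # 10 = trained, 1 = untrained
activity = {0: 1, 1: 1, 2: 0}                   # binary primary activities a_{e,x}

def secondary_activity(i):
    """a_{i,x} = sum over e of W_{i,e} * a_{e,x}."""
    return sum(weights[(e, i)] * activity[e] for e in projections_to[i])

print(secondary_activity(0))   # 10*1 + 10*1 + 1*0 = 20
```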

  13. R-nets • The primary neurons are synchronously updated. I_{e,x} is the inhibition of the eth primary neuron on the xth cycle; it is a function of the secondary-neuron activity a_{i,x-1}, defined piecewise [equations on slide]: one case depends on the activity alone, one applies when the projection of the ith secondary neuron is untrained, and one when it is trained.

  14. Statistical Model of R-nets P – number of primary neurons; S – number of secondary neurons; K_p – size of a primary neuron's outgoing projection set; K_s – size of a secondary neuron's outgoing projection set. [figure: set of primary neurons linked to c_i]

  15. Statistical Model of R-nets • The expected value of the number of different primary neurons reaching to (or reached from) a secondary neuron is [equation on slide] • The expected value of the number of secondary neurons reached from (or reaching to) a primary neuron is [equation on slide] • The number of primary neurons linked to a given primary neuron is [equation on slide]
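The slide's expected-value formulas were images and did not survive transcription. Under a uniform random-wiring assumption (each projection lands on a target chosen independently and uniformly), the standard sampling-with-replacement expectation gives one plausible form for such counts; treat this as an illustration of that assumption, not as the paper's equations, and the parameter values as hypothetical.

```python
def expected_distinct(n_targets, n_draws):
    """Expected number of distinct targets hit by n_draws independent,
    uniform draws from n_targets possibilities (sampling with replacement)."""
    return n_targets * (1 - (1 - 1 / n_targets) ** n_draws)

# Hypothetical toy sizes, not the paper's parameters.
P, S = 4000, 1000        # primary / secondary neuron counts
K_p, K_s = 100, 400      # outgoing projections per primary / per secondary

print(expected_distinct(S, K_p))   # distinct secondaries reached from one primary
print(expected_distinct(P, K_s))   # distinct primaries reaching one secondary
```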

  16. Spurious neurons A spurious neuron • is not part of the original pattern but is activated during the recall process. • The probability that a potentially spurious neuron c_j is inhibited depends on the probability of an inhibitory link from an active primary neuron. • Alternatively, a neuron may be spurious if it is not linked to any activated neurons at all.

  17. Spurious neurons A neuron is spurious if it satisfies the following conditions: a) it has no projection from S_wa; b) all its projections from S_sa are trained. S_sa is the strongly activated set of secondary neurons and S_wa is the weakly activated set of secondary neurons.

  18. Eliminating spurious neurons • The probability that a secondary node y belongs to S_a is [equation on slide] • The probability that a node in S_a is strongly activated is approximately [equation on slide] • The probability that a potentially spurious node z is not linked to any node in S_wa is [equation on slide]
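The probabilities on this slide were also lost with the equation images. As an illustration only: if a node has k independent chances of linking to S_wa, each with probability p, the no-link probability takes the familiar (1 − p)^k form. The values below are hypothetical, and the Monte Carlo run just checks the closed form against simulation.

```python
import random

random.seed(1)

p_link, k = 0.02, 50          # hypothetical per-projection link probability, trials
analytic = (1 - p_link) ** k  # P(z has no link to S_wa) under independence

# Monte Carlo estimate of the same quantity.
trials = 100_000
no_link = sum(all(random.random() >= p_link for _ in range(k))
              for _ in range(trials))
print(round(analytic, 3), round(no_link / trials, 3))
```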

  19. Eliminating spurious neurons • The probability that a primary node z not linked to S_wa is connected to a node in S_sa is [equation on slide] • The probability that z is a spurious node is [equation on slide] • The probability that the recall set has no more than S_max spurious neurons is [equation on slide]

  20. Missing neurons • A missing neuron is a neuron from C − C_r which is suppressed by an inhibitory projection to an activated primary neuron. • Lemma: Each missing neuron is suppressed by an inhibitory link to a spurious neuron connected through a secondary node w, where w is different from all nodes in the activated set S_a.

  21. Eliminating missing neurons • The probability that a given primary node will be missing due to a single spurious node can be estimated to be less than [equation on slide] • The probability that a single primary neuron is missing is [equation on slide] • The probability that the recall set has no more than S_max missing neurons is [equation on slide]

  22. Results of the Statistical Model The statistical model is in good agreement with simulated R-nets and can be applied to estimate the computational performance of very large R-nets. [figure: storage capacities of R-nets with 4000 and 64,000 primary neurons]

  23. Results of the Statistical Model [figure: storage capacities of R-nets for 100- and 150-neuron patterns]

  24. Results of the Statistical Model • Storage capacity of R-nets with 10^6 up to 10^7 primary neurons • 10^5 to 2×10^6 secondary neurons • 20- to 160-neuron patterns • Up to 1000 projections from each primary neuron to secondary neurons • Up to 5000 projections from each secondary neuron back to primary neurons.

  25. Simulations of R-nets [figure: storage capacity for one cycle of recall of an R-net with 4000 primary neurons]

  26. Iterative operation of R-nets [figure: quick suppression of the spurious neurons: spurious neurons suppressed after 14 cycles]

  27. Iterative operation of R-nets [figures: oscillations in the recovery process; spurious neurons destroy the recovery process]

  28. Conclusion • When the training sets are small, errors are due to the firing of spurious neurons with few links to the recall sets • When the training sets are large, errors are chiefly due to over-training • Storage capacity grows faster than the number of primary neurons, with a slope of growth close to 10/7 on the logarithmic scale • An R-net with 10^6 primary neurons can store more than 130,000 100-neuron sets • When R-net size reaches 10^9 primary neurons (with an average number of projections of 10^4, similar to the interconnection density of the human brain), the network can store over 10^9 patterns of 150 neurons each
