This presentation examines the backbone structure of the Hopfield model, discussing the backbone patterns, the localization of neuron damage and of learning, and the computational cost of neural network learning. The findings explore the implications of non-zero self-connections and suggest that stabilizing memory may not be the only purpose of learning.
ICANN 2006, Greece
Backbone Structure of Hairy Memory
Cheng-Yuan Liou
Department of Computer Science and Information Engineering, National Taiwan University
Discussions
• The patterns in {N_i,p & N_i,n} are the backbones of the Hopfield model; they form the backbone structure of the model.
• The hairy model is a homeostatic system.
• All four methods, et-AM, e-AM, g-AM, and b-AM, derive asymmetric weight matrices with nonzero diagonal elements while keeping Hebb’s postulate.
• In almost all of our simulations, the evolution of states converged in a single iteration (basin-1) during recall after learning; a small check of this behavior is sketched below. This is very different from the evolutionary recall process in many other models.
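The basin-1 behavior can be made concrete with a small test: corrupt a stored pattern, apply one synchronous update, and check whether the pattern is recovered. The sketch below is not the paper's et-AM/e-AM/g-AM/b-AM code; it uses a plain Hebbian outer-product matrix (with the diagonal kept nonzero) as a stand-in, and the helper names `recall_step` and `converges_in_one_step` are illustrative assumptions.

```python
# Minimal sketch of a "basin-1" recall check for a Hopfield-style network.
import numpy as np

def recall_step(W, state):
    """One synchronous update of a bipolar (+1/-1) network with weights W."""
    return np.where(W @ state >= 0, 1, -1)

def converges_in_one_step(W, pattern, flip_fraction=0.1, seed=None):
    """Flip a fraction of bits in `pattern` and test one-step recall."""
    rng = np.random.default_rng(seed)
    probe = pattern.copy()
    n_flip = max(1, int(flip_fraction * len(pattern)))
    flips = rng.choice(len(pattern), size=n_flip, replace=False)
    probe[flips] *= -1
    return np.array_equal(recall_step(W, probe), pattern)

# Stand-in weights: plain Hebbian outer products, diagonal kept (W[i, i] = P/N != 0).
N, P = 64, 4
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
print([converges_in_one_step(W, p, seed=1) for p in patterns])
```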
Discussions
• All three methods, et-AM, e-AM, and g-AM, operate in one shift: each hyperplane is adjusted in turn, and each iteration improves the location of a single hyperplane.
• Each hyperplane is independent of all others during learning, which localizes both neuron damage and learning; see the sketch after this list.
• The computational cost is linearly proportional to the network size, N, and the number of patterns, P.
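To illustrate how adjusting one hyperplane at a time localizes learning and keeps the cost proportional to N and P, the sketch below treats each row of W as an independent hyperplane and trains it with a generic perceptron-style correction. This is a stand-in under that assumption, not et-AM, e-AM, or g-AM themselves; the function name `learn_rows_independently` and the learning-rate and epoch parameters are illustrative.

```python
# Each neuron i owns one row of W (one hyperplane) and learns it independently
# of all other rows; one sweep over the data costs on the order of N * P updates.
import numpy as np

def learn_rows_independently(patterns, epochs=50, lr=0.1):
    P, N = patterns.shape
    W = np.zeros((N, N))                       # rows may be asymmetric; diagonal unconstrained
    for i in range(N):                         # each hyperplane adjusted in turn
        for _ in range(epochs):
            updated = False
            for x in patterns:                 # cost per sweep: proportional to N * P
                if np.sign(W[i] @ x) != x[i]:  # perceptron-style correction on row i only
                    W[i] += lr * x[i] * x      # note: this also grows W[i, i] above zero
                    updated = True
            if not updated:                    # row i done; other rows are unaffected
                break
    return W

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(5, 32)).astype(float)
W = learn_rows_independently(patterns)
print(all(np.array_equal(np.where(W @ x >= 0, 1, -1), x) for x in patterns))
```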
Discussions
• All of the methods, et-AM, e-AM, g-AM, and b-AM, give non-zero values to the self-connections, w_ii ≠ 0, which is very different from Hopfield’s setting, w_ii = 0. A small illustration of the self-connection term follows this list.
• We are still attempting to understand and clarify the meaning of the setting w_ii = 0, where newborn neurons start learning from full self-reference, w_ii = 1, and end with whole-network reference, w_ii = 0.
• This is beneficial for cultured neurons working as a whole. It implies that stabilizing memory might not be the only purpose of learning and evolution.
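The role of the self-connection can be read off the local field: with w_ii ≠ 0, neuron i's own previous state contributes w_ii · s_i to its field, whereas Hopfield's w_ii = 0 removes that self-reference term. The snippet below (assuming bipolar states and a generic weight matrix; `local_fields` is a hypothetical helper, not from the paper) simply prints that difference.

```python
# Compare local fields with the diagonal kept versus zeroed (Hopfield's setting).
import numpy as np

def local_fields(W, state, zero_diagonal):
    """Return h_i = sum_j w_ij s_j, optionally forcing w_ii = 0 first."""
    Wd = W.copy()
    if zero_diagonal:
        np.fill_diagonal(Wd, 0.0)              # Hopfield's choice, w_ii = 0
    return Wd @ state

rng = np.random.default_rng(0)
N = 8
W = rng.normal(size=(N, N))                    # generic asymmetric matrix, nonzero diagonal
s = rng.choice([-1.0, 1.0], size=N)
diff = local_fields(W, s, zero_diagonal=False) - local_fields(W, s, zero_diagonal=True)
print(np.allclose(diff, np.diag(W) * s))       # the difference is exactly w_ii * s_i
```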
Discussions
• The Boltzmann machine can be designed according to et-AM, e-AM, or g-AM; a generic stochastic-update sketch follows.
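As a very rough sketch only: one generic reading is to reuse a weight matrix W learned by one of the above rules as the couplings of a stochastic network, replacing the deterministic sign update with Glauber/Boltzmann dynamics at temperature T. The paper's actual construction is not reproduced here; `stochastic_update` and its parameters are assumptions made for illustration.

```python
# One asynchronous sweep of Boltzmann-style stochastic updates over bipolar units.
import numpy as np

def stochastic_update(W, state, T=1.0, seed=None):
    """Resample each unit from P(s_i = +1 | rest) = 1 / (1 + exp(-2 h_i / T))."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for i in rng.permutation(len(s)):
        h = W[i] @ s                                  # local field of unit i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * h / T))   # Boltzmann acceptance for s_i = +1
        s[i] = 1.0 if rng.random() < p_plus else -1.0
    return s
```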