The Evolution of Learning Algorithms for Artificial Neural Networks
Published 1992 in Complex Systems by Jonathan Baxter
Michael Tauraso
Genetic Algorithm on NNs
• Start with a population of neural networks.
• Evaluate the fitness of each network on a particular task.
• Weed out the low-fitness networks.
• Breed the high-fitness ones to make a new population.
• Repeat (a minimal sketch of this loop follows).
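As a concrete illustration, the loop might look like the following minimal Python sketch. The helpers random_network, fitness, and crossover are hypothetical stand-ins; the paper's actual genome, fitness task, and breeding operator are described on the following slides.

```python
import random

def random_network():
    # Stand-in: a genome is a fixed-length binary string.
    return [random.randint(0, 1) for _ in range(64)]

def fitness(genome):
    # Stand-in: real fitness would train the decoded network on a task.
    return sum(genome) / len(genome)

def crossover(parent_a, parent_b):
    # Single-point crossover on the binary genome (an assumed operator).
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def evolve(pop_size=100, generations=50, survival_fraction=0.5):
    population = [random_network() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and weed out the low-fitness half.
        population.sort(key=fitness, reverse=True)
        survivors = population[: int(pop_size * survival_fraction)]
        # Breed the survivors to refill the population.
        children = [crossover(*random.sample(survivors, 2))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)
```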
Local Binary Neural Networks (LBNNs)
• All weights, inputs, and outputs are binary.
• The learning rule is a local boolean function of two variables.
• These restrictions vastly simplify the model.
• LBNNs are easy to encode as binary strings.
• That makes them a natural fit for genetic algorithms (one possible encoding is sketched below).
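The paper's exact genome layout is not reproduced here, but the flavor of the encoding is easy to sketch: each ternary weight fits in two bits, and each two-variable learning rule fits in its four-bit truth table. All names below are hypothetical.

```python
# Hypothetical encoding: 2 bits per ternary weight, plus the 4-bit
# truth table of the learning rule.
WEIGHT_BITS = {+1: (0, 1), -1: (1, 0), 0: (0, 0)}

def encode(weights, rule_truth_table):
    """Pack a weight list and a 4-bit learning rule into one bit string."""
    bits = []
    for w in weights:
        bits.extend(WEIGHT_BITS[w])
    bits.extend(rule_truth_table)
    return bits

# Example: three weights plus the rule f(a_i, a_j) = a_i * a_j, whose
# truth table is (1, 0, 0, 1) over rows (+,+), (+,-), (-,+), (-,-)
# with output bit 1 meaning +1 (row order is an assumption here).
genome = encode([+1, 0, -1], (1, 0, 0, 1))
```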
Rules for LBNNs
• Weights are +1, -1, or 0.
• Nodes: a_i(t+1) = sign( Σ_j w_ij(t) a_j(t) )
• Weights: w_ij(t+1) = f(a_i(t), a_j(t))
• Each weight is classified as fixed or learnable; all zero weights are fixed.
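One synchronous update step, following the two rules above, might look like this sketch. The matrix layout, the learnable mask, and the sign(0) = +1 tie-break are assumptions, since the paper's conventions are not shown here.

```python
import numpy as np

def sign(x):
    return np.where(x >= 0, 1, -1)   # tie-break sign(0) = +1 (assumption)

def step(a, w, learnable, f):
    """Apply both update rules from the current state (a(t), w(t))."""
    # w_ij(t+1) = f(a_i(t), a_j(t)), but only where the weight is
    # learnable; fixed weights (including all zero weights) are kept.
    w_next = np.where(learnable, f(a[:, None], a[None, :]), w)
    # a_i(t+1) = sign(sum_j w_ij(t) a_j(t))
    a_next = sign(w @ a)
    return a_next, w_next

# The Hebb analogue as a vectorized rule: f(a_i, a_j) = a_i * a_j.
hebb = lambda ai, aj: ai * aj
```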
Training Rules
• Boolean functions of two variables.
• 16 possible varieties (one per four-row truth table).
• The analog of Hebb's rule is given by f(a_i(t), a_j(t)) = a_i(t) a_j(t).
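Since a rule is a boolean function of two ±1 inputs, its truth table has 4 rows, giving 2^4 = 16 rules. A sketch of the enumeration, with the Hebb analogue picked out:

```python
from itertools import product

# Row order for the truth tables: (+1,+1), (+1,-1), (-1,+1), (-1,-1).
inputs = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]

def rule_from_table(table):
    """Build f(a_i, a_j) from a 4-tuple of +/-1 outputs."""
    lookup = dict(zip(inputs, table))
    return lambda ai, aj: lookup[(ai, aj)]

# All 16 two-variable learning rules.
all_rules = [rule_from_table(t) for t in product([+1, -1], repeat=4)]

# The Hebb analogue f(a_i, a_j) = a_i * a_j has table (+1, -1, -1, +1).
hebb = rule_from_table((+1, -1, -1, +1))
assert all(hebb(ai, aj) == ai * aj for ai, aj in inputs)
```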
Training Goal
• Learn the 4 boolean functions of one variable: identity, inverse, always 1, always 0.
• Who wants to learn the boolean functions of one variable anyway?
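In the ±1 coding of the networks (mapping the bit 0 to -1, which is an assumption of this sketch), the four targets are:

```python
# The four one-variable boolean targets the networks must learn.
TARGETS = {
    "identity": lambda x: x,
    "inverse":  lambda x: -x,
    "always 1": lambda x: +1,
    "always 0": lambda x: -1,   # bit 0 represented as -1 (assumption)
}
```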
Fitness Determination
• Start with an LBNN from the sample population.
• Clamp the output node to train the network on a particular boolean function.
• Fitness is how well the network computes that boolean function after training (sketched below).
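Putting these pieces together, a toy, self-contained version of the fitness evaluation might look as follows. The 3-node network, the clamping scheme, and the step counts are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def sign(x):
    return np.where(x >= 0, 1, -1)

def fitness(w, learnable, target, train_steps=20, test_steps=5):
    """Train with the Hebb analogue, then score on the target function."""
    w = w.copy()
    for x in (+1, -1):                          # training phase
        a = np.array([x, 1, target(x)])         # clamp input and output
        for _ in range(train_steps):
            w = np.where(learnable, np.outer(a, a), w)  # Hebb update
            a = sign(w @ a)
            a[0], a[2] = x, target(x)           # re-clamp after each step
    correct = 0
    for x in (+1, -1):                          # test phase: output free
        a = np.array([x, 1, 1])
        for _ in range(test_steps):
            a = sign(w @ a)
            a[0] = x                            # only the input is clamped
        correct += int(a[2] == target(x))
    return correct / 2

# Example: random ternary weights; zero weights are fixed, others learn.
rng = np.random.default_rng(0)
w0 = rng.choice([-1, 0, 1], size=(3, 3))
mask = (w0 != 0) & ~np.eye(3, dtype=bool)
print(fitness(w0, mask, target=lambda x: -x))   # score on "inverse"
```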
Findings
• Hebb's rule is the most efficient learning rule.
• LBNNs can be thought of as state machines.
LBNNs as State Machines
• Boolean functions are encoded as transitions between fixed points of the network.
• The other transitions push the network toward the appropriate fixed point.