Artificial Neural Networks Michael Prestia COT 4810 March 18, 2008
The Human Brain • Composed of neurons • Neurons send signals to each other
Neurons • Neurons store and transmit information • Neurons send messages to one another through a synapse http://img460.imageshack.us/img460/8744/neuron6ri.gif
Artificial Neural Networks • ANNs simulate neurons to create an artificial brain • The symbol for a synapse in an ANN is a connecting line Anatomical Sketch ANN representation
Neural Networks • The brain takes inputs (senses) and produces outputs (actions) • ANNs act in the same fashion [diagram: inputs feeding into outputs]
Example • A driving simulator can take sensor readings as input • The output represents which direction to move [diagram: sensor inputs (front, left, right, back) map to control outputs (forward, left, right)]
Composition of an ANN • Connections between neurons are called weights • Linearly separable problems do not require a hidden layer [diagram: input layer → hidden layer → output layer]
How Does an ANN Work? • Each neuron computes a weighted sum of its inputs and passes it through an activation function • Neuron j activation: Hj = f(Σi wij · xi) [diagram: inputs X1, X2 feed hidden nodes H1, H2 through weights w11, w12, w21, w22, producing out1, out2]
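The weighted-sum activation of a neuron can be sketched as follows, assuming the sigmoid activation function the later backpropagation slides use; the input and weight values here are made-up examples:

```python
# Minimal sketch of one neuron layer's activation: each hidden node
# applies a sigmoid to the weighted sum of its inputs.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def activate(inputs, weights):
    """Activation of one neuron: sigmoid of the weighted input sum."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

x = [1.0, 0.5]                        # inputs x1, x2 (example values)
w_hidden = [[0.4, -0.6], [0.7, 0.2]]  # weights into H1, H2 (example values)
hidden = [activate(x, w) for w in w_hidden]
print(hidden)  # each activation lies between 0 and 1
```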
Recurrent Connections • Allow feedback • Represent a type of memory [diagram: inputs X1, X2 feed hidden node H through w11, w21; H drives the output, which feeds back into H through recurrent weights]
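The memory effect of a recurrent connection can be sketched as feeding the previous output back in as an extra input on the next time step; the weights and input sequence below are illustrative assumptions, not values from the slides:

```python
# Sketch of a recurrent connection as memory: the previous output is
# carried forward and mixed into the next step's weighted sum.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w_in, w_rec = 0.8, 0.5   # input weight and recurrent (feedback) weight
state = 0.0              # previous output, initially zero
outputs = []
for x in [1.0, 0.0, 0.0]:              # input arrives only at step 1
    state = sigmoid(w_in * x + w_rec * state)
    outputs.append(state)
print(outputs)  # the first input keeps influencing later outputs
```

With no recurrence, every zero-input step would produce the baseline sigmoid(0) = 0.5; the feedback weight keeps later outputs above that baseline.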
Training a Neural Network • Targets can be either known or unknown • Different types of training • Target known • Hebbian Learning • Perceptron Learning • Backpropagation • Target unknown • Neuroevolution
Hebbian and Perceptron Learning • Hebbian Learning • Works best when output is independent of input • Simple Hebbian Rule: wi(new) = wi(old) + xi·y • Perceptron Learning • Weights change to drive the output toward the target • wi(new) = wi(old) + α·t·xi
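The perceptron rule from the slide, wi(new) = wi(old) + α·t·xi, can be sketched on a small linearly separable task. The bipolar (±1) encoding, the AND task, and the convention of updating only on a wrong output are assumptions for this sketch:

```python
# Sketch of perceptron learning: apply wi += alpha * t * xi whenever
# the thresholded output disagrees with the target.
def step(z):
    return 1 if z > 0 else -1

# AND function in bipolar encoding; the last input is a constant bias of 1
data = [([1, 1, 1], 1), ([1, -1, 1], -1), ([-1, 1, 1], -1), ([-1, -1, 1], -1)]
w = [0.0, 0.0, 0.0]
alpha = 0.5
for _ in range(10):                      # a few passes over the data
    for x, t in data:
        if step(sum(wi * xi for wi, xi in zip(w, x))) != t:
            w = [wi + alpha * t * xi for wi, xi in zip(w, x)]

print([step(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data])
```

Because AND is linearly separable, the rule converges in a couple of passes; this is the kind of problem the earlier slide says needs no hidden layer.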
Backpropagation • Designed for at least one hidden layer • General idea • Let activation propagate to outputs • Calculate and assign error values • Adjust weights • Sigmoid activation function is common
Backpropagation (cont.) • 5 steps • Calculate error at outputs • Ek = (tk − ok) × ok(1 − ok) • Adjust weights going into output layer • Wjk += L × Ek × oj • Calculate error at hidden nodes • Ej = oj(1 − oj) × Σk (Ek × Wjk) • Adjust weights going into hidden layer • Wij += L × Ej × oi • Repeat
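The five steps above can be sketched on a tiny 2-2-1 sigmoid network. The network size, starting weights, learning rate, and the single training pair are all made-up example values:

```python
# Minimal sketch of the five backpropagation steps for one training pair.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

L = 0.5                              # learning rate
x = [1.0, 0.0]                       # one training input
t = [1.0]                            # its target
W_ih = [[0.2, -0.1], [0.4, 0.3]]     # W_ih[j][i]: input i -> hidden j
W_ho = [[0.1, -0.2]]                 # W_ho[k][j]: hidden j -> output k

for _ in range(1000):                # step 5: repeat
    # forward pass: let activation propagate to the outputs
    h = [sigmoid(sum(W_ih[j][i] * x[i] for i in range(2))) for j in range(2)]
    o = [sigmoid(sum(W_ho[k][j] * h[j] for j in range(2))) for k in range(1)]
    # step 1: error at outputs  Ek = (tk - ok) * ok * (1 - ok)
    E_o = [(t[k] - o[k]) * o[k] * (1 - o[k]) for k in range(1)]
    # step 3: error at hidden nodes  Ej = oj(1-oj) * sum_k Ek * Wjk
    # (computed before the output weights change)
    E_h = [h[j] * (1 - h[j]) * sum(E_o[k] * W_ho[k][j] for k in range(1))
           for j in range(2)]
    # step 2: adjust hidden->output weights  Wjk += L * Ek * oj
    for k in range(1):
        for j in range(2):
            W_ho[k][j] += L * E_o[k] * h[j]
    # step 4: adjust input->hidden weights  Wij += L * Ej * oi
    for j in range(2):
        for i in range(2):
            W_ij_delta = L * E_h[j] * x[i]
            W_ih[j][i] += W_ij_delta

print(o[0])  # close to the target 1.0 after training
```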
Disadvantage of Backprop • Easy to get stuck in local optima http://content.answers.com/main/content/wp/en/6/67/Fitness-landscape-cartoon.png
Applications of Backprop • Diagnose medical conditions based on past examples • Learn mouse gesture recognition • Learn to control anything by example
Neuroevolution • Uses a genetic algorithm to evolve the weights in a neural network • Genome is direct encoding of weights • Weights are optimized for the given task
Disadvantage of Neuroevolution • Competing Conventions Problem • Permuting the hidden nodes A, B, C gives 3! = 6 different representations of the same network: ABC, ACB, BAC, BCA, CAB, CBA
Other Types of Neuroevolution • Topology and Weight Evolving Artificial Neural Networks (TWEANNS) • NeuroEvolution of Augmenting Topologies (NEAT)
Applications of Neuroevolution • Factory optimization • Game playing (Go, Tic-tac-toe) • Visual recognition • Video Games
References • Dewdney, A.K. The New Turing Omnibus. New York: Henry Holt and Company, 1993. • Buckland, Mat. AI Techniques for Game Programming. Cincinnati: Premier Press, 2002. • Machine Learning I: Michael Georgiopoulos • AI for Game Programming: Kenneth Stanley • Forza Motorsport 2 information from Official Xbox Magazine • Neuron image from http://img460.imageshack.us/img460/8744/neuron6ri.gif • Local optima image from http://content.answers.com/main/content/wp/en/6/67/Fitness-landscape-cartoon.png • All other images copied with permission from http://www.cs.ucf.edu/~kstanley/cap4932spring08dir/CAP4932_lecture9.ppt and http://www.cs.ucf.edu/~kstanley/cap4932spring08dir/CAP4932_lecture12.ppt • Forza Motorsport 2 video from http://media.xbox360.ign.com/media/743/743956/vids_1.html
Homework Questions • What types of problems do not require a hidden layer? • What are two different methods for training neural networks and how do they differ?