
Artificial Neural Networks



Presentation Transcript


  1. Artificial Neural Networks Michael Prestia COT 4810 March 18, 2008

  2. The Human Brain • Composed of neurons • Neurons send signals to each other

  3. Neurons • Neurons store and transmit information • Neurons send messages to one another through a synapse [Image: neuron diagram, http://img460.imageshack.us/img460/8744/neuron6ri.gif]

  4. Artificial Neural Networks • ANNs simulate neurons to create an artificial brain • The symbol for a synapse in an ANN is a connecting line [Figures: anatomical sketch of a neuron alongside its ANN representation]

  5. Neural Networks • The brain takes inputs (senses) and produces outputs (actions) • ANNs act in the same fashion [Diagram: inputs flowing through a network to outputs]

  6. Example • A driving simulator can take in sensors as input • The output will represent which direction to move [Diagram: sensor inputs (front, left, right, back) mapped to control outputs (forward, left, right)]

  7. Composition of an ANN • Connections between neurons are called weights • Linearly separable problems do not require a hidden layer [Diagram: input layer → hidden layer → output layer]
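The transcript has no code here, but the "no hidden layer" claim is easy to illustrate. Below is a minimal sketch of a single-layer network computing logical AND, a linearly separable problem; the weights and bias are illustrative values chosen by hand, not from the slides.

```python
def step(a):
    # Threshold activation: fire (1) if the weighted sum is non-negative.
    return 1 if a >= 0 else 0

def single_layer_and(x1, x2, w1=1.0, w2=1.0, bias=-1.5):
    # Two inputs connect directly to one output neuron; no hidden layer
    # is needed because AND is linearly separable.
    return step(w1 * x1 + w2 * x2 + bias)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", single_layer_and(x1, x2))
```

A problem like XOR, by contrast, has no single line separating its classes, so it needs at least one hidden layer.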

  8. How Does an ANN Work? • Activation propagates from inputs X1, X2 through weights w11, w12, w21, w22 to hidden neurons H1, H2, and on to outputs out1, out2 • Neuron j activation: aj = f(Σi wij xi), the weighted sum of incoming signals passed through an activation function f [Diagram: two-input, two-hidden, two-output network]
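The activation rule above can be sketched in a few lines of Python. This is not the slides' code; it assumes the sigmoid activation mentioned on the backpropagation slide, and the input and weight values are made up for illustration.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def activate(inputs, weights):
    # Neuron activation: the weighted sum of inputs, squashed by sigmoid.
    total = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(total)

# Two inputs X1, X2 feeding one hidden neuron via weights w11, w21.
x = [0.5, 0.8]
h1 = activate(x, [0.4, -0.2])
print(round(h1, 3))
```

Each hidden and output neuron in the diagram applies the same rule to whatever feeds into it.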

  9. Recurrent Connections • Allow feedback • Represents a type of memory [Diagram: inputs X1, X2 feed hidden neuron H via w11, w21; wH-out connects H to the output, and Wout-H feeds the output back into H]
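To see how feedback acts as memory, here is a minimal sketch (not from the slides) of a single hidden unit whose own previous activation is fed back in on the next time step; the weights are illustrative.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def run_recurrent(inputs, w_in=1.0, w_rec=0.5):
    # One hidden unit with a recurrent connection: its previous output
    # is mixed back into the next weighted sum, acting as memory.
    h = 0.0
    history = []
    for x in inputs:
        h = sigmoid(w_in * x + w_rec * h)
        history.append(h)
    return history

# The same input value yields different outputs over time,
# because the remembered state keeps changing.
outs = run_recurrent([1.0, 1.0, 1.0])
print([round(h, 3) for h in outs])
```

A purely feed-forward network fed the same input three times would produce the same output three times; the recurrent unit does not.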

  10. Training a Neural Network • Targets can be either known or unknown • Different types of training • Target known • Hebbian Learning • Perceptron Learning • Backpropagation • Target unknown • Neuroevolution

  11. Hebbian and Perceptron Learning • Hebbian Learning • Works best when the input patterns are independent of one another • Simple Hebbian rule: wi(new) = wi(old) + xiy, where xi is an input and y the output • Perceptron Learning • Weights change to match a known target • wi(new) = wi(old) + αtxi, where α is the learning rate and t the target
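The two update rules can be sketched directly from the slide's formulas. This code is not from the presentation; it assumes bipolar targets (t = ±1) for the perceptron rule, which is the usual setting in which the wi += αtxi form (applied only on misclassification) works.

```python
def step(a):
    return 1 if a >= 0 else 0

def hebbian_update(weights, x, y, rate=1.0):
    # Simple Hebbian rule: w_i(new) = w_i(old) + x_i * y
    return [w + rate * xi * y for w, xi in zip(weights, x)]

def perceptron_update(weights, x, target, rate=0.1):
    # Perceptron rule: w_i(new) = w_i(old) + alpha * t * x_i,
    # applied only when the current output disagrees with the
    # target (assumed here to be +1 or -1).
    output = step(sum(w * xi for w, xi in zip(weights, x)))
    if output == target:
        return weights
    return [w + rate * target * xi for w, xi in zip(weights, x)]
```

Hebbian learning strengthens whichever connections were active together; the perceptron rule instead corrects toward an explicitly known target.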

  12. Backpropagation • Designed for at least one hidden layer • General idea • Let activation propagate to outputs • Calculate and assign error values • Adjust weights • Sigmoid activation function is common

  13. Backpropagation (cont.) • 5 steps • Calculate error at outputs • Ek = (tk – ok) × ok(1 – ok) • Adjust weights going into output layer • Wjk += L × Ek × oj • Calculate error at hidden nodes • Ej = oj(1 – oj) × Σk Wjk × Ek • Adjust weights going into hidden layer • Wij += L × Ej × oi • Repeat
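The five steps can be sketched for a tiny 2-input, 2-hidden, 1-output network. This is not the presentation's code: the starting weights and learning rate are illustrative, biases are omitted for brevity, and the hidden-layer error is computed before the output weights are updated (the hidden error formula needs the old Wjk values).

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train_step(x, target, w_ih, w_ho, rate=0.5):
    # Forward pass: let activation propagate input -> hidden -> output.
    hidden = [sigmoid(sum(w_ih[i][j] * x[i] for i in range(len(x))))
              for j in range(len(w_ho))]
    out = sigmoid(sum(w_ho[j] * hidden[j] for j in range(len(hidden))))
    # Step 1 -- error at the output: E_k = (t_k - o_k) * o_k * (1 - o_k)
    e_out = (target - out) * out * (1 - out)
    # Step 3 -- error at hidden nodes (uses the OLD output weights):
    # E_j = o_j * (1 - o_j) * sum_k(W_jk * E_k)
    e_hid = [hidden[j] * (1 - hidden[j]) * w_ho[j] * e_out
             for j in range(len(hidden))]
    # Step 2 -- adjust weights into the output layer: W_jk += L * E_k * o_j
    for j in range(len(hidden)):
        w_ho[j] += rate * e_out * hidden[j]
    # Step 4 -- adjust weights into the hidden layer: W_ij += L * E_j * o_i
    for i in range(len(x)):
        for j in range(len(hidden)):
            w_ih[i][j] += rate * e_hid[j] * x[i]
    return out

# Step 5 -- repeat: the output drifts toward the target over iterations.
w_ih = [[0.5, -0.5], [0.3, 0.8]]   # input -> hidden weights (illustrative)
w_ho = [0.2, -0.4]                 # hidden -> output weights (illustrative)
for _ in range(200):
    out = train_step([1.0, 0.0], 1.0, w_ih, w_ho)
```

The ok(1 − ok) factors in both error formulas are the derivative of the sigmoid, which is why that activation function is the common choice here.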

  14. Disadvantage of Backprop • Easy to get stuck in local optima [Image: fitness landscape cartoon, http://content.answers.com/main/content/wp/en/6/67/Fitness-landscape-cartoon.png]

  15. Applications of Backprop • Diagnose medical conditions based on past examples • Learn mouse gesture recognition • Learn to control anything by example

  16. Neuroevolution • Uses a genetic algorithm to evolve the weights in a neural network • Genome is direct encoding of weights • Weights are optimized for the given task

  17. Code Example
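The code from this slide is not preserved in the transcript. As a stand-in, here is a minimal sketch of the idea from the previous slide: a genetic algorithm whose genome is a direct encoding of a tiny network's weights, evolved by keeping the fittest half of the population and refilling with mutated copies. All parameters (population size, mutation scale, the OR task) are illustrative choices, not the author's.

```python
import math
import random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def fitness(genome, samples):
    # Genome directly encodes (w1, w2, bias) of a 2-input, 1-output
    # network; fitness is the negated total squared error.
    error = 0.0
    for (x1, x2), target in samples:
        out = sigmoid(genome[0] * x1 + genome[1] * x2 + genome[2])
        error += (target - out) ** 2
    return -error

def evolve(samples, pop_size=30, generations=100):
    random.seed(1)  # deterministic run, for the sake of the example
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, samples), reverse=True)
        survivors = pop[:pop_size // 2]  # keep the fittest half unchanged
        # Refill the population with Gaussian-mutated copies of survivors.
        pop = survivors + [
            [w + random.gauss(0, 0.3) for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=lambda g: fitness(g, samples))

# Evolve weights so the network computes logical OR.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
best = evolve(samples)
```

Note that no targets per weight are ever computed: unlike backpropagation, the genetic algorithm only needs a fitness score for the whole network.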

  18. Disadvantage of Neuroevolution • Competing Conventions Problem • Three hidden neurons A, B, C can be ordered 3! = 6 ways (ABC, ACB, BAC, BCA, CAB, CBA), giving 6 different representations of the same network
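The 3! = 6 count is just the number of permutations of the hidden neurons, which a couple of lines make concrete (this snippet is illustrative, not from the slides):

```python
from itertools import permutations

# Three hidden neurons A, B, C can appear in any order in the genome;
# every ordering encodes the same underlying network.
orderings = ["".join(p) for p in permutations("ABC")]
print(len(orderings), orderings)
```

Crossover between two genomes using different orderings can destroy a good network, which is why this is a problem for naive neuroevolution.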

  19. Other Types of Neuroevolution • Topology and Weight Evolving Artificial Neural Networks (TWEANNS) • NeuroEvolution of Augmenting Topologies (NEAT)

  20. Applications of Neuroevolution • Factory optimization • Game playing (Go, Tic-tac-toe) • Visual recognition • Video Games

  21. References • Dewdney, A. K. The New Turing Omnibus. New York: Henry Holt and Company, 1993. • Buckland, Mat. AI Techniques for Game Programming. Cincinnati: Premier Press, 2002. • Machine Learning I: Michael Georgiopoulos • AI for Game Programming: Kenneth Stanley • Forza Motorsport 2 information from Official Xbox Magazine • Neuron image from http://img460.imageshack.us/img460/8744/neuron6ri.gif • Local optima image from http://content.answers.com/main/content/wp/en/6/67/Fitness-landscape-cartoon.png • All other images copied with permission from http://www.cs.ucf.edu/~kstanley/cap4932spring08dir/CAP4932_lecture9.ppt and http://www.cs.ucf.edu/~kstanley/cap4932spring08dir/CAP4932_lecture12.ppt • Forza Motorsport 2 video from http://media.xbox360.ign.com/media/743/743956/vids_1.html

  22. Homework Questions • What types of problems do not require a hidden layer? • What are two different methods for training neural networks and how do they differ?
