
Neural Nets






Presentation Transcript


  1. Neural Nets

  2. Symbolic and sub-symbolic artificial intelligence • The various conventional knowledge representation techniques that have been mentioned so far can be labelled symbolic artificial intelligence.

  3. Symbolic and sub-symbolic artificial intelligence • The elements in the knowledge representation - production rules, frames, semantic net nodes and arcs, objects, or whatever - act as symbols, with each element corresponding to a similar element in the real-world knowledge. • Manipulations of these elements correspond to the manipulation of elements of real-world knowledge.

  4. Symbolic and sub-symbolic artificial intelligence • An alternative set of approaches, which have recently become popular, are known as sub-symbolic AI.

  5. Symbolic and sub-symbolic artificial intelligence • Here, the real-world knowledge is dispersed among the various elements of the representation • Only by operating on the representation as a whole can you retrieve or change the knowledge it contains.

  6. Symbolic and sub-symbolic artificial intelligence • The two main branches of sub-symbolic AI are • neural nets (also known as neural networks, or ANNs, standing for artificial neural nets) and • genetic algorithms.

  7. Symbolic and sub-symbolic artificial intelligence • The term connectionism is used to mean roughly the same as the study of neural nets.

  8. The biological inspiration for artificial neural nets • Neural networks are an attempt to mimic the reasoning, or information processing, to be found in the nervous tissue of humans and other animals.

  9. The biological inspiration for artificial neural nets • Such nervous tissue consists of large numbers (perhaps 100 billion in a typical human brain) of neurons (nerve cells), connected together by fibres called axons and dendrites to form networks, which process nerve impulses.

  10. [Figure: a cluster of neurons, showing how the axon from one connects to the dendrites of others; one sort of neuron - a pyramidal cell - with labelled apical dendrites, basal dendrites, synapses, cell body, and axon]

  11. The biological inspiration for artificial neural nets • The neuron acts as a signal processing device: • the dendrites act as inputs, • the axon acts as an output, • a connection between one of these fibres and an adjacent cell - known as a synapse - may be inhibitory or excitatory, i.e. may tend to cause the next cell to 'fire', or tend to stop it 'firing'.

  12. The biological inspiration for artificial neural nets • Obviously, neurons are extremely small, and made of living tissue rather than the metals and other inorganic substances that make up electronic circuits.

  13. The biological inspiration for artificial neural nets • The signals that pass along nerve fibres are electrochemical in nature, unlike the electrical signals in a computer. • The synapses which connect one neuron to another use chemicals - neurotransmitters - to transmit signals.

  14. The biological inspiration for artificial neural nets • Drugs which affect the brain typically do so by altering the chemistry of the synapses, making the synapses for a whole group of neurons either more efficient or less efficient.

  15. The biological inspiration for artificial neural nets • As a result, there are some important differences between neurons and the individual processing elements in computers (transistor-based switches): • Neurons are far slower than artificial neurons - neurodes - but far more efficient in energy terms.

  16. The biological inspiration for artificial neural nets • Brain tissue can do what it does (think, remember, perceive, control bodies etc) partly because of the electrochemical signals that it processes, and partly because of the chemical messages. • Artificial neural nets imitate the first of these, but not the second.

  17. The biological inspiration for artificial neural nets • The neurons in a brain work in parallel to perform their symbol processing (i.e., the individual neurons are operating simultaneously). This is quite unlike a conventional computer, where the programming steps are performed one after the other.

  18. The biological inspiration for artificial neural nets • The brains of all animals of any complexity consist of a number of these networks of neurons, each network specialised for a particular task. • There are many different types of neuron (over 100) in the human brain.

  19. Artificial neural nets • Note that neural nets are inspired by the organisation of brain tissue, but the resemblance is not necessarily very close. • Claims that a particular type of artificial neural net has been shown to demonstrate some property, and that this 'explains' the working of the human brain, should be treated with caution.

  20. Artificial neural nets • Note that a neural net is ideally implemented on a parallel computer (e.g. a Connection Machine). • However, since these are not widely used, most neural net research, and most commercial neural net packages, simulate parallel processing on a conventional computer.

  21. Neurodes • Neural nets are constructed out of artificial neurons (neurodes). The characteristics of these are: • each has one or more inputs (typically several). • Each input will have a weight, which measures how effective that input is at firing the neurode as a whole. These weights may be positive (i.e. increasing the chance that the neurode will fire) or negative (i.e. decreasing the chance that the neurode will fire).

  22. Neurodes • These weights may change as the neural net operates. • Inputs may come from the outside environment, or from other neurodes. • Each has one output (but this output may branch, and go to several locations). • An output may go to the outside environment, or to another neurode.

  23. Neurodes • More properties of neurodes: • each is characterised by a summation function and a transformation function.

  24. Neurodes • The summation function is a technique for finding the weighted average of all the inputs. • These vary in complexity, according to the type of neural net. • The simplest approach is to multiply each input value by its weight and add up all these figures.

  25. Neurodes • The transformation function is a technique for determining the output of the neurode, given the combined inputs. • Again, these vary in complexity, according to the type of neural net. • The simplest approach is to have a particular threshold value - but the sigmoid function, to be discussed later, is more common.
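The two functions just described can be sketched in a few lines of Python (an illustrative sketch, not from the original slides; the function names and the example weights are assumptions):

```python
def summation(inputs, weights):
    """Simplest summation function: multiply each input value by its
    weight and add up all these figures (a weighted sum)."""
    return sum(x * w for x, w in zip(inputs, weights))

def transformation(total, threshold=0.5):
    """Simplest transformation function: fire (output 1) only when the
    combined input exceeds a particular threshold value."""
    return 1 if total > threshold else 0

# A neurode with two inputs weighted 0.4 and 0.3, both inputs active:
output = transformation(summation([1, 1], [0.4, 0.3]))  # 0.7 > 0.5, so it fires
```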

  26. Neurodes • "[an artificial neuron is] a unit that accepts a bunch of numbers, and learns to respond by producing a number of its own." Aleksander & Morton, 1993.

  27. [Figure: the functions of a typical neurode - input links carrying weights Wj,i feed the summation function Σ, whose result passes through the transformation function to give the activation ai; ai is also the output sent along the output links]

  28. Artificial neural nets • Different sorts of transformation function are available, and are favoured for different designs of ANN. • The three most common choices are • the step function, • the sign function, and • the sigmoid function

  29. [Figure: graphs of the step function, the sign function, and the sigmoid function, each plotting the activation ai (between -1 and +1) against the input inpi]
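The three transformation functions can be written out directly (a minimal sketch; the threshold of 0 for the step function is an assumption for illustration):

```python
import math

def step(x, threshold=0.0):
    """Step function: output jumps from 0 to 1 at the threshold."""
    return 1 if x >= threshold else 0

def sign(x):
    """Sign function: output is -1 for negative input, +1 otherwise."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Sigmoid function: a smooth S-shaped curve between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))
```

Unlike the step and sign functions, the sigmoid is smooth and differentiable, which is one reason it is the more common choice.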

  30. Artificial neural nets • As far as networks are concerned, they may or may not be organised into layers. • Usually, they are.

  31. Artificial neural nets • Networks organised into layers may be subdivided into those that simply have an input layer and an output layer, and those that have one or more intermediate layers, known as hidden layers.

  32. [Figure] A neural net with one input layer and one output layer (both containing 6 neurodes)

  33. [Figure] A neural net with one input layer, one hidden layer, and one output layer (each containing 6 neurodes)
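A forward pass through a layered net like those pictured can be sketched as follows (illustrative only: the random weights, the choice of sigmoid, and the six-neurode layers are assumptions matching the diagrams):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_output(inputs, weights):
    """One layer's activations: each neurode forms a weighted sum of
    all the incoming values, then applies its transformation function."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws))) for ws in weights]

random.seed(0)  # reproducible illustrative weights
n = 6           # six neurodes per layer, as in the diagrams
hidden_weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
output_weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

inputs = [1, 0, 1, 0, 1, 0]
hidden = layer_output(inputs, hidden_weights)
outputs = layer_output(hidden, output_weights)  # six activations, each in (0, 1)
```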

  34. How networks are used • Each input in a network corresponds to a single attribute of a pattern or collection of data. • The data must be numerical: qualitative aspects of the data, or graphical data, must be pre-processed to convert it into numbers before the network can deal with it.

  35. How networks are used • Thus, an image is converted into pixels (a number could be converted into a 6x8 dot matrix, providing the input to 48 input neurodes). • Similarly, a fragment of sound would have to be digitised, and a set of commercial decision criteria would have to be coded before the net could deal with them.
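The 6x8 dot-matrix idea can be illustrated like this (the glyph below is invented for the example; any 6x8 pattern would do):

```python
# A 6-wide by 8-high dot matrix, roughly sketching the digit 1.
glyph = [
    "..##..",
    ".###..",
    "..##..",
    "..##..",
    "..##..",
    "..##..",
    "..##..",
    ".####.",
]

# Flatten to 48 numbers (1 = dark dot, 0 = blank), one per input neurode.
pixels = [1 if ch == "#" else 0 for row in glyph for ch in row]
```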

  36. How networks are used • Similarly, values must be assigned to the outputs from the output nodes before they can be treated as 'the solution' to whatever problem the network was given.

  37. How networks are used • Neural nets are not programmed in the conventional way (we do not have techniques for 'hand-programming' a net). • Instead, they go through a learning phase, during which the weights are modified, after which the weights are clamped and the system is ready to perform.

  38. How networks are used • Learning involves • entering examples of data as the input, • using some appropriate algorithm to modify the weights so that the output changes in the desired direction, • repeating this until the desired output is achieved.

  39. Example of supervised learning in a simple neural net • Suppose we have a net consisting of a single neurode. • The summation function is the standard version. • The transformation function is a step function.

  40. Example of supervised learning in a simple neural net • There are two inputs and one output. • We wish to teach this net the logical INCLUSIVE OR function, i.e. • if both inputs are 0, the output should be 0; • if either or both inputs are 1, the output should be 1.

  41. Example of supervised learning in a simple neural net • We will represent the values of the two inputs as X1 and X2, the desired output as Z, the weights on the two inputs as W1 and W2, and the actual output as Y.

  42. Example of supervised learning in a simple neural net • [Figure: a single neurode with inputs X1 and X2 (weights W1 and W2), actual output Y, and desired output Z]

  43. Example of supervised learning in a simple neural net • The learning process involves repeatedly applying the four possible patterns of input:
      X1  X2  Z
       0   0  0
       0   1  1
       1   0  1
       1   1  1

  44. Example of supervised learning in a simple neural net • The two weights W1 and W2 are initially set to random values. Each time a set of inputs is applied, a value D is calculated as D = Z - Y (the difference between the output you got and the output you wanted) and the weights are adjusted.

  45. Example of supervised learning in a simple neural net • The new weight Vi for a particular input i is given by Vi = Wi + a * D * Xi, where a is a parameter which determines how much the weights are allowed to fluctuate in a particular cycle, and hence how quickly learning takes place. • An actual learning sequence might be as follows:

  46. Example of supervised learning in a simple neural net • a = 0.2, threshold = 0.5
      Iteration  X1  X2  Z   W1   W2   Y    D   V1   V2
          1       0   0  0  0.1  0.3  0   0.0  0.1  0.3
                  0   1  1  0.1  0.3  0   1.0  0.1  0.5
                  1   0  1  0.1  0.5  0   1.0  0.3  0.5
                  1   1  1  0.3  0.5  1   0.0  0.3  0.5

  47. Example of supervised learning in a simple neural net • a = 0.2, threshold = 0.5
      Iteration  X1  X2  Z   W1   W2   Y    D   V1   V2
          2       0   0  0  0.3  0.5  0   0.0  0.3  0.5
                  0   1  1  0.3  0.5  0   1.0  0.3  0.7
                  1   0  1  0.3  0.7  0   1.0  0.5  0.7
                  1   1  1  0.5  0.7  1   0.0  0.5  0.7

  48. Example of supervised learning in a simple neural net • a = 0.2, threshold = 0.5
      Iteration  X1  X2  Z   W1   W2   Y    D   V1   V2
          3       0   0  0  0.5  0.7  0   0.0  0.5  0.7
                  0   1  1  0.5  0.7  1   0.0  0.5  0.7
                  1   0  1  0.5  0.7  0   1.0  0.7  0.7
                  1   1  1  0.7  0.7  1   0.0  0.7  0.7
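The whole learning sequence in slides 46-48 can be reproduced in a few lines (a sketch of the procedure described above; exact fractions are used so the weights match the tables rather than drifting through floating-point rounding):

```python
from fractions import Fraction as F

a, threshold = F(1, 5), F(1, 2)   # a = 0.2, threshold = 0.5
w1, w2 = F(1, 10), F(3, 10)       # the initial 'random' weights 0.1 and 0.3
patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # INCLUSIVE OR

for _ in range(10):               # each pass is one iteration of the tables
    errors = 0
    for (x1, x2), z in patterns:
        y = 1 if w1 * x1 + w2 * x2 > threshold else 0  # step transformation
        d = z - y                                      # D = Z - Y
        w1 += a * d * x1                               # Vi = Wi + a * D * Xi
        w2 += a * d * x2
        errors += (d != 0)
    if errors == 0:               # every pattern correct: weights are clamped
        break

print(float(w1), float(w2))       # 0.7 0.7, the values reached in iteration 3
```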
