ARTIFICIAL NEURAL NETWORK ********************
INTRODUCTION • The most intelligent device - "Human Brain". • The machine that revolutionized the whole world – "Computer". • The inefficiencies of the computer have led to the evolution of the "Artificial Neural Network".
DEFINITION • A neural network is designed as an interconnected system of processing elements, each with a limited number of inputs and outputs. Rather than being programmed, these systems learn to recognize patterns.
STRUCTURE OF HUMAN BRAIN • The brain is divided into two parts – left & right. • Left part – rules, concepts & calculations. • It follows "Rule-Based Learning", so it is similar to an "Expert System". • Right part – pictures, images & control. • It follows "Experience-Based Learning", so it is similar to a "Neural Network".
NEURONS • Size • No. of neurons • Process
BASIC ANN MODELS • Basic ANN models aim to implement in a machine the human capability for real-time visual perception, speech understanding and sensory tasks.
WHAT IS ANN • ANNs are classified into two types: • Recurrent • Non-recurrent
HOW ANN DIFFERS FROM A CONVENTIONAL COMPUTER • Conventional computer – a single processor sequentially dictates every piece of the action. • ANN – a very large number of processing elements, each of which individually deals with a piece of a big problem.
ANN V\S VON NEUMANN COMPUTER
• ANN: Trained by learning examples | Von Neumann: Programmed with instructions
• ANN: Memory & processing elements collocated | Von Neumann: Memory & processing elements separate
• ANN: Self-organizing during learning | Von Neumann: Software dependent
• ANN: Knowledge stored is adaptable | Von Neumann: Knowledge stored in addressed memory
• ANN: Processing is anarchic | Von Neumann: Processing is autocratic
• ANN: Speed in milliseconds | Von Neumann: Speed in nanoseconds
PERCEPTRON • Inputs are weighted and summed, then passed to a scaling (threshold) function that decides whether the perceptron fires.
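A minimal sketch of this idea in Python; the weights, threshold and inputs below are illustrative assumptions, not values from the slides:

```python
import numpy as np

def perceptron(inputs, weights, threshold):
    """Weighted sum of the inputs passed through a step (scaling) function."""
    total = np.dot(inputs, weights)          # sum the weighted inputs
    return 1 if total >= threshold else 0    # fire only if the sum clears the threshold

# Illustrative values: a 3-input perceptron acting as a simple threshold gate.
print(perceptron(np.array([1, 1, 0]), np.array([0.6, 0.6, 0.6]), threshold=1.0))  # -> 1
```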
LEARNING LAWS
• HEBB'S LAW
• HOPFIELD'S LAW
HEBB'S LAW • If a neuron receives an input from another neuron and both are highly active, the weight between the neurons should be strengthened.
HOPFIELD'S LAW • If the desired output and the input are both highly active or both inactive, increment the connection weight by the learning rate; otherwise, decrement the weight by the learning rate.
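A minimal sketch of the two weight-update rules above (Hebb's and Hopfield's); the learning rate and the 0.5 "highly active" threshold are illustrative assumptions, not values from the slides:

```python
LEARNING_RATE = 0.1
ACTIVE = 0.5  # assumed activity level above which a neuron counts as "highly active"

def hebb_update(w, pre, post):
    """Hebb's law: strengthen the weight when both neurons are highly active."""
    if pre > ACTIVE and post > ACTIVE:
        w += LEARNING_RATE * pre * post
    return w

def hopfield_update(w, x, desired):
    """Hopfield's law: increment when input and desired output agree, else decrement."""
    both_active = x > ACTIVE and desired > ACTIVE
    both_inactive = x <= ACTIVE and desired <= ACTIVE
    return w + LEARNING_RATE if (both_active or both_inactive) else w - LEARNING_RATE

print(hebb_update(0.2, pre=0.9, post=0.8))       # weight strengthened
print(hopfield_update(0.2, x=0.9, desired=0.1))  # weight decremented
```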
BASIC STRUCTURE OF ANN
NETWORK ARCHITECTURE
1. SINGLE LAYER FEED FORWARD ANN
2. MULTILAYER FEED FORWARD ANN
3. RECURRENT ANN
1. SINGLE LAYER FEED FORWARD ANN • Neurons are organized in the form of a single layer. • This is the simplest form, because the network is of the feed-forward (acyclic) type.
2. MULTILAYER FEED FORWARD ANN • More than one hidden layer of connected neurons (units) is present. • If the size of the input layer is very large, the hidden layers extract higher-order statistics, which is valuable.
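A minimal forward-pass sketch of a multilayer feed-forward network; the layer sizes, random weights and sigmoid activation are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Propagate input x through each (weights, biases) layer in turn."""
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)   # each layer feeds only the next one (acyclic)
    return a

rng = np.random.default_rng(0)
# Illustrative 4-3-2 network: one hidden layer of 3 units, 2 outputs.
layers = [(rng.normal(size=(3, 4)), np.zeros(3)),
          (rng.normal(size=(2, 3)), np.zeros(2))]
print(forward(np.array([0.5, 0.1, 0.9, 0.3]), layers))
```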
LEARNING OF ANN
1. LEARNING WITH TEACHER
2. LEARNING WITHOUT TEACHER
3. LEARNING TASKS
1. LEARNING WITH TEACHER • The teacher teaches the network by presenting the environment in the form of pre-calculated input-output examples. • The ANN observes the input and compares its output with the predefined output. • The difference is calculated and referred to as the error signal.
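A minimal sketch of this teacher-driven loop using a delta-rule update on a single linear unit; the example data, learning rate and number of epochs are illustrative assumptions:

```python
import numpy as np

# Illustrative pre-calculated input/output examples (the "teacher's" data).
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 1.0, 2.0, 0.0])   # here the target is simply the sum of the inputs

w = np.zeros(2)
lr = 0.1
for epoch in range(50):
    for x_i, t in zip(X, y):
        out = w @ x_i              # the network's observed output
        error = t - out            # error signal: desired minus actual output
        w += lr * error * x_i      # adjust the weights to reduce the error
print(w)  # approaches [1, 1]
```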
2. LEARNING WITHOUT TEACHER • Involves exploring the environment, because no correct input-output response is available. • The system receives an input from the environment and produces an output response.
UNSUPERVISED LEARNING • Also called self-organizing learning, because there is no external teacher. • The network tunes itself to the regularities of the input data.
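A minimal sketch of self-organizing learning via simple competitive units; the two-cluster data, unit initialization and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two input "regularities": 50 points around (0, 0) and 50 around (5, 5).
data = rng.normal(loc=[[0, 0]] * 50 + [[5, 5]] * 50, scale=0.5)
units = data[[0, -1]].copy()   # two competing units seeded from the data; no teacher
lr = 0.05

for x in rng.permutation(data):
    winner = np.argmin(np.linalg.norm(units - x, axis=1))  # unit closest to the input
    units[winner] += lr * (x - units[winner])               # winner tunes toward the input

print(units)  # each unit settles near one of the input regularities
```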
3. LEARNING TASKS • Pattern recognition: we can recognize a person encountered during training by their voice or smell, and assign them to a particular class. • The decision space is divided into regions, and each region is associated with a class.
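A minimal sketch of dividing the decision space into class regions, here with a nearest-centroid rule; the class names, centroids and query point are illustrative assumptions:

```python
import numpy as np

# Illustrative class centroids: each one defines a region of the decision space.
centroids = {"person_A": np.array([1.0, 2.0]), "person_B": np.array([4.0, 0.5])}

def classify(features):
    """Assign the input to the class whose region (centroid) is nearest."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

print(classify(np.array([3.5, 1.0])))  # -> person_B
```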
* CONTROL
* ADAPTATION
* GENERALIZATION
* PROBABILISTIC ANN
CONTROL • A critical part of a system is maintained by a controller. • Learning the control should be supervised because, "after all, the human brain is a computer".
ADAPTATION • The information generated by the environment varies with time. • The network must also track these variations of the environment and never stop adapting. • This learning is called continuous learning, or learning on the fly.
GENERALIZATION • With as many training examples as possible, the input-output mapping computed by the network is correct. • With too many passes over the same input-output examples, the network may end up memorizing the training data. • It then fits the data it has seen but does not find the true underlying function. • If the network is over-trained, it loses the ability to generalize between similar input-output patterns.
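A minimal sketch of checking generalization by holding out validation examples; the synthetic data, linear model and training settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=40)  # noisy target

# Hold out part of the examples: training error alone cannot reveal memorization.
X_train, y_train, X_val, y_val = X[:30], y[:30], X[30:], y[30:]

w = np.zeros(3)
for _ in range(200):
    grad = X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= 0.1 * grad

train_err = np.mean((X_train @ w - y_train) ** 2)
val_err = np.mean((X_val @ w - y_val) ** 2)
print(train_err, val_err)  # similar values suggest the learned mapping generalizes
```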
ADVANTAGES • Several ANN models are available to choose from for a particular problem. • They are very fast. • Increased accuracy results in cost savings. • They can represent any function, therefore they are called "universal approximators". • ANNs are able to learn from representative examples by back-propagation of error.
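A minimal sketch of learning by back-propagation of error on one hidden layer; the XOR task, network size, learning rate and iteration count are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative task: learn XOR with one hidden layer of 4 units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # forward pass through the hidden layer
    out = sigmoid(h @ W2 + b2)            # network output
    d_out = (out - y) * out * (1 - out)   # error signal pushed back from the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # ...and propagated back to the hidden layer
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# With most random initializations the outputs move toward [[0], [1], [1], [0]].
print(out.round(2))
```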
LIMITATIONS • LOW LEARNING RATE: problems require large and complex networks. • FORGETFULNESS: old data may be forgotten when training on new data. • IMPRECISION: they do not provide precise numerical answers. • BLACK-BOX APPROACH: we cannot see how the trained network transforms the data internally. • LIMITED FLEXIBILITY: an implementation is tied to the one system it was built for.
APPLICATIONS • TIME SERIES PREDICTION: • 1. Forecasting: short-term evolution. • 2. Modelling: long-term features. • 3. Characterization: defining fundamental properties. • SPEECH GENERATION: the network was trained to pronounce written English text. • SPEECH RECOGNITION: speech is converted into written text by a Markov model using some symbols. • AUTONOMOUS VEHICLE NAVIGATION: a vision-based robot-guidance method.
HANDWRITING RECOGNITION: the hidden layers reduce the number of free parameters and enhance the features provided by the writing. • IN THE ROBOTICS FIELD: an AI device which behaves just like a human.
CONCLUSION • At last I want to say that in 200 or 300 years neural networks may be so developed that they can find the errors of even human beings, rectify those errors and make human beings more intelligent.