
LECTURE II JST (Artificial Neural Networks): BASIC CONCEPTS



  1. LECTURE II JST: BASIC CONCEPTS. Amer Sharif, S.Si, M.Kom

  2. INTRODUCTION REVIEW • Neural Network definition: • A massively parallel distributed processor made up of simple processing units (neurons) • Stores experiential knowledge and makes it available for use • Knowledge is acquired from the environment through a learning process • Knowledge is stored as interneuron connection strengths (synaptic weights)

  3. INTRODUCTION REVIEW • Benefits: • Nonlinearity • Input-Output Mapping • Adaptivity • Evidential Response • Contextual Information • Fault Tolerance/Graceful Degradation • VLSI Implementability • Uniformity of Analysis and Design

  4. NEURON MODELLING Basic elements of a neuron: • A set of synapses or connecting links • Each synapse is characterized by its weight • Signal $x_j$ at synapse $j$ connected to neuron $k$ is multiplied by synaptic weight $w_{kj}$ • The bias is $b_k$ • An adder for summing the input signals • An activation function for limiting the output amplitude of the neuron

  5. NEURON MODELLING • Block diagram of a nonlinear neuron

  6. NEURON MODELLING Note • $x_1, x_2, \ldots, x_m$ are the input signals • $w_{k1}, w_{k2}, \ldots, w_{km}$ are the synaptic weights of neuron $k$ • $u_k$ is the linear combiner output • $b_k$ is the bias • $\varphi(\cdot)$ is the activation function • $y_k$ is the output signal of the neuron

  7. NEURON MODELLING • If $u_k = \sum_{j=1}^{m} w_{kj} x_j$ and the bias is substituted for a synapse where $x_0 = +1$ with weight $w_{k0} = b_k$, then $v_k = u_k + b_k = \sum_{j=0}^{m} w_{kj} x_j$ and $y_k = \varphi(v_k)$
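
To make slides 4-7 concrete, here is a minimal Python/NumPy sketch of a single neuron (all weights, inputs, and the choice of threshold activation below are illustrative, not taken from the slides); it also checks that folding the bias in as $x_0 = +1$, $w_{k0} = b_k$ leaves the induced local field unchanged:

```python
import numpy as np

def neuron_output(x, w, b, phi):
    """y_k = phi(v_k) with induced local field v_k = sum_j w_kj * x_j + b_k."""
    v = np.dot(w, x) + b
    return phi(v)

x = np.array([0.5, -1.0, 2.0])   # input signals x_1..x_m (illustrative)
w = np.array([0.4, 0.3, -0.2])   # synaptic weights w_k1..w_km (illustrative)
b = 0.1                          # bias b_k
step = lambda v: 1.0 if v >= 0 else 0.0   # threshold activation (slide 9)

# Bias-as-synapse form of slide 7: prepend x_0 = +1 with weight w_k0 = b_k
x_aug = np.concatenate(([1.0], x))
w_aug = np.concatenate(([b], w))
assert np.isclose(np.dot(w_aug, x_aug), np.dot(w, x) + b)

print(neuron_output(x, w, b, step))   # -> 0.0, since v_k = -0.4 < 0
```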

  8. NEURON MODELLING • Modified block diagram of a nonlinear neuron

  9. ACTIVATION FUNCTIONS Activation Function types: • Threshold Function: $\varphi(v) = 1$ if $v \geq 0$ and $\varphi(v) = 0$ if $v < 0$ • Also known as the McCulloch-Pitts model [Figure: threshold function plotted over $-2 \leq v \leq 2$]

  10. ACTIVATION FUNCTIONS • Piecewise-Linear Function: saturates at 0 and 1 with a linear region of unit slope in between, commonly written $\varphi(v) = 1$ for $v \geq +\tfrac{1}{2}$, $\varphi(v) = v + \tfrac{1}{2}$ for $-\tfrac{1}{2} < v < +\tfrac{1}{2}$, and $\varphi(v) = 0$ for $v \leq -\tfrac{1}{2}$

  11. ACTIVATION FUNCTIONS • Sigmoid Function • S-shaped • Sample logistic function: $\varphi(v) = \frac{1}{1 + \exp(-av)}$ • $a$ is the slope parameter: the larger $a$, the steeper the function • Differentiable everywhere [Figure: logistic curves growing steeper with increasing $a$]
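
The three activation functions of slides 9-11 in a few lines of Python (the piecewise-linear form follows the continuous unit-slope convention sketched above):

```python
import numpy as np

def threshold(v):
    """McCulloch-Pitts threshold: 1 if v >= 0, else 0 (slide 9)."""
    return np.where(v >= 0, 1.0, 0.0)

def piecewise_linear(v):
    """Saturates at 0 and 1 with a unit-slope linear region (slide 10)."""
    return np.clip(v + 0.5, 0.0, 1.0)

def logistic(v, a=1.0):
    """Logistic sigmoid (slide 11); larger a gives a steeper curve."""
    return 1.0 / (1.0 + np.exp(-a * v))

v = np.linspace(-2.0, 2.0, 9)
print(threshold(v))
print(piecewise_linear(v))
print(logistic(v, a=1.0))
print(logistic(v, a=5.0))   # visibly steeper around v = 0
```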

  12. NEURAL NETWORKS AS DIRECTED GRAPHS • Neural networks may be represented as directed graphs with four rules: • Synaptic links (linear I/O): $y_k = w_{kj} x_j$ • Activation links (nonlinear I/O): $y_k = \varphi(x_j)$ • Synaptic convergence (fan-in): incoming signals add, $y_k = y_i + y_j$ • Synaptic divergence (fan-out): the node signal $x_j$ is replicated along each outgoing link
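
The four signal-flow rules of slide 12, written as plain Python functions (the function names and the closing one-neuron example are a hypothetical illustration, not an API from the lecture):

```python
import math

def synaptic_link(x_j, w_kj):
    """Linear I/O: the link multiplies the node signal by its weight."""
    return w_kj * x_j

def activation_link(x_j, phi=lambda v: 1.0 / (1.0 + math.exp(-v))):
    """Nonlinear I/O: the link applies the activation function."""
    return phi(x_j)

def convergence(*incoming):
    """Fan-in: signals converging on a node add up."""
    return sum(incoming)

def divergence(x_j, n_links):
    """Fan-out: the node signal is replicated, undivided, on each link."""
    return [x_j] * n_links

# A whole neuron expressed with only these rules:
x, w, b = [0.5, -1.0], [0.4, 0.3], 0.1
v = convergence(*(synaptic_link(xj, wj) for xj, wj in zip(x, w)), b)
y = activation_link(v)
print(y)
```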

  13. NEURAL NETWORKS AS DIRECTED GRAPHS • Architectural graph: a partially complete directed graph [Figure: source nodes $x_0 = +1, x_1, x_2, \ldots, x_m$ feeding a single neuron with output $y_k$]

  14. FEEDBACK • Output of a system influences some of the input applied to the system • One or more closed paths of signal transmission around the system • Feedback plays an important role in recurrent networks

  15. FEEDBACK • Sample single-loop feedback system [Figure: input $x_j(n)$, combined signal $x'_j(n)$, fixed weight $w$, unit delay $z^{-1}$, output $y_k(n)$] • The output signal $y_k(n)$ is an infinite weighted summation of present and past samples of the input signal $x_j(n)$: $y_k(n) = \sum_{l=0}^{\infty} w^{l+1} x_j(n-l)$ • $w$ is the fixed weight • $z^{-1}$ is the unit-delay operator • $x_j(n-l)$ is the sample of the input signal delayed by $l$ time units

  16. FEEDBACK • Dynamic system behavior is determined by the weight $w$ • $|w| < 1$: the system is exponentially convergent/stable • The system possesses infinite memory: the output depends on input samples extending into the infinite past • Memory is fading: the influence of past samples is reduced exponentially with time $n$ [Figure: $y_k(n)$ versus $n$ for $|w| < 1$, starting at $w x_j(0)$]

  17. FEEDBACK • $w = 1$: the system is linearly divergent • $w > 1$: the system is exponentially divergent [Figure: $y_k(n)$ versus $n$ growing linearly for $w = 1$ and exponentially for $w > 1$, starting at $w x_j(0)$]
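
A short simulation of the single-loop system for the three regimes of $w$; the step input $x_j(n) = x_0$ for $n \geq 0$ is an assumption (the slides do not specify the input), chosen so that all three behaviors of slides 16-17 appear:

```python
import numpy as np

def feedback_response(w, n_steps, x0=1.0):
    """Evaluate y_k(n) = sum_{l=0}^{n} w**(l+1) * x_j(n-l) for a step
    input x_j(n) = x0, n >= 0 (terms with l > n vanish)."""
    return np.array([sum(w ** (l + 1) * x0 for l in range(n + 1))
                     for n in range(n_steps)])

for w in (0.5, 1.0, 1.5):
    print(f"w={w}:", np.round(feedback_response(w, 6), 3))
# w=0.5 converges geometrically toward 1.0 (fading, infinite memory),
# w=1.0 grows linearly, w=1.5 grows exponentially.
```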

  18. NETWORK ARCHITECTURES • Single-Layer Feedforward Networks • Neurons are organized in layers • “Single-layer” refers to the output neurons • Source nodes supply signals to the output neurons but not vice versa • The network is feedforward or acyclic [Figure: input layer of source nodes projecting onto an output layer of neurons]

  19. NETWORK ARCHITECTURES • Multilayer Feedforward Networks • One or more hidden layers • Hidden neurons enable extraction of higher-order statistics • The network acquires a global perspective due to the extra set of synaptic connections and neural interactions • 7-4-2 fully connected network: • 7 source nodes • 4 hidden neurons • 2 output neurons [Figure: input layer of source nodes, layer of hidden neurons, layer of output neurons]
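
A forward pass through the 7-4-2 fully connected network of slide 19 (random weights purely for illustration; a trained network would obtain them through a learning process):

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(v, a=1.0):
    return 1.0 / (1.0 + np.exp(-a * v))

W_hid = rng.normal(size=(4, 7))   # 4 hidden neurons, each fed by 7 source nodes
b_hid = rng.normal(size=4)
W_out = rng.normal(size=(2, 4))   # 2 output neurons, each fed by 4 hidden neurons
b_out = rng.normal(size=2)

x = rng.normal(size=7)                 # signals from the 7 source nodes
h = logistic(W_hid @ x + b_hid)        # hidden layer extracts higher-order features
y = logistic(W_out @ h + b_out)        # network outputs
print(np.round(y, 3))
```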

  20. NETWORK ARCHITECTURES • Recurrent Networks • At least one feedback loop • Feedback loops affect the learning capability and performance of the network [Figure: recurrent network whose outputs return to the inputs through unit-delay operators $z^{-1}$]
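
A minimal recurrent loop (the layer sizes, tanh activation, and feedback scaling are illustrative assumptions, not an architecture from the slide): the previous outputs re-enter through the unit delays:

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(3, 4))         # input weights
W_fb = 0.5 * rng.normal(size=(3, 3))   # feedback weights on the delayed outputs

def recurrent_step(x, y_prev):
    """One time step: y(n) = phi(W_in x(n) + W_fb y(n-1)).
    y_prev plays the role of the z^-1 (unit-delay) outputs."""
    return np.tanh(W_in @ x + W_fb @ y_prev)

y = np.zeros(3)                        # delayed outputs start at rest
for n in range(5):
    y = recurrent_step(rng.normal(size=4), y)
print(np.round(y, 3))
```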

  21. KNOWLEDGE REPRESENTATION • Definition of Knowledge: • Knowledge refers to stored information or models used by a person or a machine to interpret, predict, and appropriately respond to the outside world • Issues: • What information is actually made explicit • How information is physically encoded for subsequent use • Knowledge representation is goal-directed • A good solution depends on a good representation of knowledge

  22. KNOWLEDGE REPRESENTATION • Challenges faced by Neural Networks: • Learn a model of the world/environment • Keep the model consistent with the real world so as to achieve the desired goals • Neural Networks may learn from a set of observations in the form of input-output pairs (training data/training sample) • The input is an input signal and the output is the corresponding desired response

  23. KNOWLEDGE REPRESENTATION • Handwritten digit recognition problem • Input signal: an image of one of the 10 digits • Goal: identify the image presented to the network as input • Design steps: • Select the appropriate architecture • Train the network with a subset of examples (learning phase) • Test the network by presenting data/digit images not seen before, then compare the response of the network with the actual identity of the digit image presented (generalization phase)
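
A hedged end-to-end sketch of the three design steps using scikit-learn's small 8x8 digit images and an MLP; the slides name no dataset, library, or architecture, so every concrete choice below is an assumption:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)             # images of the 10 digits

# Hold out digit images the network will never see during training
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 1: select an architecture (one hidden layer of 32 neurons, illustrative)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

net.fit(X_tr, y_tr)                             # Step 2: learning phase
print(net.score(X_te, y_te))                    # Step 3: generalization phase,
                                                # fraction of unseen digits
                                                # identified correctly
```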

  24. KNOWLEDGE REPRESENTATION • Difference from a classical pattern classifier: • Classical pattern-classifier design steps: • Formulate a mathematical model of the problem • Validate the model with real data • Build the system based on the model • Neural Network design is: • Based on real-life data • The data may “speak for itself” • The neural network not only provides a model of the environment but also processes the information

  25. ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS • AI systems must be able to: • Store knowledge • Use stored knowledge to solve problems • Acquire new knowledge through experience • AI components: • Representation • Knowledge is represented in a language of symbolic structures • Symbolic representation makes it relatively easy for human users to understand

  26. ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS • Reasoning • Able to express and solve a broad range of problems • Able to make use of explicit as well as implicit information known to it • Has a control mechanism that determines which operation to apply to a particular problem, when a solution has been obtained, or when further work on the problem should be terminated • Rules, Data, and Control: • Rules operate on Data • Control operates on Rules • The Travelling Salesman Problem: • Data: possible tours and their costs • Rules: ways to go from city to city • Control: which Rules to apply and when
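
One way to read the Data/Rules/Control split on a toy Travelling Salesman instance (the random cities, the swap rule, and the greedy stopping control are all illustrative assumptions, not from the lecture):

```python
import itertools, random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(8)]

def cost(tour):
    """Data: a tour (ordering of cities) together with its travel cost."""
    return sum(((cities[a][0] - cities[b][0]) ** 2 +
                (cities[a][1] - cities[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def swap_rule(tour, i, j):
    """Rules: one way to move from tour to tour (swap two cities)."""
    t = list(tour)
    t[i], t[j] = t[j], t[i]
    return t

# Control: decide which rule application to accept and when to stop.
tour, improved = list(range(len(cities))), True
while improved:                       # terminate when no swap improves the tour
    improved = False
    for i, j in itertools.combinations(range(len(tour)), 2):
        candidate = swap_rule(tour, i, j)
        if cost(candidate) < cost(tour):
            tour, improved = candidate, True
print(tour, round(cost(tour), 3))
```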

  27. ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS • Learning • Inductive learning: determine rules from raw data and experience • Deductive learning: use rules to determine specific facts [Figure: learning model linking the Environment, Learning element, Knowledge Base, and Performance element]

  28. ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
