
  1. CHAPTER 16 Adaptive Resonance Theory

  2. Objectives • There is no guarantee that, as more inputs are applied to the competitive network, the weight matrix will eventually converge. • Present a modified type of competitive learning, called adaptive resonance theory (ART), which is designed to overcome the problem of learning stability.

  3. Theory & Examples • A key problem of the Grossberg network and the competitive network is that they do NOT always form stable clusters (or categories). • The learning instability occurs because of the network's adaptability (or plasticity), which causes prior learning to be eroded by more recent learning.

  4. Stability / Plasticity • How can a system be receptive to significant new patterns and yet remain stable in response to irrelevant patterns? • Grossberg and Carpenter developed the ART to address the stability/plasticity dilemma. • The ART networks are based on the Grossberg network of Chapter 15.

  5. Key Innovation The key innovation of ART is the use of "expectations." • As each input is presented to the network, it is compared with the prototype vector that it most closely matches (the expectation). • If the match between the prototype and the input vector is NOT adequate, a new prototype is selected. In this way, previously learned memories (prototypes) are not eroded by new learning.

  6. Overview • Grossberg competitive network • Basic ART architecture

  7. Grossberg Network • The L1-L2 connections are instars, which perform a clustering (or categorization) operation. When an input pattern is presented, it is multiplied (after normalization) by the L1-L2 weight matrix. • A competition is performed at Layer 2 to determine which row of the weight matrix is closest to the input vector. That row is then moved toward the input vector. • After learning is complete, each row of the L1-L2 weight matrix is a prototype pattern, which represents a cluster (or a category) of input vectors.
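
A minimal sketch of this clustering step, assuming a normalized input and a simple winner-take-all rule; the function name competitive_update and the learning_rate parameter are illustrative, not part of the network equations:

```python
import numpy as np

def competitive_update(W, p, learning_rate=0.5):
    """One step of competitive (instar) learning: find the prototype row
    closest to the (normalized) input, then move that row toward the input."""
    p = p / (np.linalg.norm(p) + 1e-12)            # normalize the input pattern
    winner = int(np.argmax(W @ p))                 # Layer 2 competition: largest inner product wins
    W[winner] += learning_rate * (p - W[winner])   # move the winning row toward the input
    return winner
```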

  8. ART Networks -- 1 • Learning in ART networks also occurs through a set of feedback connections from Layer 2 to Layer 1. These connections are outstars, which perform pattern recall. • When a node in Layer 2 is activated, it reproduces a prototype pattern (the expectation) at Layer 1. • Layer 1 then performs a comparison between the expectation and the input pattern. • When the expectation and the input pattern are NOT closely matched, the orienting subsystem causes a reset in Layer 2.

  9. ART Networks -- 2 • The reset disables the current winning neuron, and the current expectation is removed. • A new competition is then performed in Layer 2, while the previous winning neuron remains disabled. • The new winning neuron in Layer 2 projects a new expectation onto Layer 1, through the L2-L1 connections. • This process continues until the L2-L1 expectation provides a close enough match to the input pattern.
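
The search-and-reset cycle of the last two slides, as a schematic sketch; it uses a simple match ratio in place of the full Layer 1 and orienting-subsystem dynamics, and match_threshold is an assumed stand-in for the vigilance:

```python
import numpy as np

def art_search(p, prototypes, match_threshold=0.75):
    """Schematic ART search: try prototypes in order of similarity until one
    matches the input well enough; otherwise report that a new category is needed."""
    disabled = set()
    while len(disabled) < len(prototypes):
        scores = [(-np.inf if j in disabled else prototypes[j] @ p)
                  for j in range(len(prototypes))]
        j = int(np.argmax(scores))                         # Layer 2 competition
        expectation = prototypes[j]                        # L2-L1 expectation
        match = np.sum(np.logical_and(p, expectation)) / max(np.sum(p), 1)
        if match >= match_threshold:                       # resonance: adequate match
            return j
        disabled.add(j)                                    # orienting subsystem: reset
    return None                                            # no prototype matches: allocate a new one
```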

  10. ART Subsystems • Layer 1: comparison of the input pattern and the expectation. • L1-L2 connections (instars): perform the clustering operation; each row of W1:2 is a prototype pattern. • Layer 2: competition (contrast enhancement). • L2-L1 connections (outstars): perform pattern recall (the expectation); each column of W2:1 is a prototype pattern. • Orienting subsystem: causes a reset when the expectation does not match the input pattern, disabling the current winning neuron.

  11. Layer 1

  12. Layer 1 Operation • Equation of operation of Layer 1: a shunting equation driven by an excitatory and an inhibitory input. • Output of Layer 1: a1, a binary vector whose ith element is 1 when n1i is positive. • Excitatory input: input pattern + L2-L1 expectation. • Inhibitory input: gain control from Layer 2.

  13. Excitatory Input to L1 • The excitatory input is p + W2:1a2. • Assume that the jth neuron in Layer 2 has won the competition, i.e., a2j = 1 and all other elements of a2 are zero; then W2:1a2 is simply column j of W2:1. • The excitatory input to Layer 1 is therefore the sum of the input pattern and the L2-L1 expectation.

  14. Inhibitory Input to L1 • The inhibitory input is the gain control from Layer 2. • The inhibitory input to each neuron in Layer 1 is the sum of all of the outputs of Layer 2. • The gain control to Layer 1 will be one when Layer 2 is active (one neuron has won the competition), and zero when Layer 2 is inactive (all neurons have zero output).
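
Putting the last three slides together, a hedged reconstruction of the Layer 1 equation of operation (written per element, in the standard shunting form; +b1 and -b1 are the upper and lower limits, and the original slide's exact symbols are not reproduced here):

```latex
\[
\varepsilon \frac{dn^{1}_{i}}{dt}
  = -\,n^{1}_{i}
  + \bigl({}^{+}b^{1} - n^{1}_{i}\bigr)
    \Bigl( p_{i} + \sum_{j} w^{2:1}_{ij}\, a^{2}_{j} \Bigr)
  - \bigl(n^{1}_{i} + {}^{-}b^{1}\bigr) \sum_{j} a^{2}_{j}
\]
% excitatory input: input pattern plus the L2-L1 expectation
% inhibitory input: gain control (sum of the Layer 2 outputs)
```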

  15. Steady State Analysis -- 1 • The response of neuron i in Layer 1 is found by setting the derivative to zero. • Case 1: Layer 2 is inactive – each a2j = 0, so the excitatory input reduces to pi and the inhibitory (gain control) input is zero. In steady state, n1i is positive when pi = 1 and zero when pi = 0: if pi = 1 then a1i = 1, and if pi = 0 then a1i = 0. The output of Layer 1 is the same as the input pattern, a1 = p.

  16. Steady State Analysis -- 2 • Case 2: Layer 2 is active – a2j = 1 for the winning neuron j and all other elements of a2 are zero. In steady state the excitatory input is pi + w2:1ij and the inhibitory (gain control) input is 1. The role of Layer 1 is to combine the input vector with the expectation from Layer 2. Since both the input and the expectation are binary patterns, the combination is a logical AND of the two vectors: a1i = 0 if either pi or w2:1ij is equal to 0, and a1i = 1 if both pi and w2:1ij are equal to 1.

  17. Layer 1 Example • Given an input pattern p and an L2-L1 weight matrix W2:1, assume that Layer 2 is active and neuron 2 of Layer 2 wins the competition. • The Layer 1 output is then the element-by-element AND of p with column 2 of W2:1.
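
A minimal numerical sketch of this combination, using hypothetical values for p and W2:1 rather than the slide's actual numbers:

```python
import numpy as np

# Hypothetical input pattern and L2-L1 weight matrix (columns are prototypes).
p = np.array([1, 0, 1, 1])
W21 = np.array([[1, 1],
                [1, 0],
                [0, 1],
                [1, 1]])

j = 1  # neuron 2 of Layer 2 wins (0-based column index)
expectation = W21[:, j]                          # L2-L1 expectation: column j of W2:1
a1 = np.logical_and(p, expectation).astype(int)  # Layer 1 output: p AND expectation
print(a1)  # -> [1 0 1 1]
```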

  18. Response of Layer 1

  19. Layer 2 (receives a reset input from the orienting subsystem)

  20. Layer 2 Operation • Equation of operation of Layer 2: a shunting equation whose excitatory input is the on-center feedback plus the adaptive instar input W1:2a1, and whose inhibitory input is the off-surround feedback. • The rows of the adaptive weights W1:2, after training, will represent the prototype patterns.
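
A hedged reconstruction of the Layer 2 equation of operation consistent with the labels above: on-center feedback +W2 and the adaptive instar input W1:2a1 excite the layer, while off-surround feedback -W2 inhibits it (f2 is the Layer 2 transfer function; the exact constants on the original slide may differ):

```latex
\[
\varepsilon \frac{d\mathbf{n}^{2}}{dt}
  = -\,\mathbf{n}^{2}
  + \bigl({}^{+}b^{2} - \mathbf{n}^{2}\bigr) \circ
      \bigl\{ {}^{+}\mathbf{W}^{2}\, \mathbf{f}^{2}(\mathbf{n}^{2})
            + \mathbf{W}^{1:2}\, \mathbf{a}^{1} \bigr\}
  - \bigl(\mathbf{n}^{2} + {}^{-}b^{2}\bigr) \circ
      \bigl[ {}^{-}\mathbf{W}^{2}\, \mathbf{f}^{2}(\mathbf{n}^{2}) \bigr]
\]
% on-center feedback (+W2) and instar input (W1:2 a1) are excitatory;
% off-surround feedback (-W2) is inhibitory.
```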

  21. Layer 2 Example • Let

  22. Response of Layer 2

  23. Orienting Subsystem • Determine if there is a sufficient match between the L2-L1 expectation (a1) and the input pattern (p)

  24. Orienting Subsystem Operation • Equation of operation of the Orienting Subsystem: a shunting equation whose excitatory input is proportional to the input pattern p and whose inhibitory input is proportional to the Layer 1 output a1. • Whenever the excitatory input is larger than the inhibitory input, the Orienting Subsystem will be driven on and a reset is sent to Layer 2.

  25. Steady State Operation • In steady state, n0 is positive when the excitatory term α||p||² exceeds the inhibitory term β||a1||². • Let ρ = α/β (the vigilance); then n0 > 0 if ||a1||²/||p||² < ρ. This is the condition that will cause a reset of Layer 2.
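
A hedged reconstruction of the orienting-subsystem equation and the reset condition it implies, assuming binary p and a1 (so that the element sums equal the squared norms) and gains α and β on the two inputs:

```latex
\[
\varepsilon \frac{dn^{0}}{dt}
  = -\,n^{0}
  + \bigl({}^{+}b^{0} - n^{0}\bigr)\,\alpha \sum_{j} p_{j}
  - \bigl(n^{0} + {}^{-}b^{0}\bigr)\,\beta \sum_{j} a^{1}_{j}
\]
\[
\text{reset } (n^{0} > 0)
  \quad\Longleftrightarrow\quad
  \frac{\lVert \mathbf{a}^{1} \rVert^{2}}{\lVert \mathbf{p} \rVert^{2}}
  < \frac{\alpha}{\beta} = \rho
\]
```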

  26. Vigilance Parameter • The term ρ is called the vigilance parameter and must fall in the range 0 < ρ ≤ 1. • If ρ is close to 1, a reset will occur unless a1 is close to p. • If ρ is close to 0, a1 need not be close to p to prevent a reset. • Whenever Layer 2 is active, the orienting subsystem will cause a reset when there is enough of a mismatch between p and a1.

  27. Orienting Subsystem Example • Suppose that p, a1, α, and β take the values given in the example. • In this case a reset signal will be sent to Layer 2, since n0 is positive.
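
A small sketch of the reset test, again with hypothetical vectors and a hypothetical vigilance value:

```python
import numpy as np

def needs_reset(p, a1, rho):
    """Orienting-subsystem test: reset Layer 2 when the match ratio
    ||a1||^2 / ||p||^2 falls below the vigilance rho (binary vectors assumed)."""
    return np.sum(a1) / np.sum(p) < rho

p  = np.array([1, 1, 1, 0])   # hypothetical input pattern
a1 = np.array([1, 0, 0, 0])   # hypothetical Layer 1 output (p AND expectation)
print(needs_reset(p, a1, rho=0.75))  # -> True: 1/3 < 0.75, so a reset occurs
```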

  28. Learning Law • There are two separate learning laws: one for the L1-L2 connections (instar) and another for the L2-L1 connections (outstar). • Both the L1-L2 connections and the L2-L1 connections are updated at the same time, whenever the input and the expectation have an adequate match. • The process of matching, and subsequent adaptation, is referred to as resonance.

  29. Subset / Superset Dilemma • Suppose that W1:2 contains two prototype patterns, where the 1's of the first prototype are a subset of the 1's of the second. • If the output of Layer 1, a1, equals the first prototype, then the two elements of the input to Layer 2, W1:2a1, are equal. • Both prototype vectors have the same inner product with a1, even though the 1st prototype is identical to a1 and the 2nd prototype is not. This is called the subset/superset dilemma.

  30. Subset / Superset Solution • One solution to the subset/superset dilemma is to normalize the prototype patterns. • The input to Layer 2 is then computed with normalized rows of W1:2, each row scaled according to the number of 1's it contains. • The first prototype now has the largest inner product with a1, so the first neuron in Layer 2 will be active.
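
A small sketch of the dilemma and of the normalization fix, using hypothetical prototype patterns in which the first is a subset of the second:

```python
import numpy as np

# Hypothetical prototypes: row 0 is a subset of row 1.
W12 = np.array([[1, 1, 0],
                [1, 1, 1]], dtype=float)
a1 = np.array([1, 1, 0], dtype=float)    # Layer 1 output, identical to prototype 0

print(W12 @ a1)                          # -> [2. 2.]  both neurons tie: the dilemma

# Fix: normalize each prototype row (here, by its number of 1's).
W12_norm = W12 / W12.sum(axis=1, keepdims=True)
print(W12_norm @ a1)                     # -> [1.    0.667]  prototype 0 now wins
```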

  31. Learning Law: L1-L2 • Instar learning with competition. • When neuron i of Layer 2 is active, the ith row of W1:2 is moved in the direction of a1. • Under this learning law the elements of the row compete with one another, and therefore the row remains normalized.

  32. Fast Learning • For fast learning, we assume that the outputs of Layer 1 and Layer 2 remain constant until the weights reach steady state. • Setting the weight derivative to zero gives two cases, depending on whether the corresponding element of a1 is 1 or 0; combining them, the winning row of W1:2 converges to a normalized copy of a1.
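
A hedged statement of the resulting fast-learning update for the winning row, in the standard ART1 form (ζ > 1 is the parameter that controls the normalization; the slide's intermediate steps are not reproduced):

```latex
\[
\mathbf{w}^{1:2}_{j}
  \;\longrightarrow\;
  \frac{\zeta\, \mathbf{a}^{1}}{\zeta - 1 + \lVert \mathbf{a}^{1} \rVert^{2}}
  \qquad \text{(row $j$ of $\mathbf{W}^{1:2}$, $j$ the winning Layer 2 neuron)}
\]
```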

  33. Learning Law: L2-L1 • Typical outstar learning: if neuron j in Layer 2 is active (has won the competition), then column j of W2:1 is moved toward a1. • Fast learning: in steady state, column j of W2:1 converges to the output of Layer 1, a1, which is a combination of the input pattern and the appropriate prototype pattern. The prototype pattern is thus modified to incorporate the current input pattern.
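
A hedged reconstruction of the outstar law and its fast-learning limit (standard form; any rate constants on the original slide are omitted):

```latex
\[
\frac{d\mathbf{w}^{2:1}_{j}}{dt}
  = a^{2}_{j} \bigl[ \mathbf{a}^{1} - \mathbf{w}^{2:1}_{j} \bigr]
  \qquad\Longrightarrow\qquad
  \mathbf{w}^{2:1}_{j} \;\longrightarrow\; \mathbf{a}^{1}
  \quad \text{(column $j$ of $\mathbf{W}^{2:1}$, fast learning)}
\]
```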

  34. ART1 Algorithm Summary 0. Initialization: the initial W2:1 is set to all 1's; every element of the initial W1:2 is set to the same small normalized value. 1. Present an input pattern to the network. Since Layer 2 is not active on initialization, the output of Layer 1 is a1 = p. 2. Compute the input to Layer 2, W1:2a1, and activate the neuron in Layer 2 with the largest input. In case of a tie, the neuron with the smallest index is declared the winner.

  35. Algorithm Summary Cont. 3. Compute the L2-L1 expectation (assume that neuron j of Layer 2 is activated): column j of W2:1. 4. Now that Layer 2 is active, adjust the Layer 1 output to include the L2-L1 expectation: a1 = p AND (column j of W2:1). 5. Determine the degree of match between the input pattern and the expectation (Orienting Subsystem): ||a1||²/||p||². 6. If ||a1||²/||p||² < ρ, then set a2j = 0, inhibit neuron j until an adequate match occurs (resonance), and return to step 1. Otherwise, continue with step 7.

  36. Algorithm Summary Cont. 7. When resonance has occurred, update row j of W1:2 (move it toward the normalized a1). 8. Update column j of W2:1 (set it equal to a1). 9. Remove the input pattern, restore all inhibited neurons in Layer 2, and return to step 1. • The input patterns continue to be applied to the network until the weights stabilize (do not change). • The ART1 network can only be used for binary input patterns.
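
A compact Python sketch of the ART1 algorithm summarized in the last three slides, using fast learning. The class name ART1, the parameters rho and zeta, and the row update zeta*a1/(zeta-1+||a1||²) follow the standard textbook presentation and should be read as an illustrative reconstruction, not a literal transcription of the slides:

```python
import numpy as np

class ART1:
    """Minimal ART1 sketch (fast learning, binary inputs)."""

    def __init__(self, n_inputs, n_categories, rho=0.5, zeta=2.0):
        self.rho, self.zeta = rho, zeta
        # Step 0: W2:1 starts as all 1's; every row of W1:2 starts at the
        # same small normalized value (the fast-learning value for an all-ones a1).
        self.W21 = np.ones((n_inputs, n_categories))
        self.W12 = np.full((n_categories, n_inputs), zeta / (zeta + n_inputs - 1))

    def present(self, p):
        """Present one binary pattern; return the resonating category index."""
        p = np.asarray(p, dtype=float)
        inhibited = set()
        while len(inhibited) < self.W12.shape[0]:
            # Steps 1-2: Layer 1 output is p (Layer 2 inactive); Layer 2 competition.
            net = self.W12 @ p
            net[list(inhibited)] = -np.inf
            j = int(np.argmax(net))                     # ties -> smallest index
            # Steps 3-4: expectation (column j of W2:1), AND with the input.
            a1 = np.logical_and(p, self.W21[:, j]).astype(float)
            # Steps 5-6: vigilance test in the orienting subsystem.
            if np.sum(a1) / max(np.sum(p), 1e-12) < self.rho:
                inhibited.add(j)                        # reset: inhibit neuron j, search again
                continue
            # Steps 7-8: resonance; fast-learning weight updates.
            self.W12[j] = self.zeta * a1 / (self.zeta - 1 + np.sum(a1))
            self.W21[:, j] = a1
            return j
        return None                                     # all categories inhibited
```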

  37. Solved Problem: P16.5 Train an ART1 network using the given parameters and choosing 3 categories, with the following three input vectors. Initial weights: W2:1 all 1's, W1:2 as in step 0 of the algorithm. 1-1: Compute the Layer 1 response (Layer 2 inactive): a1 = p1.

  38. P16.5 Continued 1-2: Compute the input to Layer 2. Since all neurons have the same input, pick the first neuron as the winner. 1-3: Compute the L2-L1 expectation (column 1 of W2:1).

  39. P16.5 Continued 1-4: Adjust the Layer 1 output to include the expectation. 1-5: Determine the degree of match: it is above the vigilance, therefore no reset. 1-6: Since there was no reset, continue with step 7. 1-7: Resonance has occurred; update row 1 of W1:2.

  40. P16.5 Continued 1-8: Update column 1 of W2:1. 2-1: Compute the new Layer 1 response (Layer 2 inactive): a1 = p2. 2-2: Compute the input to Layer 2. Since neurons 2 and 3 have the same input, pick the second neuron as the winner.

  41. P16.5 Continued 2-3: Compute the L2-L1 expectation (column 2 of W2:1). 2-4: Adjust the Layer 1 output to include the expectation. 2-5: Determine the degree of match: it is above the vigilance, therefore no reset. 2-6: Since there was no reset, continue with step 7.

  42. P16.5 Continued 2-7: Resonance has occurred; update row 2 of W1:2. 2-8: Update column 2 of W2:1. 3-1: Compute the new Layer 1 response: a1 = p3. 3-2: Compute the input to Layer 2.

  43. P16.5 Continued 3-3: Compute the L2-L1 expectation (column 1 of W2:1). 3-4: Adjust the Layer 1 output to include the expectation. 3-5: Determine the degree of match: it is above the vigilance, therefore no reset. 3-6: Since there was no reset, continue with step 7.

  44. P16.5 Continued 3-7: Resonance has occurred; update row 1 of W1:2. 3-8: Update column 1 of W2:1. • This completes the training: if any of the three patterns is applied again, the weights will not change. These patterns have been successfully clustered.
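
A usage sketch of the ART1 class given after the algorithm summary, with hypothetical binary patterns and a fairly low vigilance (not the actual vectors or parameters of P16.5):

```python
# Hypothetical patterns and parameters, not those of P16.5.
patterns = [
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
]

net = ART1(n_inputs=4, n_categories=3, rho=0.4)
for epoch in range(3):                     # repeat until the weights stabilize
    categories = [net.present(p) for p in patterns]
print(categories)                          # -> [0, 0, 1]: the first two patterns share a category
```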

  45. Solved Problem: P16.6 Repeat Problem P16.5, but change the vigilance parameter ρ to a larger value. • The training will proceed exactly as in Problem P16.5 until pattern p3 is presented. 3-1: Compute the Layer 1 response: a1 = p3. 3-2: Compute the input to Layer 2.

  46. P16.6 Continued 3-3: Compute the L2-L1 expectation (column 1 of W2:1). 3-4: Adjust the Layer 1 output to include the expectation. 3-5: Determine the degree of match: it is below the vigilance, therefore a reset occurs. 3-6: Since there was a reset, set a21 = 0 (inhibit the winning neuron, neuron 1) until an adequate match occurs (resonance), and return to step 1.

  47. P16.6 Continued 4-1: Recompute the Layer 1 response (Layer 2 inactive): a1 = p3. 4-2: Compute the input to Layer 2. Since neuron 1 is inhibited, neuron 2 is the winner. 4-3: Compute the L2-L1 expectation (column 2 of W2:1). 4-4: Adjust the Layer 1 output to include the expectation.

  48. P16.6 Continued 4-5: Determine the degree of match: it is below the vigilance, therefore a reset occurs. 4-6: Since there was a reset, set a22 = 0 (inhibit neuron 2) until an adequate match occurs (resonance), and return to step 1. 5-1: Recompute the Layer 1 response: a1 = p3. 5-2: Compute the input to Layer 2. Since neurons 1 & 2 are inhibited, neuron 3 is the winner.

  49. P16.6 Continued 5-3: Compute the L2-L1 expectation (column 3 of W2:1). 5-4: Adjust the Layer 1 output to include the expectation. 5-5: Determine the degree of match: it is above the vigilance, therefore no reset. 5-6: Since there was no reset, continue with step 7.

  50. P16.6 Continued 5-7: Resonance has occurred; update row 3 of W1:2. 5-8: Update column 3 of W2:1. • This completes the training: if any of the three patterns is applied again, the weights will not change. The three patterns have now been placed in separate categories.
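
Using the same ART1 sketch and the same hypothetical patterns as before, the effect illustrated by P16.6 can be reproduced qualitatively by raising the vigilance:

```python
net = ART1(n_inputs=4, n_categories=3, rho=0.9)   # stricter vigilance
for epoch in range(3):
    categories = [net.present(p) for p in patterns]
print(categories)   # -> [0, 1, 2]: stricter matching splits the patterns into more categories
```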
