
Architecture and Equilibria (Structure and Equilibrium). 刘瑞华, 罗雪梅. Advisor: 曾平

Presentation Transcript


  1. Chapter 6 Architecture and Equilibria (Structure and Equilibrium). 刘瑞华, 罗雪梅. Advisor: 曾平

  2. Chapter 6 Architecture and Equilibria. Preface: the Lyapunov stability theorem.

  3. Chapter 6 Architecture and Equilibria. 6.1 Neural Networks as Stochastic Gradient Systems. We classify neural network models by their synaptic connection topologies and by how learning modifies those connection topologies.

  4. Chapter 6 Architecture and Equilibria. 6.1 Neural Networks as Stochastic Gradient Systems.

  5. Chapter 6 Architecture and Equilibria. 6.1 Neural Networks as Stochastic Gradient Systems. Three stochastic gradient systems represent the three main categories: 1) feedforward supervised neural networks trained with the backpropagation (BP) algorithm; 2) feedforward unsupervised competitive learning or adaptive vector quantization (AVQ) networks; 3) feedback unsupervised random adaptive bidirectional associative memory (RABAM) networks.

  6. Chapter 6 Architecture and Equilibria. 6.2 Global Equilibria: Convergence and Stability. A neural network consists of synapses and neurons, which gives three dynamical systems: the synaptic dynamical system, the neuronal dynamical system, and the joint neuronal-synaptic dynamical system. Historically, neural engineers have studied the first or second system: learning in feedforward neural networks, and neuronal stability in nonadaptive feedback neural networks. RABAM and ART networks depend on the joint equilibration of the synaptic and neuronal dynamical systems.

  7. Chapter 6 Architecture and Equilibria. 6.2 Global Equilibria: Convergence and Stability. Equilibrium is steady state. Convergence is synaptic equilibrium; stability is neuronal equilibrium. More generally, neural signals reach steady state even though the activations may still change; we denote this steady state in the neuronal fields. Neurons fluctuate faster than synapses fluctuate. The stability-convergence dilemma: the synapses slowly encode the neural patterns being learned, but when the synapses change, this tends to undo the stable neuronal patterns.

  8. Chapter 6 Architecture and Equilibria. 6.3 Synaptic Convergence to Centroids: AVQ Algorithms. Competitive learning adaptively quantizes the input pattern space and characterizes the continuous distribution of patterns. We shall prove that competitive AVQ synaptic vectors converge to pattern-class centroids, and that they vibrate about the centroids in a Brownian motion.

  9. Chapter 6 Architecture and Equilibria. 6.3 Synaptic Convergence to Centroids: AVQ Algorithms. Competitive AVQ stochastic differential equations. The random indicator function: supervised learning algorithms depend explicitly on the indicator functions, while unsupervised learning algorithms do not require this pattern-class information. The indicator function and the class centroid are written out below.
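A hedged reconstruction of the two formulas named on this slide, in standard AVQ notation; the decision classes D_j and the pattern density p(x) are assumed symbols, not quoted from the slide:
\[
I_{D_j}(x) \;=\;
\begin{cases}
1, & x \in D_j \\
0, & x \notin D_j
\end{cases}
\qquad\qquad
\bar{x}_j \;=\; \frac{\int_{D_j} x\,p(x)\,dx}{\int_{D_j} p(x)\,dx}
\]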

  10. Chapter 6 Architecture and Equilibria. 6.3 Synaptic Convergence to Centroids: AVQ Algorithms. The stochastic unsupervised competitive learning law: we want to show that, at equilibrium, each synaptic vector equals its class centroid. We assume the competitive win signal approximates the class indicator function. The equilibrium and convergence results depend on approximation (6-11), so (6-10) reduces to the simpler law sketched below.
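A hedged reconstruction of the law and the approximation referred to above, in vector notation; the symbols (synaptic vector m_j, competitive signal S_j(y_j), zero-mean noise vector n_j) and the equation-number assignments are assumptions consistent with the rest of the chapter:
\[
\dot{m}_j = S_j(y_j)\,[\,x - m_j\,] + n_j \tag{6-10}
\]
\[
S_j(y_j) \approx I_{D_j}(x) \tag{6-11}
\]
\[
\dot{m}_j = I_{D_j}(x)\,[\,x - m_j\,] + n_j \tag{6-12}
\]
The goal is then to show that the equilibrium condition E[\dot{m}_j] = 0 forces m_j = \bar{x}_j, the class centroid.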

  11. Chapter 6 Architecture and Equilibria. 6.3 Synaptic Convergence to Centroids: AVQ Algorithms. The competitive AVQ algorithm: 1. Initialize the synaptic vectors. 2. For a random sample, find the closest ("winning") synaptic vector. 3. Update the winning synaptic vector with the UCL, SCL, or DCL learning algorithm. A runnable sketch of this loop appears below.
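A minimal runnable sketch of the three-step loop above, using the UCL update from the next slide. The function name avq_ucl, the sample data, and the learning-rate schedule c_t = 0.1/(1+t) are illustrative assumptions, not part of the source.

import numpy as np

def avq_ucl(samples, num_units, seed=0):
    """Unsupervised competitive learning (UCL) sketch of the AVQ loop."""
    rng = np.random.default_rng(seed)
    # 1. Initialize synaptic vectors, here from randomly chosen samples.
    m = samples[rng.choice(len(samples), size=num_units, replace=False)].copy()
    for t, x in enumerate(samples):
        # 2. Find the closest ("winning") synaptic vector.
        j = np.argmin(np.linalg.norm(m - x, axis=1))
        # 3. Update only the winner with a slowly decreasing coefficient c_t.
        c_t = 0.1 / (1.0 + t)
        m[j] += c_t * (x - m[j])
    return m

# Usage: two Gaussian clusters; the two synaptic vectors should end up
# near the cluster centroids, illustrating the AVQ centroid theorem.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(-2.0, 0.5, (500, 2)), rng.normal(2.0, 0.5, (500, 2))])
rng.shuffle(data)
print(avq_ucl(data, num_units=2))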

  12. Chapter 6 Architecture and Equilibria. 6.3 Synaptic Convergence to Centroids: AVQ Algorithms. Unsupervised competitive learning (UCL) defines a slowly decreasing sequence of learning coefficients. Supervised competitive learning (SCL) additionally uses the pattern-class information through a reinforcement signal. Both update rules are sketched below.
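A hedged reconstruction of the two update rules in discrete form; the learning coefficients c_t and the reinforcement function r_j built from the class indicator functions are assumed notation:
\[
\text{UCL (winner } j \text{ only; other synaptic vectors stay unchanged):}\qquad
m_j(t+1) = m_j(t) + c_t\,[\,x(t) - m_j(t)\,]
\]
\[
\text{SCL:}\qquad
m_j(t+1) = m_j(t) + c_t\, r_j\big(x(t)\big)\,[\,x(t) - m_j(t)\,],
\qquad
r_j(x) = I_{D_j}(x) - \sum_{i \neq j} I_{D_i}(x)
\]
The reinforcement r_j rewards a correct win (+1) and punishes an incorrect one (-1).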

  13. Chapter 6 Architecture and Equilibria. 6.3 Synaptic Convergence to Centroids: AVQ Algorithms. Differential competitive learning (DCL): the update is driven by the time change of the jth neuron's competitive signal; in practice we use only the sign of (6-20), as sketched below. Stochastic equilibrium and convergence: competitive synaptic vectors converge to decision-class centroids, though they may converge to local maxima.
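A hedged reconstruction of the DCL update; ΔS_j denotes the time change of the jth competitive signal mentioned above, and using only its sign corresponds to the remark about (6-20):
\[
m_j(t+1) = m_j(t) + c_t\,\Delta S_j\big(y_j(t)\big)\,[\,x(t) - m_j(t)\,],
\qquad
\Delta S_j\big(y_j(t)\big) \;\approx\; \operatorname{sgn}\!\big[\,S_j\big(y_j(t+1)\big) - S_j\big(y_j(t)\big)\,\big]
\]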

  14. Chapter 6 Architecture and Equilibria. 6.3 Synaptic Convergence to Centroids: AVQ Algorithms. AVQ centroid theorem: if a competitive AVQ system converges, it converges to the centroid of the sampled decision class. Proof. Suppose the jth neuron in F_Y wins the active competition, the jth synaptic vector codes for that decision class, and the synaptic vector has reached equilibrium. The resulting computation is sketched below.
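A hedged sketch of the computation these three suppositions set up (the remaining proof slides carry it out): at equilibrium the average synaptic change vanishes, and with the win-signal approximation S_j(y_j) ≈ I_{D_j}(x) this forces
\[
0 \;=\; E[\dot{m}_j]
\;=\; \int I_{D_j}(x)\,(x - m_j)\,p(x)\,dx
\;=\; \int_{D_j} (x - m_j)\,p(x)\,dx ,
\]
so that
\[
m_j \;=\; \frac{\int_{D_j} x\,p(x)\,dx}{\int_{D_j} p(x)\,dx} \;=\; \bar{x}_j .
\]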

  15. Chapter 6 Architecture and Equilibria. 6.3 Synaptic Convergence to Centroids: AVQ Algorithms.

  16. Chapter 6 Architecture and Equilibria. 6.4 AVQ Convergence Theorem. AVQ convergence theorem: stochastic competitive learning systems are asymptotically stable, and synaptic vectors converge to centroids. Competitive synaptic vectors converge exponentially quickly to pattern-class centroids. Proof. Consider the random quadratic form L sketched below; the pattern vectors x do not change in time.
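A hedged reconstruction of the random quadratic form L the proof considers, in component notation; the 1/2 factor is a convention assumption:
\[
L \;=\; \frac{1}{2} \sum_{i} \sum_{j} I_{D_j}(x)\,\big(x_i - m_{ij}\big)^2
\]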

  17. Chapter 6 Architecture and Equilibria. 6.4 AVQ Convergence Theorem. The average E[L] serves as a Lyapunov function for the stochastic competitive dynamical system. Assume the noise process is zero-mean and independent of the "signal" process.

  18. Chapter 6 Architecture and Equilibria. 6.4 AVQ Convergence Theorem. So, on average by the learning law (6-12), E[L] decreases if any synaptic vector moves along its trajectory. So the competitive AVQ system is asymptotically stable and, in general, converges exponentially quickly to a local equilibrium. Suppose the time derivative of E[L] vanishes; then every synaptic vector has reached equilibrium and is constant. The averaging step is sketched below.
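A hedged sketch of that averaging step, assuming the zero-mean, signal-independent noise from the previous slide and the reduced learning law (6-12); the cross terms with the noise vanish in expectation:
\[
\dot{E}[L] \;=\; E\!\left[\sum_{i,j} \frac{\partial L}{\partial m_{ij}}\,\dot{m}_{ij}\right]
\;=\; -\sum_{i,j} E\!\left[ I_{D_j}(x)\,\big(x_i - m_{ij}\big)^2 \right] \;\le\; 0 ,
\]
with equality exactly when every winning synaptic vector has stopped moving.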

  19. Chapter 6 Architecture and Equilibria. 6.4 AVQ Convergence Theorem. Since p(x) is a nonnegative weight function, the weighted integral of the learning differences must equal zero, so the equilibrium synaptic vectors equal the centroids. Q.E.D.

  20. Chapter 6 Architecture and Equilibria. 6.5 Global Stability of Feedback Neural Networks. • Global stability is a joint neuronal-synaptic steady state. • Global stability theorems are powerful but limited. • Their power: dimension independence, nonlinear generality, and exponentially fast convergence to fixed points. • Their limitation: they do not tell us where the equilibria occur in the state space.

  21. Chapter 6 Architecture and Equilibria. 6.5 Global Stability of Feedback Neural Networks. The stability-convergence dilemma. • The stability-convergence dilemma arises from the asymmetry in neuronal and synaptic fluctuation rates. • Neurons change faster than synapses change. • Neurons fluctuate at the millisecond level; synapses fluctuate at the second or even minute level. • The fast-changing neurons must balance the slow-changing synapses.

  22. Chapter 6 Architecture and Equilibria. 6.5 Global Stability of Feedback Neural Networks. The stability-convergence dilemma. 1. Asymmetry: neurons in the neuronal fields fluctuate faster than the synapses in M. 2. Stability: the neuronal activations reach steady state (pattern formation). 3. Learning: the steady-state patterns drive the synapses to change. 4. Undoing: the synaptic changes in turn perturb the neuronal steady state. The ABAM theorem offers a general solution to the stability-convergence dilemma.

  23. Chapter 6 Architecture and Equilibria. 6.6 The ABAM Theorem. The ABAM theorem (adaptive bidirectional associative memory): the Hebbian ABAM and competitive ABAM models are globally stable. The Hebbian ABAM model is given by (6-33)-(6-35); the competitive ABAM model replaces (6-35) with (6-36). Both are sketched below.
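A hedged reconstruction of the two models in Cohen-Grossberg-style notation; the activations x_i, y_j, signal functions S_i, S_j, amplification functions a_i, a_j, decay terms b_i, b_j, and synapses m_ij are assumed symbols, and the equation-number assignments follow the slide's references:
\[
\dot{x}_i = -a_i(x_i)\Big[b_i(x_i) - \sum_j S_j(y_j)\,m_{ij}\Big] \tag{6-33}
\]
\[
\dot{y}_j = -a_j(y_j)\Big[b_j(y_j) - \sum_i S_i(x_i)\,m_{ij}\Big] \tag{6-34}
\]
\[
\dot{m}_{ij} = -m_{ij} + S_i(x_i)\,S_j(y_j) \tag{6-35}
\]
Competitive ABAM: replace (6-35) with the competitive learning law
\[
\dot{m}_{ij} = S_j(y_j)\,\big[S_i(x_i) - m_{ij}\big] . \tag{6-36}
\]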

  24. Chapter 6 Architecture and Equilibria. 6.6 The ABAM Theorem. If the positivity assumptions hold (strictly increasing signal functions and strictly positive amplification functions), then the models are asymptotically stable, and the squared activation and synaptic velocities decrease exponentially quickly to their equilibrium values. Proof. The proof uses the bounded Lyapunov function L sketched below.
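A hedged reconstruction of that Lyapunov function, under the model equations sketched on the previous slide:
\[
L \;=\; -\sum_i \sum_j S_i(x_i)\,S_j(y_j)\,m_{ij}
\;+\; \sum_i \int_0^{x_i} S_i'(\theta_i)\,b_i(\theta_i)\,d\theta_i
\;+\; \sum_j \int_0^{y_j} S_j'(\theta_j)\,b_j(\theta_j)\,d\theta_j
\;+\; \tfrac{1}{2}\sum_i \sum_j m_{ij}^2 \tag{6-37}
\]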

  25. Chapter 6 Architecture and Equilibria. 6.6 The ABAM Theorem. Differentiate the Lyapunov function (6-37) along system trajectories.
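A hedged sketch of where that differentiation plausibly leads once the model equations are substituted, written in terms of the squared velocities that the theorem statement mentions:
\[
\dot{L} \;=\; -\sum_i \frac{S_i'(x_i)}{a_i(x_i)}\,\dot{x}_i^{\,2}
\;-\; \sum_j \frac{S_j'(y_j)}{a_j(y_j)}\,\dot{y}_j^{\,2}
\;-\; \sum_{i,j} \dot{m}_{ij}^{\,2} \;\le\; 0 ,
\]
which is strictly negative for any nonzero change in an activation or a synapse once the positivity assumptions hold, and which is plausibly the expression the later slides cite as (6-43).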

  26. Chapter 6 Architecture and Equilibria. 6.6 The ABAM Theorem. To prove global stability for the competitive learning law (6-36), we prove the stronger asymptotic stability of the ABAM models under the positivity assumptions.

  27. Chapter 6 Architecture and Equilibria. 6.6 The ABAM Theorem. Along trajectories, L strictly decreases for any nonzero change in any neuronal activation or any synapse, so trajectories end in equilibrium points. Indeed, (6-43) implies that the squared velocities decrease exponentially quickly, because of the strict negativity of (6-43) and conditions that rule out pathologies. Q.E.D.

  28. Chapter 6 Architecture and Equilibria. 6.7 Structural Stability of Unsupervised Learning and RABAM. • Is unsupervised learning structurally stable? • Structural stability is insensitivity to small perturbations. • Structural stability ignores many small perturbations: such perturbations preserve qualitative properties, and basins of attraction maintain their basic shape.

  29. Chapter 6 Architecture and Equilibria. 6.7 Structural Stability of Unsupervised Learning and RABAM. Random adaptive bidirectional associative memories (RABAM). Brownian diffusions perturb the ABAM model: the differential equations in (6-33) through (6-35) become stochastic differential equations, with random processes as solutions. The diffusion signal-Hebbian RABAM model is sketched below.
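A hedged reconstruction of the diffusion RABAM model: add independent Brownian motions B_i, B_j, B_ij (assumed symbols, consistent with the slide's description) to the ABAM equations (6-33)-(6-35) sketched earlier:
\[
dx_i = -a_i(x_i)\Big[b_i(x_i) - \sum_j S_j(y_j)\,m_{ij}\Big]\,dt + dB_i
\]
\[
dy_j = -a_j(y_j)\Big[b_j(y_j) - \sum_i S_i(x_i)\,m_{ij}\Big]\,dt + dB_j
\]
\[
dm_{ij} = \big[-m_{ij} + S_i(x_i)\,S_j(y_j)\big]\,dt + dB_{ij}
\]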

  30. Chapter 6 Architecture and Equilibria. 6.7 Structural Stability of Unsupervised Learning and RABAM. With the stochastic competitive law:

  31. Chapter 6 Architecture and Equilibria. 6.7 Structural Stability of Unsupervised Learning and RABAM. With the stochastic competitive law:

  32. Chapter 6 Architecture and Equilibria. 6.7 Structural Stability of Unsupervised Learning and RABAM. With noise (an independent zero-mean Gaussian white-noise process), the signal-Hebbian noise RABAM model is sketched below.
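A hedged reconstruction of the noise-notation version: the same three equations written with additive zero-mean Gaussian white-noise processes n_i, n_j, n_ij of finite variance (assumed symbols):
\[
\dot{x}_i = -a_i(x_i)\Big[b_i(x_i) - \sum_j S_j(y_j)\,m_{ij}\Big] + n_i
\]
\[
\dot{y}_j = -a_j(y_j)\Big[b_j(y_j) - \sum_i S_i(x_i)\,m_{ij}\Big] + n_j
\]
\[
\dot{m}_{ij} = -m_{ij} + S_i(x_i)\,S_j(y_j) + n_{ij},
\qquad
E[n_i] = E[n_j] = E[n_{ij}] = 0 .
\]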

  33. Chapter 6 Architecture and Equilibria. 6.7 Structural Stability of Unsupervised Learning and RABAM. RABAM theorem. The RABAM model (6-46)-(6-48), or (6-50)-(6-54), is globally stable. If the signal functions are strictly increasing and the amplification functions are strictly positive, the RABAM model is asymptotically stable. Proof. The ABAM Lyapunov function L in (6-37) now defines a random process: at each time t, L(t) is a random variable. The expected ABAM Lyapunov function E[L] is a Lyapunov function for the RABAM.

  34. Chapter 6 Architecture and Equilibria. 6.7 Structural Stability of Unsupervised Learning and RABAM.

  35. Chapter 6 Architecture and Equilibria. 6.7 Structural Stability of Unsupervised Learning and RABAM.
