On the Basis Learning Rule of Adaptive-Subspace SOM (ASSOM)



  1. ICANN’06. On the Basis Learning Rule of Adaptive-Subspace SOM (ASSOM). Huicheng Zheng, Christophe Laurent and Grégoire Lefebvre. 13th September 2006. Thanks to the MUSCLE Internal Fellowship (http://www.muscle-noe.org).

  2. Outline • Introduction • Minimization of the ASSOM objective function • Fast-learning methods • Insight on the basis vector rotation • Batch-mode basis vector updating • Experiments • Conclusions

  3. Motivation of ASSOM • Learning “invariance classes” with subspace learning and SOM [Kohonen, T., et al., 1997] • For example: spatial-translation invariance [Figure: pattern classes such as rectangles, circles, triangles, …]

  4. Applications of ASSOM • Invariant feature formation [Kohonen, T., et al., 1997] • Speech processing [Hase, H., et al., 1996] • Texture segmentation [Ruiz del Solar, J., 1998] • Image retrieval [De Ridder, D., et al., 2000] • Image classification [Zhang, B., et al., 1999]

  5. ASSOM Modules Representing Subspaces [Figure: the module arrays in ASSOM, with rectangular and hexagonal topologies; each module j represents a subspace L(j)]

  6. Competition and Adaptation • Repeatedly: • Competition: the winner c = argmax_i ‖x̂_L(i)‖, where x̂_L(i) is the projection of the input x onto the subspace L(i) • Adaptation: for the winner c and the modules i in its neighborhood, rotate the basis vectors with the N×N matrix: b_h(i)(t+1) = [I + λ(t) h_c(i)(t) x xᵀ / (‖x̂_L(i)‖ ‖x‖)] b_h(i)(t) • Orthonormalize the basis vectors (see the sketch below)
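
To make the loop concrete, here is a minimal NumPy sketch of one competition/adaptation step. The module layout, the neighborhood function and the small denominator guard are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def assom_step(bases, x, neighborhood, lam):
    """One ASSOM competition/adaptation step (illustrative sketch).

    bases: (K, M, N) array -- K modules, each with M orthonormal
           N-dimensional basis vectors as rows.
    x: input vector, shape (N,).
    neighborhood: function (i, c) -> weight h_c(i) in [0, 1] (assumed).
    lam: learning rate lambda(t).
    """
    # Competition: the winner maximizes ||x̂_L(i)||, the norm of the
    # projection of x onto the module's subspace.
    coords = bases @ x                       # (K, M): scalar projections x^T b_h
    proj_norms = np.linalg.norm(coords, axis=1)
    c = int(np.argmax(proj_norms))

    # Adaptation: rotate the bases of the winner and its neighbors.
    x_norm = np.linalg.norm(x)
    for i in range(len(bases)):
        h = neighborhood(i, c)
        if h == 0.0:
            continue
        # Rotation operator P = I + lam * h * x x^T / (||x̂|| ||x||)
        P = np.eye(len(x)) + lam * h * np.outer(x, x) / (proj_norms[i] * x_norm + 1e-12)
        bases[i] = bases[i] @ P.T            # b_h <- P b_h for every row
        # Re-orthonormalize (Gram-Schmidt via QR).
        q, _ = np.linalg.qr(bases[i].T)
        bases[i] = q.T
    return c
```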

  7. Transformation Invariance • Episodes correspond to signal subspaces. • Example: one episode, S, consists of 8 vectors. Each vector is translated in time with respect to the others.

  8. Episode Learning • Episode winner: c = argmax_i Σ_{s∈S} ‖x̂_L(i)(s)‖² (see the sketch below) • Adaptation: for each sample x(s) in the episode X = {x(s), s∈S} • Rotate the basis vectors: b_h(i)(t+1) = [I + λ(t) h_c(i)(t) x(s) x(s)ᵀ / (‖x̂_L(i)(s)‖ ‖x(s)‖)] b_h(i)(t) • Orthonormalize the basis vectors
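
A matching sketch of the episode-level competition, under the same assumed (K, M, N) module layout as the step above:

```python
import numpy as np

def episode_winner(bases, X):
    """Episode-level competition (sketch): the winner maximizes the
    summed squared projection norms over the whole episode.

    bases: (K, M, N) array of K modules' orthonormal basis vectors.
    X: (S, N) array holding the episode's component vectors x(s).
    """
    # coords[k, s, h] = x(s)^T b_h^(k); with orthonormal rows,
    # ||x̂_L(k)(s)||^2 is the sum of squared coordinates.
    coords = np.einsum('kmn,sn->ksm', bases, X)
    energy = (coords ** 2).sum(axis=(1, 2))     # shape (K,)
    return int(np.argmax(energy))
```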

  9. Deficiency of the Traditional Learning Rule • The rotation operator P_c(i)(x(s), t) is an N×N matrix. • N: input vector dimension • Approximately: NOP (number of operations) ∝ MN² • M: subspace dimension

  10. Efforts in the Literature • Adaptive Subspace Map (ASM) [De Ridder, D., et al., 2000]: • Drops topological ordering • Performs batch-mode updating with PCA • Essentially not ASSOM. • Replacing the basis updating rule [McGlinchey, S.J., Fyfe, C., 1998]: • NOP ∝ M²N

  11. Outline • Introduction • Minimization of the ASSOM objective function • Fast-learning methods • Insight on the basis vector rotation • Batch-mode basis vector updating • Experiments • Conclusions

  12. Minimization of the ASSOM Objective Function E = ∫ Σ_i h_c(i) Σ_{s∈S} ‖x̃_L(i)(s)‖² P(X) dX, where: x̃_L(i)(s) = x(s) − x̂_L(i)(s) (projection error), P(X): probability density function of X. Solution: stochastic gradient descent: b_h(i)(t+1) = [I + λ(t) h_c(i)(t) Σ_{s∈S} x(s) x(s)ᵀ] b_h(i)(t), λ(t): learning rate function (derivation sketched below)
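
For reference, a sketch of the gradient computation behind this step, using the definitions above; the factor of 2 from the derivative is absorbed into λ(t):

```latex
% Per-sample projection error, with orthonormal basis vectors b_h:
\[
  \|\tilde{\mathbf{x}}\|^2 = \|\mathbf{x}\|^2 - \sum_{h=1}^{M}\bigl(\mathbf{b}_h^{\top}\mathbf{x}\bigr)^2
  \quad\Longrightarrow\quad
  \frac{\partial\,\|\tilde{\mathbf{x}}\|^2}{\partial\,\mathbf{b}_h}
  = -2\bigl(\mathbf{b}_h^{\top}\mathbf{x}\bigr)\mathbf{x}.
\]
% One stochastic gradient-descent step on E (factor 2 absorbed into lambda):
\[
  \mathbf{b}_h^{(i)}(t+1)
  = \Bigl[\mathbf{I} + \lambda(t)\,h_c^{(i)}(t)\sum_{s\in S}\mathbf{x}(s)\,\mathbf{x}(s)^{\top}\Bigr]\,\mathbf{b}_h^{(i)}(t).
\]
```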

  13. Minimization of the ASSOM Objective Function When λ(t) is small: ∏_{s∈S} [I + λ(t) h_c(i)(t) x(s) x(s)ᵀ] ≈ I + λ(t) h_c(i)(t) Σ_{s∈S} x(s) x(s)ᵀ, i.e. applying the rotation sample by sample approximates the gradient step. In practice, better stability has been observed with the modified form proposed in [Kohonen, T., et al., 1997]: b_h(i)(t+1) = [I + λ(t) h_c(i)(t) x(s) x(s)ᵀ / (‖x̂_L(i)(s)‖ ‖x(s)‖)] b_h(i)(t)

  14. Minimization of the ASSOM Objective Function • The modified form corresponds to a modified objective function: E_m = ∫ Σ_i h_c(i) Σ_{s∈S} ‖x̃_L(i)(s)‖² / (‖x̂_L(i)(s)‖ ‖x(s)‖) P(X) dX • Solution to E_m: b_h(i)(t+1) = [I + λ(t) h_c(i)(t) Σ_{s∈S} x(s) x(s)ᵀ / (‖x̂_L(i)(s)‖ ‖x(s)‖)] b_h(i)(t) • When λ(t) is small, this batch step approximates the sequential per-sample rotations.

  15. Outline • Introduction • Minimization of the ASSOM objective function • Fast-learning methods • Insight on the basis vector rotation • Batch-mode basis vector updating • Experiments • Conclusions

  16. Insight on the Basis Vector Rotation • Recall: traditional learning rotates each basis vector with an N×N operator: b_h(i)(t+1) = [I + λ(t) h_c(i)(t) x(s) x(s)ᵀ / (‖x̂_L(i)(s)‖ ‖x(s)‖)] b_h(i)(t)

  17. Insight on the Basis Vector Rotation • Expanding the operator: b_h(i)(t+1) = b_h(i)(t) + λ(t) h_c(i)(t) [x(s)ᵀ b_h(i)(t)] x(s) / (‖x̂_L(i)(s)‖ ‖x(s)‖), where x(s)ᵀ b_h(i)(t) is a scalar projection. • For fast computing, calculate the scalar projection first, then scale x(s) by the resulting scalar coefficient to get the correction to b_h(i)(t). • NOP ∝ MN • Referred to as FL-ASSOM (Fast-Learning ASSOM), as sketched below.
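
A minimal sketch of this O(MN) update for one module and one component vector; the epsilon guard is an assumption:

```python
import numpy as np

def fl_assom_rotate(B, x, h, lam):
    """FL-ASSOM rotation sketch: update the M x N basis matrix B in
    O(MN) instead of forming the N x N rotation operator (O(MN^2)).

    B: rows are the basis vectors b_h of one module (assumed orthonormal).
    x: one component vector of the episode.
    h: neighborhood weight h_c(i); lam: learning rate lambda(t).
    """
    coords = B @ x                       # scalar projections x^T b_h, shape (M,)
    x_hat_norm = np.linalg.norm(coords)  # ||x̂|| (orthonormal basis)
    scale = lam * h / (x_hat_norm * np.linalg.norm(x) + 1e-12)
    # Each correction is x scaled by a scalar: b_h += scale * (x^T b_h) * x.
    # Orthonormalization still follows, as in the basic ASSOM.
    return B + np.outer(scale * coords, x)
```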

  18. Insight on the Basis Vector Rotation

  19. Outline • Introduction • Minimization of the ASSOM objective function • Fast-learning methods • Insight on the basis vector rotation • Batch-mode basis vector updating • Experiments • Conclusions

  20. Batch-mode Fast Learning (BFL-ASSOM) • Motivation: re-use the projections x̂_L(i)(s) previously calculated during module competition. • In the basic ASSOM, L(i) keeps changing with the reception of each component vector x(s), so x̂_L(i)(s) has to be re-calculated for each x(s).

  21. Batch-mode Rotation • Use the solution to the modified objective function E_m: b_h(i)(t+1) = [I + λ(t) h_c(i)(t) Σ_{s∈S} x(s) x(s)ᵀ / (‖x̂_L(i)(s)‖ ‖x(s)‖)] b_h(i)(t) • The subspace remains the same for all the component vectors in the episode. We can now use the projections x̂_L(i)(s) calculated during module competition.

  22. Batch-mode Fast Learning b_h(i)(t+1) = b_h(i)(t) + Σ_{s∈S} α_h(s) x(s), where α_h(s) is a scalar defined by: α_h(s) = λ(t) h_c(i)(t) x(s)ᵀ b_h(i)(t) / (‖x̂_L(i)(s)‖ ‖x(s)‖) • The correction is a linear combination of the component vectors x(s) in the episode (see the sketch below). • For each episode, one orthonormalization of the basis vectors is enough.
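
A sketch of the batch update for one module; shapes and the epsilon guard are assumptions:

```python
import numpy as np

def bfl_assom_update(B, X, h, lam):
    """BFL-ASSOM batch update sketch: one correction per episode,
    re-using the projection norms computed during competition.

    B: M x N orthonormal basis matrix of one module.
    X: episode as an S x N array of component vectors x(s).
    h: neighborhood weight h_c(i); lam: learning rate lambda(t).
    """
    coords = X @ B.T                               # x(s)^T b_h for all s, h: (S, M)
    x_hat_norms = np.linalg.norm(coords, axis=1)   # ||x̂(s)||, re-usable from competition
    x_norms = np.linalg.norm(X, axis=1)
    alpha = lam * h * coords / (x_hat_norms * x_norms + 1e-12)[:, None]  # (S, M)
    B = B + alpha.T @ X                # correction: linear combination of the x(s)
    # One orthonormalization per episode is enough.
    q, _ = np.linalg.qr(B.T)
    return q.T
```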

  23. Outline • Introduction • Minimization of the ASSOM objective function • Fast-learning methods • Insight on the basis vector rotation • Batch-mode basis vector updating • Experiments • Conclusions

  24. Experimental Demonstration • Emergence of translation-invariant filters • Episodes are drawn from a colored noise image • Vectors in episodes are subject to translation [Figures: white noise image, colored noise image, and an example episode (magnified)]
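
A sketch of how such translated episodes could be generated; the colored-noise construction, window size, episode length and shift range here are all illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def make_episode(image, win=9, n_vectors=8, max_shift=2, rng=np.random):
    """Draw one episode of translated windows from a 2-D image.

    Returns an (n_vectors, win*win) array of flattened zero-mean windows,
    each a translated copy of the same region (sizes are assumptions).
    """
    H, W = image.shape
    r = rng.randint(max_shift, H - win - max_shift)
    c = rng.randint(max_shift, W - win - max_shift)
    episode = []
    for _ in range(n_vectors):
        dr, dc = rng.randint(-max_shift, max_shift + 1, size=2)
        patch = image[r + dr : r + dr + win, c + dc : c + dc + win]
        episode.append(patch.ravel() - patch.mean())   # zero-mean vector
    return np.asarray(episode)

# Colored noise: white noise smoothed along both axes (an assumption).
rng = np.random.RandomState(0)
white = rng.randn(128, 128)
kernel = np.ones(5) / 5
smooth = lambda m: np.convolve(m, kernel, mode="same")
colored = np.apply_along_axis(smooth, 0, white)
colored = np.apply_along_axis(smooth, 1, colored)
episode = make_episode(colored, rng=rng)   # shape (8, 81)
```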

  25. Resulting Filters [Figures: filters learned by FL-ASSOM and BFL-ASSOM; plot of the decrease of the average projection error e with learning step t]

  26. Timing Results [Table: times given in seconds for 1,000 training steps] M: subspace dimension; N: input vector dimension; VU: vector updating time; WL: whole learning time

  27. Timing Results [Plots: change of vector updating time (VU) with input dimension N, and with subspace dimension M; vertical scales of FL-ASSOM and BFL-ASSOM have been magnified 10 times for clarity]

  28. Outline • Introduction • Minimization of the ASSOM objective function • Fast-learning methods • Insight on the basis vector rotation • Batch-mode basis vector updating • Experiments • Conclusions

  29. Conclusions • The basic ASSOM algorithm corresponds to a modified objective function. • Updating of basis vectors in the basic ASSOM corresponds to a scaling of the component vectors in the input episode. • In batch-mode updating, the correction to the basis vectors is a linear combination of the component vectors in the input episode. • Basis learning can be dramatically accelerated with these insights.

  30. References • De Ridder, D., et al., 2000: The adaptive subspace map for image description and image database retrieval. SSPR&SPR 2000. • Hase, H., et al., 1996: Speech signal processing using Adaptive Subspace SOM (ASSOM). Technical Report NC95-140, The Inst. of Electronics, Information and Communication Engineers, Tottori University, Koyama, Japan. • Kohonen, T., et al., 1997: Self-organized formation of various invariant-feature filters in the adaptive-subspace SOM. Neural Computation 9(6). • McGlinchey, S.J., Fyfe, C., 1998: Fast formation of invariant feature maps. EUSIPCO’98. • Ruiz del Solar, J., 1998: Texsom: texture segmentation using Self-Organizing Maps. Neurocomputing 21(1–3). • Zhang, B., et al., 1999: Handwritten digit recognition by adaptive-subspace self-organizing map (ASSOM). IEEE Trans. on Neural Networks 10(4).

  31. Thanks and questions?
