Associative Learning in Hierarchical Self Organizing Learning Arrays Janusz A. Starzyk, Zhen Zhu, and Yue Li School of Electrical Engineering and Computer Science Ohio University, Athens, OH 45701, U.S.A.
Organization • Introduction • Network structure • Associative learning • Simulation results • Conclusions and future work
Introduction - SOLAR • SOLAR – Self Organizing Learning Array • A concept inspired by the structure of biological neural networks • A regular, two- or three-dimensional array of identical processing cells connected to programmable routing channels • Self-organization in both the individual cells and the network structure
Introduction - SOLAR • SOLAR vs. ANN • Deep multi-layer structure with sparse connections • Self-organized neuron functions • Dynamic selection of interconnections • Hardware efficiency • Online learning Figure: a 15 x 7 SOLAR vs. a 15 x 3 ANN
Introduction - Associative Learning • Hetero-associative (HA) • To associate different types of input signals, e.g. a verbal command with an image • Auto-associative (AA) • To recall a pattern from a fractional part, e.g. an image with a missing part • The proposed approach: • An associative learning network in a hierarchical SOLAR structure - both HA and AA www.idi.ntnu.no/~keithd/classes/advai/lectures/assocnet.ppt
Network Structure • Two- or three-dimensional multi-layer regular structure • 2-D networks: rows span the inputs and columns set the network depth • 3-D networks: better suited to image applications • “Small world” network connections (sampled as sketched below) • Mostly local connections with short Euclidean distance (as in biological neural networks) • A few distant connections
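The "small world" wiring can be read as a sampling rule: each neuron draws its inputs from the preceding layer, mostly from nearby rows and occasionally from far away. A minimal sketch of that rule; the function name, sigma, and p_distant are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_inputs(row, n_rows, n_inputs=2, sigma=2.0, p_distant=0.1):
    """Pick the source rows (in the preceding layer) feeding one neuron.

    Most links are local: the row offset is Gaussian with a small std
    (sigma), mimicking biological wiring; a few are distant, drawn
    uniformly over the whole layer.
    """
    sources = []
    for _ in range(n_inputs):
        if rng.random() < p_distant:                  # rare long-range link
            sources.append(int(rng.integers(n_rows)))
        else:                                         # local link, small offset
            offset = int(round(rng.normal(0.0, sigma)))
            sources.append((row + offset) % n_rows)
    return sources

# Each neuron in a 15-row layer draws its 2 inputs from the layer before it.
print([sample_inputs(r, 15) for r in range(3)])
```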
Network Structure • Hierarchical network connection • Each neuron only connects to the preceding layer • Neuron connections: • Redundant initial inputs to be refined in learning • 2 inputs (I1 / I2) and 1 output O • Feed-forward and feed-back links
Associative learning – feed-forward • Semi-logic inputs and internal signals: • scaled from 0 to 1, with 0.5 = unknown; • 0 = determinate low, 1 = determinate high; • > 0.5 = weak high, < 0.5 = weak low. • The I1/I2 relationship is found from: • P(I1 is low), P(I1 is high), P(I2 is low) & P(I2 is high) • The joint probabilities, e.g. P(I1 is low, I2 is low) • The conditional probabilities, e.g. P(I2 is low | I1 is low) = P(I1 is low, I2 is low) / P(I1 is low)
Associative learning – feed-forward • Compare the conditional probabilities against a confidence interval CI, which narrows as the number of samples N grows (on the order of 1/√N). • If P(I2 | I1) – CI > threshold, I2 can be implied from I1 (sketched below).
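A minimal sketch of this feed-forward implication test. The 1/√N confidence margin and the threshold value are illustrative assumptions, not the paper's exact figures:

```python
import numpy as np

def can_imply(i1, i2, threshold=0.7):
    """Test whether "I2 is low" can be implied from "I1 is low".

    i1, i2: arrays of semi-logic values in [0, 1], where 0.5 = unknown.
    Samples below 0.5 count as low, above 0.5 as high; unknowns are dropped.
    """
    known = (i1 != 0.5) & (i2 != 0.5)
    i1, i2 = i1[known], i2[known]
    n = len(i1)
    if n == 0 or not np.any(i1 < 0.5):
        return False
    p_i1_low = np.mean(i1 < 0.5)                    # P(I1 is low)
    p_joint = np.mean((i1 < 0.5) & (i2 < 0.5))      # P(I1 low, I2 low)
    p_cond = p_joint / p_i1_low                     # P(I2 low | I1 low)
    ci = 1.0 / np.sqrt(n)                           # confidence margin
    return p_cond - ci > threshold

rng = np.random.default_rng(1)
a = rng.random(200)
b = np.clip(a + rng.normal(0.0, 0.05, 200), 0.0, 1.0)  # I2 tracks I1
print(can_imply(a, b))                                  # -> True
```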
Associative learning – feed-forward A neuron is an associative neuron if I1 can be implied from I2 and I2 can be implied from I1; otherwise it is a transmitting neuron. Six possible I1/I2 distributions exist for associative neurons; a semi-logic function is designed for each one.
Associative learning – feed-forward • In an associative neuron: • Functions are designed for data transformation and feedback calculation (an illustrative sketch follows). • f1 to f4 – for data centered in one dominant quadrant. • f5 and f6 – for data spread mainly across two quadrants. • The neuron output is 0.5 when an input is unknown.
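The slide does not reproduce f1 to f6 explicitly. As an illustration only, a quadrant-centred function in the spirit of f1 could look like the following; the exact formula is a guess, not the paper's function:

```python
def f1_sketch(i1, i2):
    """Illustrative quadrant-centred semi-logic function (f1-style).

    Assumes training data concentrated where both inputs are high, so the
    output leans high only when both inputs do (an AND-like rule on [0, 1]
    signals). An unknown input propagates as unknown.
    """
    if i1 == 0.5 or i2 == 0.5:     # unknown input -> unknown output
        return 0.5
    return min(i1, i2)             # joint "both high" evidence

print(f1_sketch(0.9, 0.8), f1_sketch(0.9, 0.5), f1_sketch(0.9, 0.2))
# -> 0.8 0.5 0.2
```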
Associative learning – feed-forward • In a transmitting neuron: • The input with the higher entropy (the dominant input) is transmitted to O, while the other is ignored: I1 is the dominant input if H(I1) > H(I2). • O may be an input to other neurons. • O receives feedback Of from the connected neurons, which in turn generates feedback to the neuron's own inputs.
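A short sketch of the dominant-input rule, using the standard binary entropy over the known (non-0.5) samples; the helper names are assumptions:

```python
import numpy as np

def entropy(signal):
    """Binary entropy H(p) of a semi-logic signal over its known samples,
    with p = P(signal is high); returns 0 for a constant signal."""
    known = signal[signal != 0.5]
    if len(known) == 0:
        return 0.0
    p = np.mean(known > 0.5)
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p))

def transmit(i1, i2):
    """Transmitting neuron: forward the higher-entropy (dominant) input
    and ignore the other, per the rule H(I1) > H(I2)."""
    return i1 if entropy(i1) > entropy(i2) else i2

rng = np.random.default_rng(2)
x = rng.random(100)                        # varied signal: high entropy
y = np.full(100, 0.9)                      # nearly constant: low entropy
print(np.array_equal(transmit(x, y), x))   # -> True
```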
Associative learning – feedback • The network generates feedback to the unknown inputs through association.
Associative learning – feedback • N1 -> transmitting neurons • Of is passed back to the input. • N2 -> associative neurons with determined inputs • Feedback has no effect and information passes forward. • N3 -> associative neurons with active feedback and inactive input(s) • Of creates feedback signals I1f and I2f through the function f. • These neurons only pass information backwards. • N4 -> actively associating neurons with inactive feedback • If one of their inputs is inactive, it is overwritten based on its association with the other input and the neuron's function f. (A sketch of these four rules follows.)
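One way to read the four rules as a dispatch over neuron state. The Neuron fields and the f_inv interface are assumptions for illustration, not the paper's data structures:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

UNKNOWN = 0.5

@dataclass
class Neuron:
    kind: str                          # "transmitting" or "associative"
    i1: float = UNKNOWN               # current semi-logic inputs
    i2: float = UNKNOWN
    dominant: int = 1                 # which input a transmitter forwards
    f_inv: Callable[[float], Tuple[float, float]] = lambda of: (of, of)

def backward(n: Neuron, of: float) -> Tuple[float, float]:
    """Return the feedback pair (I1f, I2f) sent back to the inputs."""
    if n.kind == "transmitting":                  # N1: pass Of straight back
        return (of, UNKNOWN) if n.dominant == 1 else (UNKNOWN, of)
    if n.i1 != UNKNOWN and n.i2 != UNKNOWN:       # N2: inputs determined,
        return (UNKNOWN, UNKNOWN)                 # feedback has no effect
    if of != UNKNOWN:                             # N3: invert f to produce
        return n.f_inv(of)                        # I1f and I2f
    # N4: inactive feedback; the missing input is overwritten in the
    # forward direction from its association with the known input.
    return (UNKNOWN, UNKNOWN)
```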
Associative learning – feedback • Calculation of the feedback (using f5 as an example): with an active output feedback, I1f is determined from f5 and weighted by w5, where w5 measures the quality of learning.
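A minimal sketch of that weighting: the raw feedback from inverting f5 is pulled toward the unknown value 0.5 when the learning quality w5 is low. Both the f5 inverse used here and the blending rule are illustrative assumptions:

```python
def feedback_i1(of, w5, f5_inv=lambda of: 1.0 - of):
    """Weight the inverted feedback by w5 in [0, 1]; w5 = 0 yields
    0.5 (unknown), w5 = 1 passes the raw feedback through."""
    raw = f5_inv(of)                      # invert f5 for input I1
    return 0.5 + w5 * (raw - 0.5)

print(feedback_i1(of=1.0, w5=1.0))   # fully trusted feedback -> 0.0
print(feedback_i1(of=1.0, w5=0.0))   # no confidence -> 0.5 (unknown)
```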
Simulation results • The TAE database (from the University of Wisconsin–Madison) • 151 instances, 5 features and 3 classes • The Iris plants database • 150 instances, 4 features and 3 classes • The glass identification database • 214 instances, 9 features and 6 classes • Image recovery • Two letters: B and J
Simulation results - TAE database Features are coded into binary format with sliding bars (sketched below) and classified using orthogonal codes. Non-hierarchical; Gaussian connection distribution, vertical (STD = 30) and horizontal (STD = 5); correct rate = 62.33%
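Sliding-bar coding places a run of ones whose position along the code encodes the feature value. A minimal sketch; the bit counts are illustrative, not the experiment's settings:

```python
import numpy as np

def sliding_bar(value, vmin, vmax, n_bits=8, bar=3):
    """Encode value in [vmin, vmax] as a run of `bar` ones sliding
    along `n_bits` positions."""
    frac = (value - vmin) / (vmax - vmin)
    pos = int(round(frac * (n_bits - bar)))
    code = np.zeros(n_bits, dtype=int)
    code[pos:pos + bar] = 1
    return code

print(sliding_bar(2.5, 0.0, 10.0))   # -> [0 1 1 1 0 0 0 0]
```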
Simulation results - Iris database Non-hierarchical; Gaussian connection distribution, vertical (STD = 30) and horizontal (STD = 5); correct rate = 73.74%
Simulation results - Iris database Hierarchical; vertical connections 80% Gaussian (STD = 2) and 20% uniform; correct rate improved to 75.33%
Simulation results - Iris database • The hierarchical structure appears advantageous. • Using an equal number of bits for features and class IDs gives a better rate. • Performance further improved to 86% with mixed feature/classification bits.
Simulation results – glass ID database • The depth of learning is related to the complexity of the target problem. • With more classes, more actively associating neurons and more layers are needed. Average number of actively associating neurons per layer, with 3 / 6 classes
Simulation results - Image Recovery • A 2-D image recovery task. • 200 patterns were generated by adding random noise to two black-and-white images of the letters B and J. The network was trained with 190 patterns and tested on 10 patterns. • Mean correct classification rate: 100% Training patterns An average image of the training patterns
Simulation results - Image Recovery Testing result and recovered images Testing result and recovered image using input redundancy
Conclusion and Future Work • SOLAR has a flexible and sparse interconnect structure designed to emulate the organization of the human cortex • It handles a wide variety of machine learning tasks, including image recognition, classification and data recovery, and is suitable for online learning • The associative-learning SOLAR is an adaptive network with feedback and inhibitory links • It discovers correlations between inputs and establishes associations inside the neurons • It can perform auto-associative and hetero-associative learning • It can be modified to perform value-driven interaction with the environment