A Self-Organizing CMAC Network With Gray Credit Assignment Ming-Feng Yeh and Kuang-Chiung Chang IEEE Trans. on Systems, Man, and Cybernetics, Part B, Vol. 36, No. 3, 2006, pp. 623-635. Presenter: Cheng-Feng Weng 2008/8/14
Outline • Motivation • Objective • Methods • CMAC • SOCMAC • Experimental results • Summary and conclusion • Comments
Motivation • Limitations of the SOM: • The neighborhood relations between neurons have to be defined in advance. • The dynamics of the SOM algorithm cannot be described as a stochastic gradient on any energy function.
Objective • To incorporate the structure of the cerebellar-model-articulation-controller (CMAC) network into the SOM to construct a self-organizing CMAC (SOCMAC) network.
Methods • CMAC • SOCMAC • Performance Index (PI)
CMAC Model • Properties: • Uses a supervised learning method • The information of a state is distributively stored in Ne memory elements • Fast learning speed (table lookup) • Good generalization ability
CMAC Example • (Figure: input blocks and addressed hypercubes)
CMAC Concept • The stored data yk for the state sk: yk = Σh ck,h·wh, where wh are the memory elements and ck,h ∈ {0,1} is the memory content index (exactly Ne entries equal 1). • The updating rule: wh ← wh + (β/Ne)·ck,h·(ŷk − yk), where ŷk is the desired value of the state and (ŷk − yk) is the learning error.
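The table-lookup output and equal-share update above can be sketched as follows. This is a minimal illustration assuming a flat memory array and a precomputed list of addressed cells; the paper's block/hypercube addressing scheme is omitted.

```python
import numpy as np

# Minimal sketch of a conventional CMAC table (hashing/addressing omitted).
# A state addresses Ne memory cells; its output is their sum.

def cmac_output(w, idx):
    """Output for a state: sum of the Ne addressed memory cells."""
    return w[idx].sum()

def cmac_update(w, idx, y_desired, beta=0.5):
    """Conventional update: the learning error is shared equally
    among the Ne addressed cells (credit 1/Ne each)."""
    err = y_desired - cmac_output(w, idx)
    w[idx] += beta * err / len(idx)
    return w

# Toy memory with 9 cells; one state addresses cells 1, 4, 7 (Ne = 3).
w = np.zeros(9)
for _ in range(20):
    w = cmac_update(w, [1, 4, 7], y_desired=1.0)
# the state's output approaches the desired value 1.0
```

Each update moves the output toward the target by a factor β, which is why table-lookup CMAC training converges quickly.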
Gray Relational Analysis • It can be viewed as a similarity measure for finite sequences. • The gray relational coefficient between x and wi at the jth element: r(xj, wij) = (Δmin + ζΔmax) / (Δij + ζΔmax) • The reference vector x = (x1,x2,…,xn) • The comparative vector wi = (wi1,wi2,…,win) • Δij = |xj−wij|, Δmax = maxi maxj Δij, Δmin = mini minj Δij, 0 < ζ < 1 (the control factor) • Gray relational grade g: a weighted average of the n coefficients, with weighting factor α ≥ 0 and 0 ≤ g ≤ 1
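A small sketch of the coefficient-and-grade computation. Two simplifying assumptions: Δmin/Δmax are taken over this single pair of vectors rather than over all comparative vectors, and the grade is an equally weighted average standing in for the α-weighted form on the slide.

```python
import numpy as np

def gray_relational_grade(x, w, zeta=0.5):
    """Gray relational coefficients between reference x and comparative w,
    averaged with equal weights into a grade (one common choice)."""
    delta = np.abs(np.asarray(x, float) - np.asarray(w, float))
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0.0:            # identical vectors: perfect similarity
        return 1.0
    r = (d_min + zeta * d_max) / (delta + zeta * d_max)
    return float(r.mean())

g_same = gray_relational_grade([1, 2, 3], [1, 2, 3])   # identical -> 1.0
g_far  = gray_relational_grade([1, 2, 3], [3, 2, 1])   # deviation lowers g
```

Note how the control factor ζ damps the influence of the largest deviation: smaller ζ makes the measure more discriminating.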
SOCMAC • Viewed as an SOM: • The input space of the CMAC can be viewed as a topological structure similar to the output layer of the SOM. • The output vector of a state plays the role of the corresponding connection weight of the SOM. • The output of the state: yk = Σh ck,h·wh, where ck,h = 1 for the Ne addressed hypercubes and 0 otherwise. • The winning state: the state whose output is most similar to the input vector (largest gray relational grade).
SOCMAC (cont.) • The updating rule distributes the learning error over the memory contents addressed by the winning state. • Gray credit assignment: • The original updating rule shares the error equally (1/Ne per hypercube) and cannot determine which memory content is more responsible for the current error; the gray relational coefficient of each addressed hypercube is therefore used to weight its share of the credit.
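The idea can be sketched as below: the error shares are made proportional to gray relational coefficients and normalized to sum to 1, so one update still moves the output by β·error as in the uniform case. The deviation measure used here (each cell's distance to the mean addressed content) is an illustrative assumption, not the paper's exact choice.

```python
import numpy as np

def gray_coefficients(delta, zeta=0.5):
    """Gray relational coefficients from absolute deviations."""
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0.0:
        return np.ones_like(delta)
    return (d_min + zeta * d_max) / (delta + zeta * d_max)

def gray_credit_update(w, idx, y_desired, beta=0.5, zeta=0.5):
    """Gray credit assignment sketch: distribute the learning error over
    the Ne addressed cells in proportion to their gray relational
    coefficients (normalized shares) instead of the uniform 1/Ne."""
    cells = w[idx]
    err = y_desired - cells.sum()
    # deviation of each cell from the mean addressed content
    # (an illustrative responsibility measure, assumed here)
    r = gray_coefficients(np.abs(cells - cells.mean()), zeta)
    w[idx] += beta * err * r / r.sum()
    return w

w = np.zeros(9)
for _ in range(30):
    w = gray_credit_update(w, [1, 4, 7], y_desired=1.0)
```

Because the shares are normalized, convergence speed matches the uniform rule while the error is concentrated on the cells deemed most responsible.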
SOCMAC Procedure • Initialize the memory contents and necessary parameters. • Present an input training vector x(t) to the SOCMAC. • Calculate all state outputs by using (8). • Determine the winning state by using (10). • Update the memory contents by using (12). • If every input training vector has been presented to the SOCMAC, go to step 7; otherwise, go to step 2. • Reduce the learning rate by a small, fixed amount. • Go to step 2 if the selected termination criterion is not satisfied.
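The procedure above can be condensed into a skeleton like the following. Several simplifications are assumed: each state addresses ne randomly chosen cells (a hypothetical addressing scheme), the winner is picked by minimum Euclidean distance rather than the gray relational grade of (10), and the update uses uniform credit for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_socmac(X, n_states, n_cells, ne, epochs=20, beta=0.5):
    """Skeleton of the SOCMAC training procedure (simplified)."""
    dim = X.shape[1]
    w = rng.normal(scale=0.1, size=(n_cells, dim))   # memory contents
    addr = np.stack([rng.choice(n_cells, ne, replace=False)
                     for _ in range(n_states)])      # state -> cell map
    lr = beta
    for _ in range(epochs):
        for x in X:                                  # step 2: present input
            outs = w[addr].sum(axis=1)               # step 3: state outputs
            k = int(np.argmin(np.linalg.norm(outs - x, axis=1)))  # step 4
            w[addr[k]] += lr * (x - outs[k]) / ne    # step 5: update winner
        lr = max(lr - 0.02, 0.0)                     # step 7: reduce rate
    return w, addr

X = np.array([[0.0, 0.0], [1.0, 1.0]])
w, addr = train_socmac(X, n_states=4, n_cells=12, ne=3)
```

Because states share memory cells, updating the winner also moves the outputs of states addressing the same hypercubes, which is exactly the implicit neighborhood effect the SOCMAC relies on.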
Memory Size • There are mx × my states in the input space of the SOCMAC. • The entire memory size Nh is therefore smaller than mx × my, i.e., Nh < mx × my. • Ne represents the generalization parameter (number of layers), Ne ≥ 2. • Example: Nh = Ne × Nb² = 3 × 3² = 27.
Neighborhood Region • the neighborhood function, denoted by Ω(k, k∗), ofthe winning state is defined by:
Simulations • Clustering problems on two artificial datasets and classification problems on five University of California at Irvine (UCI) benchmark datasets. • A performance index (PI): PI = (1/N)·Σd Σxk∈Cd ‖xk − md‖, where md is the center of the dth cluster and N is the number of overall patterns.
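A performance index of this kind can be computed as below; the sketch takes the PI as the mean distance between each pattern and the center of its assigned cluster, though the paper's exact formula may differ in detail.

```python
import numpy as np

def performance_index(X, labels, centers):
    """Mean distance between each pattern and the center of the
    cluster it is assigned to."""
    X = np.asarray(X, float)
    centers = np.asarray(centers, float)
    return float(np.linalg.norm(X - centers[labels], axis=1).mean())

# toy example: 4 patterns, 2 clusters, every pattern 1 unit from its center
X = [[0, 0], [0, 2], [4, 0], [4, 2]]
labels = [0, 0, 1, 1]
centers = [[0, 1], [4, 1]]
pi = performance_index(X, labels, centers)
```

A smaller PI indicates tighter clusters, so it can be used to compare clustering results across methods.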
Experiment 1 • 21 artificial data points (figure)
Experiment 2 • 150 artificial data points (figure)
Conclusions • The new scheme simultaneously has the features of both the SOM and the CMAC. • The neighborhood region need not be defined in advance. • It distributes the learning error into the addressed hypercubes. • The convergence of the learning process has been proved. • Simulation results showed the effectiveness and feasibility of the proposed network.
Comments • Advantage • … • Drawback • ... • Application • …