Self-Organizing Potential Field Network: A New Optimization Algorithm Lu Xu and Tommy Wai Shing Chow TNN, Vol. 21, 2010, pp. 1482–1495 Presenter: Wei-Shen Tai 2010/10/20
Outline • Introduction • Background • SOMA • Self-Organizing Potential Field Network • Simulations and results • Analysis of SOPFN algorithm • Conclusion • Comments
Motivation • Most optimization algorithms • Individuals learn only from the best candidate solution, even if it is far from the global optimum. • They explore a larger search space, but at the expense of convergence rate.
Objective • A new optimization algorithm • Each candidate solution can effectively reach the optimum with a small search space and low computational complexity.
Background • Self-organizing migrating algorithm (SOMA) • Updates every individual by a “migration loop” to generate a series of candidate solutions. • Particle swarm optimization (PSO) • At each time step, every particle moves toward the best position among all particles’ previous positions. • Self-organizing and self-evolving neurons (SOSEN) • Each neuron evolves using simulated annealing (SA) and cooperates with other neurons via a self-organizing operator. • The search space is enlarged by multiple neurons, enhancing the convergence rate for finding the optimum.
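The PSO update mentioned above can be sketched in Python. This is a minimal textbook-style sketch, not code from the paper; the inertia weight `w` and acceleration coefficients `c1`, `c2` are illustrative defaults:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO time step: every particle moves toward its own best
    position (pbest) and the best position found by the whole swarm
    (gbest), with random scaling on each attraction term."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

Because every particle is pulled toward `gbest`, PSO shares the weakness noted in the Motivation slide: when the swarm best is far from the global optimum, all particles still learn from it.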
Self-organizing potential field strategy • The cooperation behavior is a self-organizing procedure in which the neurons within the winning neuron’s neighborhood are trained. • The competition behavior models the network as a potential field, similar to the vector potential field used in mobile robot navigation.
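The potential-field idea borrowed from robot navigation combines an attractive force toward a target with a repulsive force away from an obstacle. A minimal one-step sketch, assuming simple linear attraction and inverse-distance repulsion (the gains `alpha` and `beta` are illustrative, not the paper's):

```python
def potential_step(x, target, obstacle, alpha=0.5, beta=0.5):
    """Move point x one step in a toy potential field: linearly
    attracted to `target`, repelled from `obstacle` with a force
    that grows as the point gets closer to the obstacle."""
    new_x = []
    for xi, ti, oi in zip(x, target, obstacle):
        attract = alpha * (ti - xi)            # pull toward the target
        diff = xi - oi
        repel = beta / diff if diff else beta  # push away from the obstacle
        new_x.append(xi + attract + repel)
    return new_x
```

In SOPFN the target and obstacle roles are played by neurons rather than physical waypoints, but the same attract/repel intuition applies.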
Self-organizing potential field network algorithm • Initialization • Randomize the initial weights of M × N neurons. • Construction of the Potential Field • Target neuron • Obstacle neuron • 1-D Weight Updating • For every neuron i, randomly choose an integer k ∈ [1, D]. • Self-adaption: reassign the target neuron c and obstacle neuron r. • Stop when the stopping criteria are satisfied; otherwise, go to step 3.
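The loop above can be sketched as a skeleton. This is a hedged reconstruction of the step sequence only: the concrete update rule below (move toward the target neuron `c`, slightly away from the obstacle neuron `r`) is an illustrative stand-in for the paper's actual equations, and the step sizes are assumptions:

```python
import random

def sopfn_sketch(f, M, N, D, bounds, generations=100):
    """Skeleton of the SOPFN loop: randomize an M x N grid of
    D-dimensional neuron weights, pick the best neuron as target c
    and the worst as obstacle r, then update one randomly chosen
    dimension k of every other neuron (1-D weight updating)."""
    lo, hi = bounds
    # Step 1: initialization of M*N neuron weight vectors
    weights = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(M * N)]
    for _ in range(generations):
        # Step 2: construct the potential field (self-adaption of c and r)
        fitness = [f(w) for w in weights]
        c = fitness.index(min(fitness))   # target neuron (best fitness)
        r = fitness.index(max(fitness))   # obstacle neuron (worst fitness)
        # Step 3: 1-D weight updating for every other neuron
        for i in range(M * N):
            if i in (c, r):
                continue
            k = random.randrange(D)       # one random dimension k in [1, D]
            step = random.random()
            weights[i][k] += step * (weights[c][k] - weights[i][k]) \
                             - 0.1 * step * (weights[r][k] - weights[i][k])
            weights[i][k] = min(max(weights[i][k], lo), hi)
    return min(weights, key=f)
```

Updating only a single dimension per neuron per generation keeps each step cheap, which is consistent with the paper's emphasis on low computational complexity.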
Conclusion • SOPFN • A new evolutionary algorithm that models the search space as a self-organizing potential field. • In the competitive behavior • The target and obstacle neurons are identified to speed up convergence and increase the probability of escaping from local optima. • In the cooperative behavior • The winner’s neighboring neurons are updated to generate new weights at each generation.
Comments • Advantage • The proposed model effectively finds the optimum with low computational complexity and a high convergence speed. • The search space is constrained to a fixed neural network, and the candidate solutions are made more diverse by the self-organizing potential field strategy. • Drawback • The map size is a crucial factor in determining the search space and computational complexity. Nevertheless, a performance comparison across different map sizes was not discussed in this paper. • Application • Optimization problems