This presentation introduces the Generalised Mapping Regressor (GMR) neural network for solving inverse and discontinuous mapping problems, which can have multiple or even infinitely many solutions. The GMR model is explained, its four phases (learning, linking, object merging, recalling) are described, its behaviour is illustrated through a series of experiments, and its acceleration with tree search techniques is explored.
The Generalised Mapping Regressor (GMR) neural network for inverse discontinuous problems
Student: Chuan LU
Promotor: Prof. Sabine Van Huffel
Daily Supervisor: Dr. Giansalvo Cirrincione
Mapping Approximation Problem
Feedforward neural networks:
• are universal approximators of nonlinear continuous functions (many-to-one, one-to-one)
• don't yield multiple solutions
• don't yield infinite solutions
• don't approximate mapping discontinuities
Inverse and Discontinuous Problems
• Mapping: multi-valued, complex structure.
• Poor representation of the mapping by the least-squares approach (sum-of-squares error function) for feedforward neural networks: the network converges to the conditional average of the target data.
• Mapping with discontinuities.
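A minimal numerical illustration of this failure (my own sketch, not from the slides): for y = x² the inverse mapping has two branches, x = +√y and x = -√y, and a sum-of-squares regressor converges to their conditional average, which is not a solution at all.

```python
import numpy as np

# Inverse problem: given y = x**2, recover x from y.
# The true inverse is two-valued: x = +sqrt(y) and x = -sqrt(y).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 2000)
y = x ** 2

# A sum-of-squares regressor of x on y converges to E[x | y];
# approximate that conditional average by binning y:
bins = np.linspace(0.0, 1.0, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (y >= lo) & (y < hi)
    print(f"y in [{lo:.1f}, {hi:.1f}):  E[x|y] ~ {x[mask].mean():+.3f}")
# Every bin averages to ~0: the mean of the two true branches,
# not itself a solution of x**2 = y.
```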
Mixture-of-Experts (ME)
• Jacobs and Jordan: mixture-of-experts. It partitions the solution between several networks; a gating network blends the experts' kernels (or selects one, winner-take-all) to form the output.
• Bishop (ME extension): uses a separate network to determine the parameters of each kernel, with a further network to determine the mixing coefficients.
(diagram: the input feeds Networks 1, 2, 3 and the gating network, whose coefficients blend the experts' outputs)
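A toy forward pass for the mixture-of-experts idea (an illustrative sketch: the linear experts, the linear gating network and the random untrained weights are all assumptions, not the cited architectures):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
n_experts, d_in, d_out = 3, 2, 1
W_g = rng.normal(size=(d_in, n_experts))          # gating-network weights
W_e = rng.normal(size=(n_experts, d_in, d_out))   # one weight matrix per expert

def moe_forward(x):
    g = softmax(x @ W_g)                  # mixing coefficients, sum to 1
    y = np.stack([x @ W_e[i] for i in range(n_experts)], axis=1)
    return np.einsum("bk,bkd->bd", g, y)  # blend the expert outputs

print(moe_forward(rng.normal(size=(4, d_in))).shape)  # (4, 1)
```

With a hard arg-max over g instead of the soft blend, the same structure gives the winner-take-all variant.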
ME MLP Example #1 (figure)
ME MLP Example #2 (figure)
ME MLP Example #3 (figure)
ME MLP Example #4 (figure)
Generalised Mapping Regressor (GMR) (G. Cirrincione and M. Cirrincione, 1998)
Characteristics:
• approximates every kind of function or relation
• input: a collection of components of x and y; output: estimation of the remaining components
• outputs all solutions, mapping branches, equilevel hypersurfaces
GMR Basic Ideas
• function approximation turned into pattern recognition: clusters in the Z (augmented) space correspond to mapping branches
• unsupervised learning
• coarse-to-fine learning: incremental, competitive, based on mapping recovery (curse of dimensionality)
• topological neuron linking: distance and direction tests
• linking tracking: branches, contours
• open architecture
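Concretely, the augmented space can be built by stacking each training pair into one vector z = (x, y), so the relation is learned unsupervised and any subset of components can be clamped at recall time (a sketch of my own):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200)[:, None]
y = x ** 2
Z = np.hstack([x, y])   # shape (200, 2): the unsupervised training set in Z
```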
GMR: Four Phases
INPUT (training set) → Learning → Linking → Object Merging → Recalling
(diagram: learning produces a pool of neurons; linking adds links; objects 1, 2 and 3 are merged; recall returns branch 1 and branch 2)
EXIN Segmentation Neural Network (EXIN SNN) (G. Cirrincione, 1998)
• clustering in the input/weight space
• vigilance threshold on the input x
(diagram: a new neuron is created at the input, w4 = x4)
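A simplified stand-in for this kind of clustering (my own sketch: the creation and adaptation rules of the real EXIN SNN are not reproduced here, and the learning rate is an assumption):

```python
import numpy as np

def vigilance_cluster(Z, rho, lr=0.1):
    """Incremental competitive clustering with vigilance threshold rho."""
    W = [Z[0].copy()]                        # first neuron at the first sample
    for z in Z[1:]:
        d = [np.linalg.norm(z - w) for w in W]
        win = int(np.argmin(d))              # winner-take-all competition
        if d[win] > rho:
            W.append(z.copy())               # vigilance failed: new neuron w = z
        else:
            W[win] += lr * (z - W[win])      # vigilance passed: adapt the winner
    return np.array(W)

Z = np.random.default_rng(2).normal(size=(500, 2))
print(len(vigilance_cluster(Z, rho=0.8)), "neurons")
```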
GMR Learning (coarse quantization)
• EXIN SNN in the Z (augmented) space
• high vigilance radius rz (say r1)
(diagram: each resulting neuron is a branch/object neuron)
GMR Learning (domain setting)
• production phase
• Voronoi sets in the Z (augmented) space
GMR Learning (fine quantization)
• secondary EXIN SNNs, one per Voronoi set (training subsets TS#1 … TS#5) in the Z (augmented) space
• rz = r2 < r1
• other levels are possible
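Reusing vigilance_cluster from the earlier sketch, one possible reading of the two-level scheme (an assumption, not the exact EXIN SNN procedure): a coarse pass with radius r1 produces the object neurons, and a finer pass with r2 < r1 runs separately on each object's Voronoi set.

```python
import numpy as np  # requires vigilance_cluster from the sketch above

def coarse_to_fine(Z, r1, r2):
    coarse = vigilance_cluster(Z, rho=r1)    # object neurons (coarse VQ)
    # assign every sample to the Voronoi set of its nearest object neuron
    owner = np.argmin(((Z[:, None, :] - coarse[None]) ** 2).sum(-1), axis=1)
    # one secondary (fine) quantization per non-empty Voronoi set
    fine = [vigilance_cluster(Z[owner == i], rho=r2)
            for i in range(len(coarse)) if (owner == i).any()]
    return coarse, fine
```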
GMR Coarse-to-Fine Learning (example)
(figure: object neurons with their Voronoi sets and the fine VQ neurons inside them)
Task 1: GMR Linking (domain setting)
• Voronoi set: setup of the neuron radius (domain variable)
(diagram: neuron i with asymmetric radius ri)
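A symmetric version of the radius setup (sketch; GMR's radius is asymmetric, i.e. direction-dependent, which is not modelled here): each neuron's domain radius is the largest distance to the training samples of its own Voronoi set.

```python
import numpy as np

def neuron_radii(Z, W):
    # Voronoi assignment: nearest neuron for every training sample
    owner = np.argmin(((Z[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
    return np.array([
        np.linalg.norm(Z[owner == i] - w, axis=1).max()
        if (owner == i).any() else 0.0
        for i, w in enumerate(W)
    ])
```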
Task 2: GMR Linking (k-nn branch and bound search technique)
For one TS presentation zi, in weight space:
• find the linking candidates (w3, w4, w5) and the linking direction
• distance test
• direction test
• create a link or strengthen a link
(diagram: weights w1 … w5 with distances d1 … d5 in weight space)
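One linking step might look as follows (an illustrative reading: the exact distance and direction criteria of GMR are not specified on the slide, so the overlapping-radii test and the cosine threshold cos_min are assumptions):

```python
import numpy as np

def link_step(z, W, links, radii, cos_min=0.0):
    d = np.linalg.norm(W - z, axis=1)
    order = np.argsort(d)
    win = order[0]                              # winner for this TS sample
    for j in order[1:]:
        if d[j] > radii[win] + radii[j]:        # distance test (assumed form)
            continue
        u = (W[j] - W[win]) / (np.linalg.norm(W[j] - W[win]) + 1e-12)
        v = (z - W[win]) / (d[win] + 1e-12)
        if u @ v >= cos_min:                    # direction test: candidate lies
            links[win, j] += 1                  # roughly toward the sample;
            links[j, win] += 1                  # create or strengthen the link
```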
Branch and Bound Accelerated Linking
• neuron tree constructed during the learning phase (multilevel EXIN SNN learning)
• methods for the linking-candidate step (k-nearest-neighbours computation):
• ε-BnB: keep candidates with distance < ε·d1 (ε: linking factor, predefined)
• k-BnB: k predefined
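A generic k-nn branch and bound over a neuron tree (sketch: the tree node layout is an assumption, and the pruning uses only the triangle inequality):

```python
import heapq
import numpy as np

def knn_bnb(z, node, k, heap=None):
    """k nearest neurons; node = (center, radius, children, leaves),
    where leaves is a list of (index, weight) pairs."""
    heap = [] if heap is None else heap         # max-heap via (-dist, idx)
    center, radius, children, leaves = node
    # prune: no point of this subtree can beat the current k-th best
    if len(heap) == k and np.linalg.norm(z - center) - radius > -heap[0][0]:
        return heap
    for idx, w in leaves:
        d = np.linalg.norm(z - w)
        if len(heap) < k:
            heapq.heappush(heap, (-d, idx))
        elif d < -heap[0][0]:
            heapq.heapreplace(heap, (-d, idx))
    for child in children:
        knn_bnb(z, child, k, heap)
    return heap
```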
GMR Linking
• branch and bound in linking, experimental results: 83 % of the flops are saved
Branch and Bound (cont.)
Apply branch and bound in the learning phase (labelling):
• tree construction: k-means or EXIN SNN
• experimental results (in the 3-D example): 50 % of the labelling flops are saved
GMR Linking Example
(figure: neurons joined by links)
GMR Recalling Example
• level one neurons: input within their domain
• level two neurons: only connected ones
• level zero neurons: isolated (noise)
(diagram: a level 1 neuron on branch 1 and a level 2 neuron on branch 2)
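A toy version of this recall logic (sketch: testing the domain radius in the clamped input subspace is a simplification of mine, as is the dimension bookkeeping via in_dims):

```python
import numpy as np

def recall(x_in, W, radii, links, in_dims):
    # level one: the clamped input falls inside the neuron's domain
    d_in = np.linalg.norm(W[:, in_dims] - x_in, axis=1)
    level1 = np.where(d_in <= radii)[0]
    # level two: neurons reached only through links from level one
    level2 = set()
    for i in level1:
        level2.update(np.where(links[i] > 0)[0])
    level2.difference_update(level1)        # everything else stays level zero
    out_dims = [d for d in range(W.shape[1]) if d not in in_dims]
    return W[level1][:, out_dims], sorted(level2)
```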
Experiments
• spiral of Archimedes: ρ = aθ (a = 1)
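The spiral training set can be generated along these lines (point count and angular range are assumptions of this sketch):

```python
import numpy as np

a = 1.0
theta = np.linspace(0.0, 4 * np.pi, 1000)
rho = a * theta                                  # spiral of Archimedes
X = np.column_stack([rho * np.cos(theta),        # Cartesian training points,
                     rho * np.sin(theta)])       # i.e. the Z-space samples
```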
Experiments
• sparse regions: further normalizing + higher mapping resolution
Experiments
• noisy data
Experiments
• GMR mapping of 8 spheres in a 3-D scene
• contours: links among level one neurons
Conclusions
GMR is able to:
• solve inverse and discontinuous problems
• approximate every kind of mapping
• yield all the solutions and the corresponding branches
GMR can be accelerated by applying tree search techniques.
GMR needs:
• interpolation techniques
• kernels or projection techniques for high-dimensional data
• adaptive parameters
Thank you ! (shi-a shi-a)
GMR Recall
• restricted distance r1
• level one test
• linking tracking
(diagram: all neurons w1 … w8 start at level zero, li = 0, bi = 0; the winner w1 passes the level one test and gets l1 = 1, b1 = 1, i.e. level one on branch 1; linking tracking raises its connected neuron w3 to level two on the same branch, l3 = 2, b3 = 1)
GMR Recall (continued)
• level one test
• linking tracking
(diagram: with restricted distance r2, neuron w2 also passes the level one test, first as l2 = 1, b2 = 1; tracking its links crosses into another branch, so it is relabelled l2 = 1, b2 = 2, a branch cross)
GMR Recall (continued)
… until completion of the candidates
• level one neurons: input within their domain
• level two neurons: only connected ones
• level zero neurons: isolated (noise)
• clipping
(diagram: two branches are finally recovered; every neuron carries its level l and branch b label, e.g. l5 = 2, b5 = 4 for a level-two neuron on branch 4)
GMR Recall (output)
• output = weight complements of the level one neurons
• output interpolation
(diagram: the level-one neurons found for this input, e.g. w1, w2, w4, w6, supply the output components)
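A sketch of this output step (the per-branch averaging below merely stands in for the interpolation technique, which the conclusions list as still needed; branch_of and out_dims are my bookkeeping):

```python
import numpy as np

def outputs_from_level_one(W, level1, branch_of, out_dims):
    # weight complements: the output components of each level-one neuron,
    # grouped by branch so that every branch yields its own solution
    sols = {}
    for i in level1:
        sols.setdefault(branch_of[i], []).append(W[i][out_dims])
    return {b: np.mean(v, axis=0) for b, v in sols.items()}
```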