Cascade-Correlation By Kranthi & Sudhan
Contents • Motivation • Recollecting back-prop • Cascading architecture • Learning algorithm • Example • Comparison with other systems
Motivation • Curse of dimensionality • Simple network • Determines its own structure • Fast learner
Recollect • What's back-propagation? • Problems with this algorithm • How does CC solve these problems?
Cascade-Correlation CC combines two key ideas: • Cascade architecture • Learning algorithm
Cascade Architecture • Begins with some inputs and one or more outputs. • Every input is connected to every output. • Bias is permanently set to +1.
Stage 1: inputs x0 (the +1 bias), x1, x2 are fully connected to outputs y1, y2; there are no hidden units yet.
Stage 2: a first hidden unit z1 is added, fed by all inputs and feeding both outputs.
Stage 3: a second hidden unit z2 is added, fed by the inputs and by z1, cascading on top of the first. A code sketch of this cascading follows.
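To make the cascading in the diagrams concrete, here is a minimal NumPy sketch of the forward pass (function and variable names such as `forward` and `hidden_weights` are ours, not from the paper): each hidden unit receives the original inputs plus the activations of all previously installed hidden units, and the output units see everything.

```python
import numpy as np

def forward(x, hidden_weights, output_weights):
    """Cascade forward pass.

    x              : (n_inputs,) input vector; x[0] is the +1 bias.
    hidden_weights : list of frozen weight vectors; the k-th has
                     n_inputs + k entries (inputs plus earlier hidden units).
    output_weights : (n_outputs, n_inputs + n_hidden) trainable matrix.
    """
    acts = list(x)                             # everything visible so far
    for w in hidden_weights:
        acts.append(np.tanh(np.dot(w, acts)))  # each unit cascades on all earlier ones
    acts = np.array(acts)
    return output_weights @ acts, acts
```

With an empty `hidden_weights` list this is exactly the stage-1 network; appending one frozen weight vector gives stage 2, and so on.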
Algorithm • Train stage 1. If the error is not acceptable, proceed. • Train stage 2. If the error is not acceptable, proceed. • And so on.
Algorithm 1. CC starts with a minimal network consisting only of an input and an output layer. Both layers are fully connected. 2. Train all the connections ending at an output unit with a usual learning algorithm until the error of the net no longer decreases.
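A hedged sketch of step 2 under the same assumed NumPy setup: plain batch gradient descent on a linear output layer stands in for the quickprop algorithm Fahlman and Lebiere actually use.

```python
def train_outputs(X, T, output_weights, lr=0.1, epochs=200):
    """Train only the connections ending at the output units.

    X : (n_patterns, n_features) inputs plus any frozen hidden activations.
    T : (n_patterns, n_outputs) target values.
    """
    for _ in range(epochs):
        E = X @ output_weights.T - T                    # residual error per pattern and output
        output_weights = output_weights - lr * (E.T @ X) / len(X)
    return output_weights, E                            # E is reused to train candidates
```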
Algorithm 3. Generate the so-called candidate units. Every candidate unit is connected with all input units and with all existing hidden units. There are no connections between the pool of candidate units and the output units.
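One possible way to realize the candidate pool (the names and the pool size are our assumptions):

```python
def make_candidates(n_fan_in, n_candidates=8, seed=0):
    """A pool of candidate units with random incoming weights.

    Each candidate sees all n_fan_in values (inputs plus existing
    hidden units); none of the candidates is connected to the outputs yet.
    """
    rng = np.random.default_rng(seed)
    return [rng.normal(scale=0.5, size=n_fan_in) for _ in range(n_candidates)]
```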
Algorithm 4. Try to maximize the correlation S between the activation of the candidate units and the residual error of the net by training all the links leading to a candidate unit. Here S = Σ_o | Σ_p (V_p − V̄)(E_p,o − Ē_o) |, where V_p is the candidate's activation on pattern p and E_p,o the residual error at output o. Learning takes place with an ordinary learning algorithm; training stops when the correlation score no longer improves.
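The score S from step 4 can be computed directly from the candidate's activations and the residual errors; a sketch:

```python
def correlation_score(V, E):
    """Fahlman's score S for one candidate unit.

    V : (n_patterns,) candidate activations, e.g. V = np.tanh(I @ w).
    E : (n_patterns, n_outputs) residual errors from training the outputs.
    Returns the sum over outputs of |covariance between V and that output's error|.
    """
    Vc = V - V.mean()                  # V_p - V-bar
    Ec = E - E.mean(axis=0)            # E_p,o - E-bar_o
    return np.abs(Vc @ Ec).sum()
```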
Algorithm 5. Choose the candidate unit with the maximum correlation, freeze its incoming weights, and add it to the net. • To maximize S, we compute the partial derivative of S with respect to each of the candidate unit's incoming weights and adjust the weights by gradient ascent.
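The derivative the slide refers to has a closed form in the paper; a sketch for a tanh candidate (names are ours):

```python
def correlation_gradient(I, w, E):
    """dS/dw for one candidate's incoming weights (gradient-ascent direction).

    I : (n_patterns, n_fan_in) values the candidate sees on each pattern.
    w : (n_fan_in,) the candidate's incoming weights.
    E : (n_patterns, n_outputs) residual errors.
    Implements dS/dw_i = sum_{p,o} sigma_o * (E_p,o - E-bar_o) * f'(p) * I_p,i,
    where sigma_o is the sign of output o's correlation with the candidate.
    """
    V = np.tanh(I @ w)
    Ec = E - E.mean(axis=0)
    sigma = np.sign((V - V.mean()) @ Ec)   # one sign per output
    fprime = 1.0 - V ** 2                  # derivative of tanh
    return ((Ec @ sigma) * fprime) @ I
```

Each candidate in the pool is trained by repeated ascent steps with this gradient; the one with the highest `correlation_score` wins.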
Algorithm 6. To turn the candidate unit into a hidden unit, generate links between the selected unit and all the output units. Since the weights leading to the new hidden unit are frozen, a new permanent feature detector is obtained. Loop back to step 2. 7. This is repeated until the overall error of the net falls below a given value.
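Step 6 in the same sketch: the winner's incoming weights become a permanent part of the cascade, and one fresh trainable column connects it to every output (the 0.1 scale for the new weights is our assumption).

```python
def install_candidate(hidden_weights, output_weights, w_best, seed=0):
    """Freeze the winning candidate and wire it to every output unit."""
    rng = np.random.default_rng(seed)
    hidden_weights.append(w_best)          # new permanent feature detector
    new_col = rng.normal(scale=0.1, size=(output_weights.shape[0], 1))
    return hidden_weights, np.hstack([output_weights, new_col])
```

Control then returns to step 2: `train_outputs` runs again on the enlarged feature set, and the whole cycle repeats until the overall error falls below the target.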
Example: the two-spirals problem
Evolution of a 12-hidden-unit solution to the two-spirals problem
Comparing CC with other learning algorithms • No need to predict the size, depth, and connectivity pattern of the network • Learns fast, unlike some other algorithms • At any time we train only one layer of weights, so results can be cached
Experimental results for the handwritten digits dataset
Experimental results for the severe head injury patients dataset
Experimental results for the land satellite image dataset
Conclusion • The principal difference between CC and other algorithms is the dynamic creation of hidden units • This speeds up the learning process considerably
References • Scott Fahlman and Christian Lebiere, "The Cascade-Correlation Learning Architecture". • D. Michie, D. J. Spiegelhalter, and C. C. Taylor (eds), Machine Learning, Neural and Statistical Classification.
Thank you