A new crossover technique in Genetic Programming
Janet Clegg, Intelligent Systems Group, Electronics Department
This presentation
Describe basic evolutionary optimisation
Overview of failed attempts at crossover methods
Describe the new crossover technique
Results from testing on two regression problems
Start by choosing a set of random trial solutions (population)
Each trial solution is evaluated to give a fitness / cost value
[Figure: population of 24 trial solutions with fitness values ranging from 0.2 to 2.5]
Parent selection
[Figure: a mother is selected from a group with fitness values 2.1, 0.9, 2.1, 1.3, 0.7, and a father from a group with fitness values 1.7, 1.4, 0.6, 1.7, 1.5]
Perform crossover
[Figure: the two parents are combined to produce child 1 and child 2]
Mutation
The probability of mutation is small (say 0.1)
This provides a new population of solutions – the next generation
Repeat generation after generation:
1 select parents
2 perform crossover
3 mutate
until converged (a minimal sketch of this loop follows)
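As an illustration only, here is a minimal sketch of that generational loop in Python; the encoding, the population size, the selection scheme and all names here are hypothetical, and the crossover is the floating point blend described later in the talk.

```python
import random

POP_SIZE, P_MUT, N_GENES = 30, 0.1, 8   # illustrative settings

def fitness(ind):
    # placeholder cost: sum of squares (lower is better)
    return sum(g * g for g in ind)

def select_parent(pop):
    # pick the fittest of a few randomly chosen candidates
    return min(random.sample(pop, 3), key=fitness)

def crossover(mum, dad):
    # each offspring gene lies at a random point between the parents
    return [m + random.random() * (d - m) for m, d in zip(mum, dad)]

def mutate(ind):
    # with small probability, replace a gene by a fresh random value
    return [random.uniform(-1, 1) if random.random() < P_MUT else g
            for g in ind]

pop = [[random.uniform(-1, 1) for _ in range(N_GENES)]
       for _ in range(POP_SIZE)]
for gen in range(100):                  # repeat generation after generation
    pop = [mutate(crossover(select_parent(pop), select_parent(pop)))
           for _ in range(POP_SIZE)]
best = min(pop, key=fitness)
```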
Two types of evolutionary optimisation
Genetic Algorithms (GA) – optimise some quantity by varying parameters which take numerical values
Genetic Programming (GP) – optimises some quantity by varying parameters which are functions / parts of computer code / circuit components
Representation
Traditional GAs – binary representation, e.g. 1011100001111
Floating point GAs – perform better than binary, e.g. 7.2674554
Genetic Programs (GP) – tree representation: nodes represent functions whose inputs are the sub-trees attached below the node
Crossover in a binary GA
Mother 1 0 0 0 0 0 1 0 = 130
Father 0 1 1 1 1 0 1 0 = 122
Child 1 1 1 1 1 1 0 1 0 = 250
Child 2 0 0 0 0 0 0 1 0 = 2
(here the crossover point falls after the first bit)
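A minimal sketch of this single-point binary crossover (the helper name is hypothetical; the crossover point is chosen at random):

```python
import random

def binary_crossover(mother, father):
    # swap the tails of the two bit strings at a random point
    point = random.randint(1, len(mother) - 1)
    return (mother[:point] + father[point:],
            father[:point] + mother[point:])

mum, dad = "10000010", "01111010"   # 130 and 122
c1, c2 = binary_crossover(mum, dad)
print(int(c1, 2), int(c2, 2))       # gives 250 and 2 when point == 1
```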
Crossover in a floating point GA
[Figure: number line from the minimum to the maximum parameter value, with the mother's and father's values marked]
Offspring chosen as a random point between mother and father
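A sketch of that floating point crossover, the same blend used in the loop sketch earlier (one gene per parameter is an assumption):

```python
import random

def fp_crossover(mother, father):
    # each offspring value lies at a uniformly random point between the parents
    return [m + random.random() * (f - m) for m, f in zip(mother, father)]

child = fp_crossover([7.2674554], [3.1415926])
# child[0] lies somewhere between 3.1415926 and 7.2674554
```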
Traditional method of crossover in a GP
[Figure: sub-trees of the mother and father trees are swapped to create child 1 and child 2]
Motivation for this work
Tree crossover in a GP does not always perform well. Angeline, and Luke and Spencer, compared:
(1) the performance of tree crossover
(2) simple mutation of the branches
The difference in performance was statistically insignificant. Consequently some people implement GPs with no crossover – mutation only.
Motivation for this work
In a GP many people do not use crossover, so mutation is the more important operator. In a GA the crossover operator contributes a great deal to its performance – mutation is a secondary operator.
AIM: find a crossover technique for GP which works as well as the crossover in a GA.
Cartesian Genetic Programming (CGP)
Julian Miller introduced CGP. It replaces the tree representation with directed graphs, represented by a string of integers. The CGP representation will be explained within the first test problem. CGP uses mutation only – no crossover.
First simple test problem
A simple regression problem: finding the function to best fit data taken from f(x) = x² + 2x + 1. The GP should find this exact function as the optimal fit.
The traditional GP method for this problem
Set of functions {+, -, *} and terminals {1, x}
Initial population created by randomly choosing functions and terminals within the tree structures
[Figure: example tree with * at the root, (1 - x) on one branch and (x * x) on the other, representing (1-x) * (x*x)]
Crossover by randomly swapping sub-branches of the parent trees
[Figure: sub-branches of the mother and father trees are swapped to create child 1 and child 2]
CGP representation
Set of functions – each function identified by an integer
Set of terminals – each terminal identified by an integer
Creating the initial population
Genotype so far: 2 0 1 0 1 1 (nodes 2 and 3)
The first integer is a random choice of function: 0 (+), 1 (-) or 2 (*)
The next two integers are a random choice of terminals: 0 (the constant 1) or 1 (x)
The next integers are a random choice of inputs for the function, from the set 0 (1), 1 (x) or node 2
Creating the initial population
Genotype: 2 0 1 0 1 1 1 3 1 0 2 3 2 4 1 5 (nodes 2-6, then the output gene)
Each node's inputs are a random choice from the terminals and nodes, i.e. from 0 1 2 3 4 5
The output is a random choice from the terminals and all nodes, i.e. from 0 1 2 3 4 5 6
Genotype: 2 0 1 0 1 1 1 3 1 0 2 3 2 4 1 5, output = node 5
[Figure: the decoded graph – node 5 adds node 2 (1*x) and node 3 (x+x)]
(1*x) + (x+x) = 3x (a decoding sketch follows)
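To make the decoding concrete, here is a minimal sketch (not the author's code) that evaluates this genotype, assuming functions 0 (+), 1 (-), 2 (*), terminals 0 (the constant 1) and 1 (x), nodes numbered from 2, and a final gene selecting the output:

```python
FUNCS = {0: lambda a, b: a + b,    # 0 is +
         1: lambda a, b: a - b,    # 1 is -
         2: lambda a, b: a * b}    # 2 is *

def eval_cgp(genes, x):
    # values[0] is the constant 1, values[1] is x; nodes append from index 2
    values = [1.0, x]
    *nodes, out = genes
    for i in range(0, len(nodes), 3):
        f, a, b = nodes[i:i + 3]
        values.append(FUNCS[f](values[a], values[b]))
    return values[out]

genes = [2, 0, 1, 0, 1, 1, 1, 3, 1, 0, 2, 3, 2, 4, 1, 5]
print(eval_cgp(genes, 2.0))   # 6.0, i.e. 3x at x = 2
```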
Run the CGP with test data taken from the function
Population size 30, with 28 offspring created at each generation
Mutation only to begin with
Fitness is the sum of squared differences between the data and the function the genotype encodes (a sketch follows)
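A sketch of that fitness measure, reusing the hypothetical eval_cgp above and assuming sample points xs with target values ys:

```python
def fitness(genes, xs, ys):
    # sum of squared differences between the data and the evolved function
    return sum((eval_cgp(genes, x) - y) ** 2 for x, y in zip(xs, ys))
```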
Result
Genotype: 0 0 1 2 2 1 1 2 2 0 3 2 0 5 1 5, output = node 5
[Figure: the decoded graph]
= (1+x)*x + (1+x) = x² + 2x + 1
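Feeding this result genotype through the same hypothetical evaluator confirms the decode:

```python
genes = [0, 0, 1, 2, 2, 1, 1, 2, 2, 0, 3, 2, 0, 5, 1, 5]
print(eval_cgp(genes, 2.0))   # 9.0, i.e. x**2 + 2*x + 1 at x = 2
```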
Statistical analysis of the GP
Any two runs of a GP (or GA) will not be exactly the same. To analyse the convergence of the GP we need many runs. All the following graphs depict the average convergence over 4000 runs.
Introducing some crossover
Pick a random crossover point (here after the seventh integer):
Parent 1: 0 0 1 2 2 1 1 2 2 0 3 2 0 5 1 5
Parent 2: 2 0 1 0 1 1 1 3 1 0 2 3 2 4 1 5
Child 1: 0 0 1 2 2 1 1 3 1 0 2 3 2 4 1 5
Child 2: 2 0 1 0 1 1 1 2 2 0 3 2 0 5 1 5
Random crossover point, but it must fall between the nodes:
Parent 1: 0 0 1 2 2 1 1 2 2 0 3 2 0 5 1 5
Parent 2: 2 0 1 0 1 1 1 3 1 0 2 3 2 4 1 5
Child 1: 0 0 1 2 2 1 1 3 1 0 2 3 2 4 1 5
Child 2: 2 0 1 0 1 1 1 2 2 0 3 2 0 5 1 5
[Figure: the two offspring genotypes decoded as graphs built from +, - and * nodes over the terminals 1 and x]
Pick a random node along the string and swap this single node:
Parent 1: 0 0 1 2 2 1 1 2 2 0 3 2 0 5 1 5
Parent 2: 2 0 1 0 1 1 1 3 1 0 2 3 2 4 1 5
Child 1: 0 0 1 2 2 1 1 3 1 0 2 3 2 4 1 5
Child 2: 2 0 1 0 1 1 1 2 2 0 3 2 0 5 1 5
Each integer in the child randomly takes its value from either the mother or the father:
Parent 1: 0 0 1 2 2 1 1 2 2 0 3 2 0 5 1 5
Parent 2: 2 0 1 0 1 1 1 3 1 0 2 3 2 4 1 5
Child 1: 2 0 1 0 2 1 1 2 1 0 2 2 2 5 1 5
Child 2: 0 0 1 2 1 1 1 3 2 0 3 3 0 4 1 5
(a sketch of these trial operators follows)
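As an illustration, minimal sketches of these four trial operators on the integer strings – random point, point restricted to node boundaries, single-node swap, and gene-by-gene random pick. The helper names are hypothetical; the genuinely new technique comes later in the talk.

```python
import random

def one_point(p1, p2, step=1):
    # step=1: any crossover point; step=3: point restricted to node boundaries
    point = random.randrange(step, len(p1) - 1, step)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def swap_node(p1, p2):
    # exchange the three genes of one randomly chosen node
    i = 3 * random.randrange((len(p1) - 1) // 3)
    return (p1[:i] + p2[i:i + 3] + p1[i + 3:],
            p2[:i] + p1[i:i + 3] + p2[i + 3:])

def uniform(p1, p2):
    # each gene taken at random from either parent (children are complements)
    picks = [random.random() < 0.5 for _ in p1]
    c1 = [a if t else b for t, a, b in zip(picks, p1, p2)]
    c2 = [b if t else a for t, a, b in zip(picks, p1, p2)]
    return c1, c2
```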
Comparison with a random search
GP with no crossover performs better than any of the trial crossovers here. How much better than a completely random search is it? The only means by which it improves on a random search are parent selection and mutation.
Comparison with a random search
GP converges in 58 generations; random search takes 73 generations.
GA performance compared with a completely random search
GA tested on a large problem – a random search would have involved searching through 150,000,000 data points. The GA reached the solution after testing 27,000 data points (average convergence of 5000 GA runs). The probability of a random search reaching the solution within 27,000 trials is 27,000 / 150,000,000 = 0.00018!
[Figure: tree crossover written in nested-function notation]
Mother: f1{ f2[ f4(x1,x2), f5(x3,x4) ], f3[ f6(x5,x6), f7(x7,x8) ] }
Father: g1{ g2[ g4(y1,y2), g5(y3,y4) ], g3[ g6(y5,y6), g7(y7,y8) ] }
Child: f1{ f2[ f4(x1,x2), f5(x3,x4) ], f3[ f6(x5,x6), g7(y7,y8) ] } – the sub-tree g7(y7,y8) has replaced f7(x7,x8)
[Figure: sub-trees g(x1) and f(x2) with g(x1) = f(x2) – Good!]
[Figure: sub-trees g(x1) and f(x2) swapped to give g(x2) and f(x1) – Good!]