
Biologically Inspired Computing: Evolutionary Algorithms: Some Details and Examples

Biologically Inspired Computing: Evolutionary Algorithms: Some Details and Examples. This is lecture 4 of 'Biologically Inspired Computing'. Contents: steady-state and generational EAs, basic ingredients, example run-throughs on the TSP, example encodings, hillclimbing, landscapes.


Presentation Transcript


  1. Biologically Inspired Computing: Evolutionary Algorithms: Some Details and Examples. This is lecture 4 of 'Biologically Inspired Computing'. Contents: steady-state and generational EAs, basic ingredients, example run-throughs on the TSP, example encodings, hillclimbing, landscapes.

  2. A Standard Evolutionary Algorithm. The algorithm whose pseudocode is on the next slide is a steady-state, replace-worst EA with tournament selection, using mutation but no crossover. Its parameters are popsize, tournament size and mutation rate. It can be applied to any problem; the details glossed over are all problem-specific.

  3. A steady-state, mutation-only, replace-worst EA with tournament selection
     0. Initialise: generate a population of popsize random solutions and evaluate their fitnesses.
     1. Run Select to obtain a parent solution X.
     2. With probability mute_rate, mutate a copy of X to obtain a mutant M (otherwise M = X).
     3. Evaluate the fitness of M.
     4. Let W be the current worst in the population (BTR: break ties randomly). If M is not less fit than W, then replace W with M (otherwise do nothing).
     5. If a termination condition is met (e.g. we have done 10,000 evaluations), stop. Otherwise go to 1.
     Select: randomly choose tsize individuals from the population; let X be the one with the best fitness (BTR) and return X.
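A minimal Python sketch of this steady-state EA, assuming fitness is to be maximised; random_solution, fitness and mutate are problem-specific placeholders you would supply yourself, and the parameter defaults are illustrative rather than taken from the slides.

    import random

    def steady_state_ea(random_solution, fitness, mutate,
                        popsize=50, tsize=3, mute_rate=0.9, max_evals=10000):
        # 0. Initialise: popsize random solutions and their fitnesses.
        pop = [random_solution() for _ in range(popsize)]
        fits = [fitness(s) for s in pop]
        evals = popsize
        while evals < max_evals:
            # Select: tournament of tsize; the best wins (ties broken by the
            # random order of the sample, for brevity).
            contenders = random.sample(range(popsize), tsize)
            parent = pop[max(contenders, key=lambda i: fits[i])]
            # Mutate a copy of the parent with probability mute_rate.
            child = mutate(parent) if random.random() < mute_rate else parent
            child_fit = fitness(child)
            evals += 1
            # Replace the current worst if the child is not less fit.
            worst = min(range(popsize), key=lambda i: fits[i])
            if child_fit >= fits[worst]:
                pop[worst], fits[worst] = child, child_fit
        best = max(range(popsize), key=lambda i: fits[i])
        return pop[best], fits[best]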

  4. A generational, elitist, crossover+mutation EA with rank-based selection
     0. Initialise: generate a population G of popsize random solutions and evaluate their fitnesses.
     1. Run Select 2*(popsize - 1) times to obtain a collection I of 2*(popsize - 1) parents.
     2. Randomly pair up the parents in I (into popsize - 1 pairs) and apply Vary to produce a child from each pair. Let the set of children be C.
     3. Evaluate the fitness of each child.
     4. Keep the best individual in the population G (BTR) and delete the rest.
     5. Add all the children to G.
     6. If a termination condition is met (e.g. we have done 100 or more generations, i.e. runs through steps 1-5), stop. Otherwise go to 1.

  5. A generational, elitist, crossover+mutation EA with rank-based selection, continued
     Select: sort the contents of G from best to worst, assigning rank popsize to the best, popsize - 1 to the next best, and so on down to rank 1 for the worst. The ranks sum to F = popsize(popsize + 1)/2. Associate a probability Rank_i/F with each individual i. Using these probabilities, choose one individual X, and return X.
     Vary: 1. With probability cross_rate, do a crossover, i.e. produce a child by applying a crossover operator to the two parents; otherwise, let the child be a randomly chosen one of the parents. 2. Apply mutation to the child. 3. Return the mutated child.
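A minimal Python sketch of this generational, elitist EA with rank-based selection, again assuming fitness is to be maximised; random_solution, fitness, crossover and mutate are problem-specific placeholders and the parameter defaults are illustrative.

    import random

    def rank_select(pop, fits):
        # Rank popsize for the best, ..., rank 1 for the worst; pick an
        # individual with probability rank / F, where F is the sum of ranks.
        order = sorted(range(len(pop)), key=lambda i: fits[i])  # worst ... best
        ranks = [0] * len(pop)
        for rank, i in enumerate(order, start=1):
            ranks[i] = rank
        return pop[random.choices(range(len(pop)), weights=ranks, k=1)[0]]

    def generational_ea(random_solution, fitness, crossover, mutate,
                        popsize=50, cross_rate=0.7, generations=100):
        pop = [random_solution() for _ in range(popsize)]
        fits = [fitness(s) for s in pop]
        for _ in range(generations):
            # Steps 1-2: select 2*(popsize-1) parents, pair them up, apply Vary.
            parents = [rank_select(pop, fits) for _ in range(2 * (popsize - 1))]
            random.shuffle(parents)
            children = []
            for p1, p2 in zip(parents[0::2], parents[1::2]):   # popsize-1 pairs
                child = (crossover(p1, p2) if random.random() < cross_rate
                         else random.choice([p1, p2]))
                children.append(mutate(child))
            child_fits = [fitness(c) for c in children]
            # Steps 4-5: elitism -- keep only the best of G, then add all children.
            best = max(range(popsize), key=lambda i: fits[i])
            pop = [pop[best]] + children
            fits = [fits[best]] + child_fits
        best = max(range(popsize), key=lambda i: fits[i])
        return pop[best], fits[best]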

  6. The basic ingredients. A running EA can be seen as a sequence of generations. A generation is simply characterised by a population. An EA goes from generation t to generation t+1 in three steps: select parents from the generation-t population; produce children from the parents; then merge the parent and child populations and remove some individuals so that popsize is maintained.

  7. The basic ingredients, continued. A running EA can be seen as a sequence of generations, each characterised by a population. The same three steps (select parents, produce children, merge and trim) take generation t+1 to generation t+2, and so on.

  8. Steady State vs Generational. There are two extremes. Steady state: the population only changes slightly in each generation (e.g. select 1 parent, produce 1 child, add that child to the population and remove the worst). Generational: the population changes completely in each generation (select some parents, which could still be just 1, and produce popsize children; they become the next generation). In practice, if we are using a generational EA, we usually use elitism, which means the new generation always contains the best solution from the previous generation, the remaining popsize - 1 individuals being new.

  9. Selection and Variation. A selection method is a way to choose a parent from the population, in such a way that the fitter an individual is, the more likely it is to be selected. A variation operator (or genetic operator) is any method that takes in a (set of) parent(s) and produces a new individual, called a child. If the input is a single parent, it is called a mutation operator; if two parents, it is called crossover or recombination.

  10. Replacement A replacement method is a way to decide how to produce the next generation from the merged previous generation and children. E.g. we might simply sort them in order of fitness, and take the best popsize of them. What else might we do instead?
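For example, the merge-and-truncate replacement just described could look like the following minimal Python sketch (maximising fitness; the function name is illustrative).

    def truncation_replacement(population, children, fitness, popsize):
        # Merge the old population with the children, sort best-first by
        # fitness, and keep the best popsize individuals.
        pool = population + children
        pool.sort(key=fitness, reverse=True)
        return pool[:popsize]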

  11. Encodings. We want to evolve schedules, networks, routes, coffee percolators, drug designs, and so on: how do we encode or represent solutions? How you encode dictates what your operators can be, and imposes constraints that the operators must meet.

  12. E.g. Bin-Packing. The problem: you have items of different sizes or weights, e.g. item 1 (30), item 2 (30), item 3 (30), item 4 (20), item 5 (10), and you have several 'bins' with a maximum capacity, say 40. Find a way to pack the items into the smallest number of bins. This is a hard problem. We will look at the equalize-bins version, also hard: pack these items into N bins (we will choose N = 3), each with arbitrary capacity, so that the weights of the 3 bins are as equal as possible.
     Example encoding: use nitems numbers, each between 1 and 3. So 1,2,1,3,3 means: items 1 and 3 are in bin 1, item 2 is in bin 2, and items 4 and 5 are in bin 3.
     Fitness: minimise the difference between the heaviest and lightest bin. In this case, what is the fitness?
     Mutation: choose a gene at random and change it to a random new value.
     Crossover: choose a random crossover point c (one of genes 1, 2, 3 or 4); the child is a copy of parent 1 up to and including gene c, and a copy of parent 2 thereafter. E.g. crossover of 3,1,2,3,3 and 2,2,1,2,1 at gene 2 gives 3,1,1,2,1; crossover of 1,1,3,2,2 and 2,3,1,1,1 at gene 2 gives 1,1,1,1,1.
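A minimal Python sketch of this encoding, fitness, mutation and crossover; the weights and the example genome come from the slide, while the function names and usage are illustrative.

    import random

    def fitness(genome, weights, nbins):
        # Difference between the heaviest and lightest bin (to be minimised).
        loads = [0] * nbins
        for item, bin_no in enumerate(genome):
            loads[bin_no - 1] += weights[item]
        return max(loads) - min(loads)

    def mutate(genome, nbins):
        # Change a randomly chosen gene to a random new (different) value.
        child = list(genome)
        gene = random.randrange(len(child))
        child[gene] = random.choice([b for b in range(1, nbins + 1) if b != child[gene]])
        return child

    def crossover(p1, p2):
        # One-point crossover: copy parent 1 up to and including gene c,
        # then parent 2 thereafter.
        c = random.randrange(1, len(p1))
        return p1[:c] + p2[c:]

    weights = [30, 30, 30, 20, 10]
    print(fitness([1, 2, 1, 3, 3], weights, nbins=3))   # bin loads are 60, 30, 30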

  13. Back to Basics. With your thirst for seeing example EAs temporarily quenched, the story now skips to simpler algorithms. This will help to explain what it is about the previous algorithms that makes them work.

  14. The Travelling Salesperson Problem (also see http://francisshanahan.com/tsa/tsaGAworkers.htm). An example (hard) problem, for illustration: find the shortest tour through the cities. The figure shows five cities A-E; the tour pictured is of length 33.

  15. Simplest possible EA: Hillclimbing
     0. Initialise: generate a random solution c and evaluate its fitness, f(c). Call c the current solution.
     1. Mutate a copy of the current solution; call the mutant m. Evaluate the fitness of m, f(m).
     2. If f(m) is no worse than f(c), then replace c with m; otherwise do nothing (effectively discarding m).
     3. If a termination condition has been reached, stop. Otherwise, go to 1.
     Note: there is no population (well, a population of 1). This is a very simple version of an EA, although it has been around for much longer.
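A minimal Python sketch of this hillclimber, assuming fitness is to be maximised; random_solution, fitness and mutate are problem-specific placeholders.

    def hillclimb(random_solution, fitness, mutate, max_evals=10000):
        current = random_solution()
        current_fit = fitness(current)
        for _ in range(max_evals - 1):
            mutant = mutate(current)            # mutate a copy of the current solution
            mutant_fit = fitness(mutant)
            if mutant_fit >= current_fit:       # accept the mutant if it is no worse
                current, current_fit = mutant, mutant_fit
        return current, current_fit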

  16. Why "Hillclimbing"? Suppose that solutions are lined up along the x axis, and that mutation always gives you a nearby solution. Fitness is on the y axis; this is a landscape. The figure traces one run as numbered points on the landscape: 1. initial solution; 2. rejected mutant; 3. new current solution; 4. new current solution; 5. new current solution; 6. new current solution; 7. rejected mutant; 8. rejected mutant; 9. new current solution; 10. rejected mutant; and so on.

  17. Example: HC on the TSP. We can encode a candidate solution to the TSP as a permutation. Here is our initial random solution: ACEDB, with fitness 32. This is the current solution.
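To make the walkthrough concrete, here is a minimal Python sketch of the permutation encoding and its tour-length fitness. The slides' distance matrix only appears as a picture, so the distances below are made-up placeholders; with the real matrix, tour_length("ACEDB") would return 32.

    # Placeholder symmetric distances between the five cities A-E.
    DIST = {
        ('A', 'B'): 7, ('A', 'C'): 5, ('A', 'D'): 6, ('A', 'E'): 8,
        ('B', 'C'): 4, ('B', 'D'): 9, ('B', 'E'): 6,
        ('C', 'D'): 7, ('C', 'E'): 5,
        ('D', 'E'): 3,
    }

    def distance(a, b):
        return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

    def tour_length(tour):
        # Sum of edge lengths around the closed tour, e.g. tour_length("ACEDB").
        return sum(distance(tour[i], tour[(i + 1) % len(tour)])
                   for i in range(len(tour)))

    print(tour_length("ACEDB"))   # length under the placeholder distances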

  18. Example: HC on the TSP. Here is our initial random solution ACEDB, with fitness 32. We are going to mutate it; first make Mutant a copy of Current.

  19. HC on the TSP. We randomly mutate Current (swap a randomly chosen pair of adjacent nodes) from ACEDB to ACDEB, which has fitness 33, so we reject this mutant and Current stays the same.
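The mutation used throughout this walkthrough can be sketched in Python as follows: pick a random position and swap that city with its neighbour, treating the tour as circular so that the first and last cities also count as adjacent (as in the DCEBA step later on). The function name is illustrative.

    import random

    def adjacent_swap(tour):
        cities = list(tour)
        i = random.randrange(len(cities))
        j = (i + 1) % len(cities)          # wrap-around: last is adjacent to first
        cities[i], cities[j] = cities[j], cities[i]
        return "".join(cities)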

  20. HC on the TSP. We now try another mutation of Current (swap a randomly chosen pair of adjacent nodes), from ACEDB to CAEDB. Its fitness is 38, so we reject that too.

  21. HC on the TSP. Our next mutant of Current is from ACEDB to AECDB. Fitness 33; reject this too.

  22. HC on the TSP. Our next mutant of Current is from ACEDB to ACDEB. Fitness 33; reject this too.

  23. HC on the TSP. Our next mutant of Current is from ACEDB to ACEBD. Its fitness is 32, equal to Current's, so this becomes the new Current.

  24. HC on the TSP. ACEBD is our Current solution, with fitness 32.

  25. HC on the TSP. ACEBD is our Current solution, with fitness 32. We mutate it to DCEBA (note that the first and last cities are adjacent nodes); its fitness is 28, so this becomes our new Current solution.

  26. HC on the TSP. Our new Current, DCEBA, has fitness 28.

  27. HC on the TSP. Our new Current, DCEBA, has fitness 28. We mutate it, this time getting DCEAB, with fitness 33, so we reject that and DCEBA is still our Current solution.

  28. And so on …

  29. Landscapes. Recall S, the search space, and f(s), the fitness of a candidate s in S. The figure plots f(s) against the members of S lined up along the horizontal axis. The structure we get by imposing f(s) on S is called a landscape. What does the landscape look like if f(s) is a random number generator? What kinds of problems would have very smooth landscapes? What is the importance of the mutation operator in all this?

  30. Neighbourhoods. Let s be an individual in S, let f(s) be our fitness function, and let M be our mutation operator, so that M(s1) -> s2, where s2 is a mutant of s1. Given M, we can usually work out the neighbourhood of an individual point s: the neighbourhood of s is the set of all possible mutants of s.
     E.g. Encoding: permutations of k objects (e.g. for the k-city TSP). Mutation: swap any adjacent pair of objects (including the first and last). Neighbourhood: each individual has k neighbours. E.g. the neighbours of EABDC are: {AEBDC, EBADC, EADBC, EABCD, CABDE}.
     Encoding: binary strings of length L (e.g. for L-item bin-packing). Mutation: choose a bit randomly and flip it. Neighbourhood: each individual has L neighbours. E.g. the neighbours of 00110 are: {10110, 01110, 00010, 00100, 00111}.
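A minimal Python sketch that enumerates both neighbourhoods described above; the function names are illustrative.

    def swap_neighbours(perm):
        # Adjacent-swap mutation (including the wrap-around pair):
        # k neighbours for a permutation of k objects.
        k = len(perm)
        out = []
        for i in range(k):
            items = list(perm)
            j = (i + 1) % k
            items[i], items[j] = items[j], items[i]
            out.append("".join(items))
        return out

    def bitflip_neighbours(bits):
        # Single bit-flip mutation: L neighbours for a string of length L.
        return [bits[:i] + ('1' if bits[i] == '0' else '0') + bits[i + 1:]
                for i in range(len(bits))]

    print(swap_neighbours("EABDC"))     # the 5 neighbours listed above
    print(bitflip_neighbours("00110"))  # the 5 neighbours listed above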

  31. Landscape Topology. Mutation operators lead to slight changes in the solution, which tend to lead to slight changes in fitness. I.e. the fitnesses in the neighbourhood of s are often similar to the fitness of s: landscapes tend to be locally smooth. What about big mutations? It turns out that ...

  32. Typical Landscapes. The figure again plots f(s) against the members of S lined up along the horizontal axis. Typically, with large (realistic) problems, the huge majority of the landscape has very poor fitness; there are tiny areas where the decent solutions lurk. So, big random changes are very likely to take us outside the nice areas.

  33. Quiz Questions. 1. Here is an altered distance matrix for the TSP problem in these slides (see the figure). Illustrate five steps of hillclimbing, in the same way as done in these slides (inventing your own random mutations, etc.), using this altered matrix. 2. Consider the example encoding and mutation operator given in these slides for the equalize-bins problem, and imagine nitems = 100 and N (the number of bins) is 10. What is the neighbourhood size?
