
Biologically Inspired Computation




  1. Biologically Inspired Computation Really finishing off EC

  2. But first: • Finishing off encodings • Information about mandatory reading • Information about CW2

  3. E.g. encoding a timetable I
4, 5, 13, 1, 1, 7, 13, 2 → Exam1 in 4th slot, Exam2 in 5th slot, etc. …
• Generate any string of 8 numbers between 1 and 16, and we have a timetable!
• Fitness may be <clashes> + <consecs> + etc. …
• Figure out an encoding, and a fitness function, and you can try to evolve solutions.
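A minimal sketch of this direct encoding in Python. The clash list and the fitness terms are illustrative assumptions (the slides only name <clashes> and <consecs> informally):

```python
import random

# Hypothetical clash data: pairs of exams that share students and so
# must not be in the same slot (illustrative, not from the slides).
CLASHES = [(0, 4), (1, 5), (2, 6)]

def random_timetable(n_exams=8, n_slots=16):
    """A chromosome is just a string of slot numbers, one per exam."""
    return [random.randint(1, n_slots) for _ in range(n_exams)]

def fitness(chromosome, clashes=CLASHES):
    """Lower is better: clashing pairs in the same slot, plus
    clashing pairs in consecutive slots."""
    n_clash = sum(1 for a, b in clashes if chromosome[a] == chromosome[b])
    n_consec = sum(1 for a, b in clashes
                   if abs(chromosome[a] - chromosome[b]) == 1)
    return n_clash + n_consec

# The chromosome from the slide: exams 2 and 6 share slot 13.
print(fitness([4, 5, 13, 1, 1, 7, 13, 2]))  # → 1
```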

  4. E.g. encoding a timetable II
4, 5, 13, 1, 1, 7, 13, 2
Use the 4th clash-free slot for exam1
Use the 5th clash-free slot for exam2 (clashes with E4, E8)
Use the 13th clash-free slot for exam3
Etc …

  5. So, a common approach is to build an encoding around an algorithm that builds a solution • Don’t encode a candidate solution directly • … instead encode parameters/features for a constructive algorithm that builds a candidate solution

  6. e.g.: bin-packing – given a collection of items, pack them into the fewest possible number of bins


  8. Engineering Constructive Algorithms
A typical constructive algorithm for bin-packing:
Put the items in a specific sequence (e.g. smallest to largest)
Initialise solution
Repeat nitems times: choose the next item, and place it in the first bin it fits in (creating a new empty bin if necessary)
Indirect encodings often involve using a constructive algorithm.
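The steps above can be sketched in a few lines of Python; the item sizes and capacity below are made-up illustrative values:

```python
def first_fit(items, capacity):
    """Place each item, in the given order, into the first bin it
    fits in; open a new bin when necessary. Returns the bins."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                      # no existing bin fits: open a new one
            bins.append([item])
    return bins

items = [4, 8, 1, 4, 2, 1]
ffa = first_fit(sorted(items), capacity=10)                 # First-Fit Ascending
ffd = first_fit(sorted(items, reverse=True), capacity=10)   # First-Fit Descending
print(len(ffa), len(ffd))  # → 3 2
```

Sorting the sequence ascending or descending gives the FFA and FFD variants shown on the following slides; on this toy instance FFD happens to use one bin fewer.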

  9. Example using the ‘First-Fit Ascending’ (FFA) constructive algorithm for bin-packing

  10–15. (figures: First-Fit Ascending shown step by step)

  16. Example using the ‘First-Fit Descending’ (FFD) constructive algorithm for bin-packing

  17–21. (figures: First-Fit Descending shown step by step)

  22. Notes: • In other problems, FFA gets better results than FFD • There are many other constructive heuristics for bin-packing, e.g. using formulae that choose the next item based on the distribution of unplaced item sizes and empty spaces … • There are constructive algorithms for most problems (e.g. timetabling, scheduling, etc.) • Often, ‘indirect encodings’ for EAs use constructive algorithms • Common approach: the encoding is a permutation, and the solution is built using the items in that order • READ THE FALKENAUER PAPER TO SEE A GOOD EA ENCODING FOR BIN-PACKING

  23. Encodings that use constructive algorithms The indirect encoding for timetabling, a few slides ago, is an example. The ‘underlying’ constructive algorithm is: Line up the exams in order e1, e2, … eN Repeat until all exams are scheduled: take the next exam in the list, and put it in the first place it can go without clashes This provides only a single solution, the same every time we run it. This may be very bad in terms of other factors, such as consecutive exams, time allowed for marking, etc. How did we modify it so that it could generate a big space of different solutions?


  26. Encodings that use constructive algorithms Line up the exams in order e1, e2, … eN Repeat until all exams are scheduled: take the next exam in the list, and put it in the Nth place it can go without clashes The chromosome encodes each of the Ns. The original constructive algorithm corresponds to running the above on the chromosome “111111111….”. We could also engineer the original constructive algorithm into an encoding in a quite different way. How?
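A sketch of this decoder in Python. The clash-data format (a dict mapping each exam to the set of exams it must not share a slot with) and the wrap-around for Ns larger than the number of free slots are my assumptions, not from the slides:

```python
def decode(chromosome, n_slots, clashes):
    """chromosome[i] = N means: put exam i in the Nth slot (1-based)
    in which it causes no clash, wrapping round if N is too large."""
    slot_of = {}
    for exam, n in enumerate(chromosome):
        free = [s for s in range(1, n_slots + 1)
                if all(slot_of.get(other) != s
                       for other in clashes.get(exam, ()))]
        slot_of[exam] = free[(n - 1) % len(free)]
    return slot_of

clashes = {0: {1}, 1: {0}}
# An all-ones chromosome reproduces the original constructive
# algorithm: every exam goes in the FIRST clash-free slot.
print(decode([1, 1, 1], n_slots=4, clashes=clashes))  # → {0: 1, 1: 2, 2: 1}
```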


  28. Encodings that use constructive algorithms Randomly permute the exams e1, …, eN Repeat until all exams are scheduled: take the next exam in the list, and put it in the first place it can go without clashes This is a fine constructive algorithm, which will provide a different solution depending on the permutation. It is easily used as an encoding: the chromosome provides the permutation.
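This permutation encoding can be sketched similarly; again the clash-data representation is an assumed format, not from the slides:

```python
def decode_permutation(perm, n_slots, clashes):
    """perm is an ordering of exam indices; schedule each exam, in
    that order, into the first slot where it causes no clash."""
    slot_of = {}
    for exam in perm:
        for s in range(1, n_slots + 1):
            if all(slot_of.get(other) != s
                   for other in clashes.get(exam, ())):
                slot_of[exam] = s
                break
    return slot_of

clashes = {0: {1, 2}, 1: {0}, 2: {0}}
# Scheduling exam 2 first pushes exam 0 into slot 2.
print(decode_permutation([2, 0, 1], n_slots=3, clashes=clashes))
```

An EA over this encoding searches the space of permutations, with standard permutation crossover/mutation operators.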

  29. Other well known constructive algorithms • Prim’s algorithm for building the minimal spanning tree (see an earlier lecture) is an example. • Dijkstra’s shortest-path algorithm is also an example. • In both of these cases, the optimal solution is guaranteed to be found, since MST and SP are easy problems. • But usually we see constructive methods used to give very fast ‘OK’ solutions to hard problems.

  30. On engineering constructive methods Some constructive heuristics are deterministic, i.e. they give the same answer each time. Some are stochastic, i.e. they may give a different solution in different runs. Usually, if we have a deterministic constructive method such as FFD, we can engineer a stochastic version of it. E.g. instead of choosing the next-lightest item in each step, we might choose randomly between the lightest three unplaced items.
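A sketch of that randomisation idea, mirroring the slide's example (choose among the k lightest unplaced items rather than always the lightest; k=3 and the test instance are assumed values):

```python
import random

def stochastic_first_fit(items, capacity, k=3, rng=random):
    """Stochastic first-fit: at each step pick randomly among the
    k lightest unplaced items, then place it in the first bin it
    fits in (new bin if necessary)."""
    unplaced = sorted(items)        # ascending: lightest items first
    bins = []
    while unplaced:
        item = unplaced.pop(rng.randrange(min(k, len(unplaced))))
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

random.seed(0)
bins = stochastic_first_fit([4, 8, 1, 4, 2, 1], capacity=10)
print(sum(len(b) for b in bins), all(sum(b) <= 10 for b in bins))
```

Running it repeatedly with different seeds gives different (all feasible) packings, which is exactly what makes a stochastic constructive method useful inside an EA.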

  31. Bin packing example direct encoding: 2, 3, 2, 3, 1 … → item 1 is in bin 2, item 2 is in bin 3, item 3 is in bin 2, etc. (often a bin will be over capacity, so the fitness function will have to include penalties) Bin packing example indirect encoding: the candidate solution is a permutation of the items, e.g. 4, 2, 1, 3, 5 … meaning: first place item 4 in the first available bin it fits in, then place item 2 in the first available … etc.
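The penalty idea for the direct encoding might look like this; the item sizes and the penalty weight of 10 per unit of over-capacity are illustrative assumptions:

```python
def direct_fitness(assignment, sizes, capacity, penalty=10):
    """Direct encoding: assignment[i] is the bin of item i.
    Fitness (to minimise) = number of bins used, plus a penalty
    for every unit of over-capacity in any bin."""
    loads = {}
    for size, b in zip(sizes, assignment):
        loads[b] = loads.get(b, 0) + size
    overflow = sum(max(0, load - capacity) for load in loads.values())
    return len(loads) + penalty * overflow

# The chromosome from the slide: bin 3 holds items 2 and 4
# (sizes 8 + 4 = 12), so it is 2 units over capacity.
print(direct_fitness([2, 3, 2, 3, 1], sizes=[4, 8, 1, 4, 2],
                     capacity=10))  # → 23
```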

  32. Direct vs Indirect Encodings Direct: • straightforward genotype (encoding) → phenotype (actual solution) mapping • Easy to estimate effects of mutation • Fast interpretation of chromosome (hence speedier fitness evaluation) Indirect/Hybrid: • Easier to exploit domain knowledge (e.g. use this in the constructive heuristic) • Hence, possible to ‘encode away’ undesirable features • Hence, can seriously cut down the size of the search space • But, slow interpretation • Neighbourhoods are highly rugged.

  33. Example real-number Encoding (and: How EAs can innovate, rather than just optimize) Design shape for a two-phase jet nozzle: fixed at six diameters D1, …, D6 and five sections, constrained so that D1 ≥ D2 ≥ D3 and D4 ≤ D5 ≤ D6.

  34. A simple encoding: 2, 1.8, 1.1, 1.3, 1.3, 1.5. The encoding enforces these constraints: D1 ≥ D2 ≥ D3, D4 ≤ D5 ≤ D6. Fixed at six diameters, five sections.

  35. A more complex encoding – bigger search space, slower, but potential for innovative solutions: Z1, Z2 (the numbers of sections before and after the smallest), then the section diameters D1, D2, D3, …, Dsmall, …, Dn, Dn+1, … The middle section is constrained to be the smallest; that’s all. Mutations can change diameters, add sections, and delete sections.
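The three mutation types named on the slide might be sketched like this; the perturbation size, diameter range and minimum chromosome length are illustrative assumptions:

```python
import random

def mutate_nozzle(diams, rng=random):
    """Variable-length real-valued mutation: perturb one diameter,
    insert a new section, or delete one (keeping at least two)."""
    diams = list(diams)
    op = rng.choice(["change", "add", "delete"])
    if op == "change":
        i = rng.randrange(len(diams))
        diams[i] = max(0.1, diams[i] + rng.gauss(0, 0.1))
    elif op == "add":
        diams.insert(rng.randrange(len(diams) + 1),
                     rng.uniform(0.5, 2.0))
    elif len(diams) > 2:
        diams.pop(rng.randrange(len(diams)))
    return diams

random.seed(1)
print(mutate_nozzle([2.0, 1.8, 1.1, 1.3, 1.3, 1.5]))
```

Because the chromosome length can change, this kind of encoding lets the EA explore topologies (numbers of sections), not just parameter values: that is the sense in which it can innovate rather than just optimize.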

  36. Mandatory reading slides for - Operators (typical mutation and crossover operators for different types of encoding) - Selection (various standard selection methods) - More encodings

  37. About CW2


  39. About CW2 Pamela Hardaker, Benjamin N. Passow and David Elizondo, “Walking State Detection from Electromyographic Signals towards the Control of Prosthetic Limbs”, UKCI 2013 – got signals like this, recorded on her thigh just above the knee: running / walking / standing

  40. Current knee-joint prosthetics need manual intervention to change between standing/walking/running modes (the wearer presses a button). Can we train a Neural Network to automatically detect when to change, on the basis of nerve signals from the last 30ms?

  41. About CW2 (figure: snapshot of Pamela’s data – time, signal, state)

  42. About CW2 (figure: snapshot of Pamela’s data – time, signal, state) Inputs, nine per example: max signal strength in the last 30ms, 20ms and 10ms; range of the signal in the last 30ms, 20ms and 10ms; mean of the signal in the last 30ms, 20ms and 10ms. Outputs: 1 0 0 (standing), 0 1 0 (walking) or 0 0 1 (running).

  43. About CW2 (figure: snapshot of Pamela’s data – time, signal, state) The same nine inputs: max, range and mean of the signal over the last 30ms, 20ms and 10ms. CW2: evolve a neural network that predicts the state.
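Computing the nine network inputs from a signal window might look like this; the assumption of one sample per millisecond is mine, not stated on the slides:

```python
def window_features(signal):
    """Turn the last 30 samples of an EMG signal (assuming 1 sample
    per ms, so a 30ms window) into nine features: max, range and
    mean over the last 30, 20 and 10 ms."""
    feats = []
    for ms in (30, 20, 10):
        w = signal[-ms:]
        feats += [max(w), max(w) - min(w), sum(w) / len(w)]
    return feats

sig = list(range(30))      # toy stand-in for a 30ms signal window
print(window_features(sig))
# → [29, 29, 14.5, 29, 19, 19.5, 29, 9, 24.5]
```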

  44. What you will do From me, you get: Working NN code that does the job already What you will do: Implement new mutation and crossover operators within my code, and test them on Pamela’s data Write a report comparing the performance of the different operators

  45. If time, a look at the key bits of those operator slides …

  46. Operators for real-valued k-ary encodings Here the chromosome is a string of k real numbers, each of which may or may not have a fixed range (e.g. between −5 and 5), e.g. 0.7, 2.8, −1.9, 1.1, 3.4, −4.0, −0.1, −5.0, … All of the mutation operators for k-ary encodings, as previously described in these slides, can be used. But we need to be clear about what it means to randomly change a value. In the previous slides for k-ary mutation operators we assumed that a change to a gene meant producing a random (new) value anywhere in the range of possible values …

  47. Operators for real-valued k-ary encodings That’s fine: we can do that with a real encoding, but it means we are choosing the new value for a gene uniformly at random.

  48. Mutating a real-valued gene using a uniform distribution Range of allowed values for gene: 0–10. The new value can be anywhere in the range, with any number equally likely.

  49. But, in real-valued encodings, we usually use Gaussian (‘Normal’) distributions, so that the new value is more likely than not to be close to the old value. Typically we generate a perturbation from a Gaussian distribution (figure: probability density of the perturbation, peaked at 0 and falling off towards ±1) and add that perturbation to the old value.

  50. Mutation in real-valued encodings Most common is to use the previously indicated mutation operators (e.g. single-gene, genewise) but with Gaussian perturbations rather than uniformly chosen new values.
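A sketch of genewise Gaussian mutation; the standard deviation, the 1/length per-gene rate and the clipping to the allowed range are typical but assumed choices:

```python
import random

def genewise_gaussian_mutate(chrom, sigma=0.3, rate=None,
                             lo=-5.0, hi=5.0, rng=random):
    """Genewise mutation for a real-valued chromosome: each gene is,
    with probability `rate` (default 1/length), perturbed by a
    Gaussian of standard deviation sigma, clipped to [lo, hi]."""
    rate = rate if rate is not None else 1.0 / len(chrom)
    out = []
    for g in chrom:
        if rng.random() < rate:
            g = min(hi, max(lo, g + rng.gauss(0.0, sigma)))
        out.append(g)
    return out

random.seed(2)
child = genewise_gaussian_mutate([0.7, 2.8, -1.9, 1.1,
                                  3.4, -4.0, -0.1, -5.0])
print(child)
```

Replacing `rng.gauss(...)` with `rng.uniform(lo, hi)` would give the uniform variant from the earlier slide; the Gaussian version keeps offspring close to their parents, which usually suits real-valued search.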
