
Population-based metaheuristics


Presentation Transcript


  1. Population-based metaheuristics
  • Nature-inspired
  • Initialize a population
  • A new population of solutions is generated
  • Integrate the new population into the current one by replacement, which is a selection process over the new and current solutions
  • Evolutionary Algorithms – genetic algorithms
  • Estimation of distribution algorithms (EDA)
  • Scatter search
  • Evolutionary programming, genetic programming
  • Swarm Intelligence
  • Ant colony optimization
  • Particle swarm optimization (PSO)
  • Bee colony
  • Artificial immune systems (AIS)
  • Continue until a stopping criterion is reached
  • The generation and replacement processes can be memoryless, or some search memory may be used
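As a rough illustration of this generic template, here is a minimal Python sketch; the callables initialize_population, generate_new_population, replace, stopping_criterion, and fitness are hypothetical placeholders supplied by the user, not part of any particular library.

```python
def population_metaheuristic(initialize_population, generate_new_population,
                             replace, stopping_criterion, fitness):
    """Generic loop: generate a new population, integrate it into the current
    one by replacement, and repeat until a stopping criterion is reached."""
    population = initialize_population()
    best = max(population, key=fitness)
    while not stopping_criterion():
        offspring = generate_new_population(population)        # generation phase
        population = replace(population, offspring, fitness)   # replacement / selection
        generation_best = max(population, key=fitness)
        if fitness(generation_best) > fitness(best):
            best = generation_best
    return best
```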

  2. Common concepts of Evolutionary Algorithms
  • Main search components are:
  • Representation – for example, in a Genetic Algorithm (GA) the encoded solution is called a chromosome. The decision variables within a solution are genes, the possible values of a gene are alleles, and the position of a gene within a chromosome is its locus.
  • Population initialization
  • Objective function, also called fitness in EA terminology
  • Selection strategy – which parents are chosen for the next generation, favoring better fitness
  • Reproduction strategy – mutation, crossover, merge, or generation from a shared memory
  • Replacement strategy – keeping the best of the old and new populations (survival of the fittest)
  • Stopping criteria
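A minimal GA sketch tying these components together, assuming a bit-string chromosome (each gene is a bit with alleles 0/1) and binary tournament selection; the function names and parameter defaults are illustrative only, not a specific library's API.

```python
import random

def genetic_algorithm(fitness, n_genes=20, pop_size=30, generations=100, p_mut=0.02):
    """Minimal GA on bit-string chromosomes."""
    # Population initialization: random chromosomes
    pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]

    def select(pop):
        # Selection strategy: binary tournament favoring better fitness
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(pop), select(pop)
            cut = random.randint(1, n_genes - 1)                  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]  # mutation
            offspring.append(child)
        # Replacement strategy: keep the best of old + new (survival of the fittest)
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

# Usage: maximize the number of 1s in the chromosome (OneMax)
best = genetic_algorithm(fitness=sum)
```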

  3. Genetic Programming
  • More recent approach than GA (Koza, 1992)
  • Instead of the fixed-length strings of GA, the individuals in GP are programs – a nonlinear representation based on trees
  • Crossover is based on subtree exchange and mutation on random changes in the tree
  • Parent selection is fitness-proportional and replacement is generational
  • One issue is the uncontrolled growth of trees, known as bloat
  • Used in machine learning and data mining tasks such as prediction and classification
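A sketch of the tree representation and of subtree crossover and mutation as described above, assuming expression trees stored as nested Python lists; FUNCTIONS, TERMINALS, and the helper names are made up for illustration.

```python
import copy
import random

FUNCTIONS = ['+', '-', '*']
TERMINALS = ['x', 'y', 1, 2, 3]

def random_tree(depth=3):
    """Grow a random expression tree: lists are function nodes, leaves are terminals."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return [random.choice(FUNCTIONS), random_tree(depth - 1), random_tree(depth - 1)]

def subtree_slots(tree, slots=None):
    """Collect (parent_node, child_index) references to every subtree."""
    if slots is None:
        slots = []
    if isinstance(tree, list):
        for i in (1, 2):
            slots.append((tree, i))
            subtree_slots(tree[i], slots)
    return slots

def subtree_crossover(t1, t2):
    """Crossover by subtree exchange: swap one random subtree between copies of the parents."""
    c1, c2 = copy.deepcopy(t1), copy.deepcopy(t2)
    s1, s2 = subtree_slots(c1), subtree_slots(c2)
    if s1 and s2:
        (p1, i1), (p2, i2) = random.choice(s1), random.choice(s2)
        p1[i1], p2[i2] = p2[i2], p1[i1]
    return c1, c2

def point_mutation(tree):
    """Mutation as a random change in the tree: replace a random subtree with a new one."""
    t = copy.deepcopy(tree)
    slots = subtree_slots(t)
    if slots:
        parent, i = random.choice(slots)
        parent[i] = random_tree(depth=2)
    return t
```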

  4. Genetic Programming
  • GP uses terminal (leaves of a tree) and function (interior nodes) sets
  • The objective is to generate programs that perform a task
  • The programs grow or shrink in size at each iteration as the search progresses
  • 1) Generate an initial population of random compositions of the functions and terminals of the problem (computer programs)
  • 2) Execute each program in the population and assign it a fitness value according to how well it solves the problem
  • 3) Create a new population of computer programs: i) copy the best existing programs, ii) create new computer programs by mutation, iii) create new computer programs by crossover (sexual reproduction)
  • 4) The best computer program that appeared in any generation, the best-so-far solution, is designated as the result of genetic programming
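Steps 1–4 could be assembled roughly as below, reusing random_tree, subtree_crossover, and point_mutation from the previous sketch; note that parent selection is simplified here (parents drawn from the better half) rather than the fitness-proportional selection mentioned earlier.

```python
import random

def genetic_programming(fitness, pop_size=50, generations=40, n_elite=5):
    """Illustrative GP main loop following steps 1-4 above."""
    pop = [random_tree() for _ in range(pop_size)]                 # 1) initial population
    best_so_far = max(pop, key=fitness)
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)            # 2) evaluate fitness
        new_pop = list(ranked[:n_elite])                           # 3i) copy best programs
        parents = ranked[:pop_size // 2]
        while len(new_pop) < pop_size:
            if random.random() < 0.2:                              # 3ii) mutation
                new_pop.append(point_mutation(random.choice(parents)))
            else:                                                  # 3iii) crossover
                child, _ = subtree_crossover(*random.sample(parents, 2))
                new_pop.append(child)
        pop = new_pop
        generation_best = max(pop, key=fitness)                    # 4) track best-so-far
        if fitness(generation_best) > fitness(best_so_far):
            best_so_far = generation_best
    return best_so_far
```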

  5. Genetic Programming Examples
  • Symbolic regression problem (in statistics)
  • Objective is to find a program that fits a given data set
  • Discovers both the form of the model and its parameters
  • Functions (interior nodes) are the math operators +, -, *, /
  • Terminals (leaves) are variables and constants
  • Evaluate the program and check its deviation from the known value of the function Z for several sets of x, y, z values; minimize the mean squared error
  • Example expression tree: Z = (x*5) + (z + (x*y)) + y
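One possible way to score such a program: interpret the tree against a set of variable bindings and use (negative) mean squared error as the fitness. The helpers eval_tree and mse_fitness and the tiny data set are illustrative; division is omitted here for simplicity (in practice a protected division is used).

```python
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def eval_tree(tree, env):
    """Evaluate an expression tree against the variable bindings in env."""
    if isinstance(tree, list):
        op, left, right = tree
        return OPS[op](eval_tree(left, env), eval_tree(right, env))
    return env.get(tree, tree)   # variable lookup, or the constant itself

def mse_fitness(tree, dataset):
    """Negative mean squared error over (env, target) pairs, so higher is better."""
    errors = [(eval_tree(tree, env) - target) ** 2 for env, target in dataset]
    return -sum(errors) / len(errors)

# The slide's example program, Z = (x*5) + (z + (x*y)) + y, as a tree:
target = ['+', ['+', ['*', 'x', 5], ['+', 'z', ['*', 'x', 'y']]], 'y']
data = [({'x': 1, 'y': 2, 'z': 3}, 12), ({'x': 2, 'y': 0, 'z': 1}, 11)]
print(mse_fitness(target, data))   # 0.0 when the tree matches the data exactly
```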

  6. Genetic Programming Examples
  • Artificial ant on the Santa Fe trail
  • 32×32 grid
  • Food pellets are mapped onto some cells
  • Find a shortest path that maximizes food intake (number of squares or dots visited)
  • The if constraints are sensing functions for locating food: if food, then do x or y; if no food, then do m or n
  • The terminal set would be the forward, left, and right movements
  • Autonomous Navigation Control of UAVs Using Genetic Programming

  7. Genetic Programming Examples
  • A program to control the flow of water through a network of water sprinklers
  • There are several sprinklers on a golf course and several valves that can turn on/off to let water flow through them
  • The objective (fitness function) is to achieve the correct amount of water output from each sprinkler and a uniform distribution (pressure). Measure fitness by placing measuring devices (e.g., rain gauges) that record the amount of water collected, then minimize the standard deviation among these devices
  • Opening too many valves at a time will drop the pressure in the system (some areas may not get water), and too few will result in excess flow due to high pressure
  • The terminal sets are sprinklers (leaves) and the function sets (interior nodes) are valves
  • The solution is to generate programs that open and close valves to achieve the above objective
  • There are many solutions to evaluate

  8. Swarm Intelligence
  • Algorithms inspired by the collective behavior of species such as ants, bees, and termites
  • Inspired by the social behavior of these species as they compete for food
  • Particles are simple, non-sophisticated agents
  • Agents cooperate by indirect communication
  • Agents move around in the decision space

  9. Ant Colony Optimization
  • Ants transport food and find shortest paths
  • They use simple communication methods: they leave a chemical trail that is both olfactory and volatile
  • The trail is also called a pheromone trail
  • The trail guides the other ants toward the food
  • The larger the amount of pheromone, the larger the probability that a trail will be chosen
  • The pheromone evaporates over time
  • In optimization:
  • Initialize the pheromone trails
  • Construct solutions using the pheromone trails
  • Update the pheromone using the generated solutions: evaporation and reinforcement
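The optimization loop just listed might look like this in outline; all of the callables here are placeholders for problem-specific pieces (TSP-specific versions of construction and update appear in the sketches after the next two slides).

```python
def ant_colony(initialize_pheromone, construct_solution, evaporate, reinforce,
               fitness, n_ants=20, iterations=100):
    """Generic ACO loop: trails guide solution construction; evaporation then
    reinforcement updates the trails from the generated solutions."""
    pheromone = initialize_pheromone()
    best = None
    for _ in range(iterations):
        # each ant builds a solution guided by the current pheromone trails
        solutions = [construct_solution(pheromone) for _ in range(n_ants)]
        evaporate(pheromone)                       # trails decay by a fixed proportion
        for s in solutions:
            reinforce(pheromone, s)                # generated solutions deposit pheromone
        candidate = min(solutions, key=fitness)    # minimization, e.g. tour length
        if best is None or fitness(candidate) < fitness(best):
            best = candidate
    return best
```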

  10. Ant Colony Optimization
  • The pheromone trails memorize the characteristics of good generated solutions, which guides the construction of new solutions
  • The trails change dynamically to reflect acquired knowledge
  • Evaporation phase: the trails decrease automatically; each pheromone value is reduced by a fixed proportion
  • The goal is to avoid premature convergence of all the ants to the same good solution
  • Evaporation also helps in diversification
  • Reinforcement phase: the pheromone trail is updated according to the generated solutions

  11. ACO for TSP
  • n cities, input graph G = (V, E), dij is the distance between cities i and j
  • For each ant, randomly select an initial city i
  • Set up the pheromone matrix τij for each edge (i, j)
  • τij represents the desirability of edge (i, j) in a tour
  • Initialize τij = 1
  • Each ant constructs a tour in a stochastic way starting from city i
  • An ant in city i selects the next city j with probability pij = (τij^α · ηij^β) / Σk∈S (τik^α · ηik^β), where ηij = 1/dij
  • S is the set of not-yet-visited cities of the graph G
  • α and β are parameters representing the influence of the pheromone and of problem-specific heuristic values, such as the distance in TSP
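A sketch of this stochastic tour construction, assuming tau is the pheromone matrix (initialized to 1), dist is the distance matrix, and alpha, beta are the influence parameters; the roulette-wheel selection implements the probability pij given above.

```python
import random

def next_city(current, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Pick the next city j with probability proportional to
    tau[i][j]^alpha * (1/d_ij)^beta over the not-yet-visited cities S."""
    weights = [(j, (tau[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta))
               for j in unvisited]
    r = random.random() * sum(w for _, w in weights)
    acc = 0.0
    for j, w in weights:
        acc += w
        if acc >= r:
            return j
    return weights[-1][0]   # numerical fallback

def construct_tour(n, tau, dist, alpha=1.0, beta=2.0):
    """Each ant builds a tour stochastically from a random start city."""
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:                      # continue until S is empty
        j = next_city(tour[-1], unvisited, tau, dist, alpha, beta)
        tour.append(j)
        unvisited.remove(j)
    return tour
```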

  12. ACO for TSP
  • Continue until S is the empty set
  • Update the pheromone using the quality (objective function f) of the solution from each ant, or of a set of the k best ants, where k < n
  • τij = τij + Δ, where Δ = 1/f(π) and (i, j) are the edges of solution π
  • Good tours emerge as τij is updated by the ants
  • The pheromone is evaporated by τij = (1 − ρ)·τij, where ρ < 1 is the reduction rate of the pheromone
  • Repeat the process several times with a set of new ants
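And a matching sketch of the pheromone update: evaporation with rate rho, followed by reinforcement of the edges used by the (k best) ants' tours. The symmetric update (tau[j][i] as well as tau[i][j]) is an assumption for the symmetric TSP.

```python
def tour_length(tour, dist):
    """f(pi): total length of the closed tour."""
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def update_pheromone(tau, tours, dist, rho=0.1):
    """Evaporation then reinforcement, as on the slide:
    tau_ij <- (1 - rho) * tau_ij, then tau_ij += 1/f(pi) for edges (i, j) in pi."""
    n = len(tau)
    for i in range(n):                         # evaporation, rho < 1
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour in tours:                         # reinforce edges used by the (k best) ants
        deposit = 1.0 / tour_length(tour, dist)
        for k in range(len(tour)):
            i, j = tour[k], tour[(k + 1) % len(tour)]
            tau[i][j] += deposit
            tau[j][i] += deposit               # symmetric TSP assumption
```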
