Genetic Algorithms for Dynamic Combinatorial Problems
Outline
• Dynamic Environments: Definitions, Categorization
• GA & Dynamic Problems: Difficulties, Diversity, Implementation Cost, Robustness & Flexibility
• Experimentation: TSP, BM Generator, Results
• Future Work
Dynamic Environments: Dynamic Problem, a definition
• Real-world problems are dynamic in nature
• An optimization problem consists of:
  • Optimization goal(s)
  • Decision variables
  • Restrictions
• Any change in these ingredients means a change in the optimum
• Whenever the environment changes, it is very likely that the optimal solution changes as well
• If the optimum changes with time, then it is a dynamic problem
[Figure: cross-sections of the fitness landscape at times t1, t2 and t3]
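A standard way to write this formally (not shown on the slide; a generic formulation offered for clarity): minimize a time-dependent objective over a time-dependent feasible set,

$$\min_{x \in X(t)} f(x, t) \quad \text{subject to} \quad g_i(x, t) \le 0,\; i = 1, \dots, m,$$

so that the optimum $x^*(t)$ becomes a trajectory rather than a single point.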
Dynamic Environments: Real-world problems are dynamic in nature
• Vehicle routing: routes added and deleted, the cost of a route changes, vehicles break down
• Job shop scheduling: new jobs arrive continuously, raw material changes, procedures, frequency and tools are modified
Dynamic Environments
• Adding dynamism brings new challenges
• Not all dynamic problems are interesting
Dynamic Environments: What makes a dynamic problem interesting
• Information on the problem is time-dependent
• Solutions must be found while time proceeds concurrently with incoming information
• Change is not too large and permits partial reuse of old solutions
An interesting dynamic problem requires an approach that is adaptive to changes
Meta-heuristics and Dynamic Problems: Difficulties
• Originally developed for static problems
• When considering dynamic problems, the difficulty: the population tends to converge near the optimum
[Figure: a population converged to the old optimum vs. a population far from the new optimum]
GA, Overview
• An effective and flexible optimization tool
• Manipulates a set of candidate solutions
• Mimics the evolutionary process in nature
“Genetic Algorithms are good at taking large, potentially huge search spaces and navigating them, looking for optimal combinations of things, solutions you might not otherwise find in a lifetime.” - Salvatore Mangano, Computer Design, May 1995
GA and Dynamic Problems: Why GA for dynamic problems
• Robust, good for “noisy” environments
• Easily exploits previous or alternate solutions
• Modular, separate from the application
• Supports multi-objective optimization
• An evolutionary technique … adaptive to changes
GA and Dynamic Problems
• Evolutionary optimization in dynamic environments: increasing interest (Branke 2002)
• How do GAs approach the dynamic problem?
Approaching Dynamic Problems
• “Ignore” dynamism: no further exploration
Approaching Dynamic Problems
• “Ignore” dynamism: no further exploration
• “Restart” from the beginning: no exploitation of old knowledge
  • Straightforward … but:
    • Time consuming
    • No adaptation … old knowledge discarded
  • Not suitable if:
    • Changes are not too large, permitting partial reuse of old solutions
    • Changes can’t be detected directly
    • Changes are continuous: no benefit from restarting every generation
    • The available time doesn’t permit a restart from scratch
    • Part of the old solution has already been implemented
Approaching Dynamic Problems
• “Ignore” dynamism: no further exploration
• “Restart” from the beginning: no exploitation of old knowledge
• “Adapt old solutions”:
  • Partial restart (random immigrants)
  • Hypermutation: scatter the population
  • Use explicit memory: save old solutions & seed
Still new, with ample opportunity to:
• Refine, combine, add
• Examine on combinatorial problems
(A minimal sketch of two of these adaptation mechanisms follows.)
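The slides do not give code; as an illustration only, here is a minimal Python sketch of the “random immigrants” and “hypermutation” reactions for a permutation-encoded population. Names such as `random_tour` and the parameter values are hypothetical, not taken from the original work.

```python
import random

def random_tour(n_cities):
    """Hypothetical helper: a random permutation used as an immigrant."""
    tour = list(range(n_cities))
    random.shuffle(tour)
    return tour

def react_to_change(population, n_cities,
                    immigrant_fraction=0.2, hyper_rate=0.5):
    """Sketch of two adaptation reactions triggered when a change is detected.

    1. Random immigrants: replace a fraction of the population with
       fresh random tours to restore diversity.
    2. Hypermutation: temporarily raise the mutation rate so the
       remaining individuals scatter around their old positions.
    Returns the modified population and the mutation rate to use
    until the population re-converges.
    """
    n_immigrants = int(immigrant_fraction * len(population))
    # Overwrite the last n_immigrants individuals with random tours.
    for i in range(len(population) - n_immigrants, len(population)):
        population[i] = random_tour(n_cities)
    # Signal the caller to use a high (hyper) mutation rate for a while.
    return population, hyper_rate
```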
Dynamic Combinatorial Issues
• Benchmark problems
• Adaptation cost vs. solution quality: a multi-objective problem
• When adapting old solutions is not possible … choose the most robust
• When there are several adaptable optima … choose the most flexible
Objectives
• Test Dynamic TSP using an adaptive form of GA
• Test two mutation models in dynamic landscapes:
  • Traditional (fixed) mutation
  • Adaptive (dynamic) mutation
Benchmark Generator
• Generates a sequence of static problems
• Solves each one separately
[Figure: problems 1, 2, 3 along the generations (time) axis, with s1, s2, s3 their solutions]
S1, S2, S3, … are optimal or “near” optimal solutions
Benchmark Generator
• Later, the sequence of static problems is introduced as sub-problems of one dynamic problem
• d1, d2, d3, … will be the solutions of a dynamic solver
The goodness of the dynamic solver is measured by how close d1, d2, d3, … are to S1, S2, S3, …
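The slides describe this measure only verbally; as an assumption-laden sketch, closeness can be computed as the average relative excess tour length of the dynamic solver's solutions over the known static optima (function and variable names are hypothetical):

```python
def tracking_error(dynamic_lengths, static_optimal_lengths):
    """Average relative gap between the dynamic solver's best tour length
    d_i and the near-optimal static length S_i for each sub-problem.
    A value of 0.0 means the dynamic solver matched every static optimum."""
    gaps = [(d - s) / s for d, s in zip(dynamic_lengths, static_optimal_lengths)]
    return sum(gaps) / len(gaps)

# Usage example (illustrative numbers only):
# print(tracking_error([105.0, 98.0], [100.0, 95.0]))  # -> about 0.041
```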
Landscape
• All the optima shift randomly over time
• Three general modes of shift:
  • Edge change: change the distance between cities (e.g., a traffic jam)
  • Add/delete cities: adding or cancelling assignments
  • City swap: interchange the labels of two cities
• The user controls how the cost changes:
  • Severity (# of steps in any change)
  • Frequency (# of generations between changes)
  • Cycling (remove changes in reverse order)
(A sketch of these change operators appears below.)
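As a rough illustration of two of the shift modes on a symmetric distance matrix, here is my own sketch under assumed data structures (a list-of-lists distance matrix and a tour as a list of city indices); it is not the authors' generator:

```python
import random

def edge_change(dist, scale=1.5):
    """Edge change: rescale the distance of one randomly chosen city pair,
    e.g. simulating a traffic jam on that leg."""
    i, j = random.sample(range(len(dist)), 2)
    dist[i][j] = dist[j][i] = dist[i][j] * scale
    return dist

def city_swap(tour):
    """City swap: interchange the labels (positions) of two cities,
    which relocates the optimum without altering the set of distances."""
    i, j = random.sample(range(len(tour)), 2)
    tour[i], tour[j] = tour[j], tour[i]
    return tour

def apply_severity(dist, severity):
    """Severity = number of elementary change steps applied per shift."""
    for _ in range(severity):
        dist = edge_change(dist)
    return dist
```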
Dynamic Solver, Settings
• Each experiment used:
  • a generational GA hybridized with local search (LS)
  • path representation
  • tournament selection (tournament size = 2) with elitism
  • 2-point order crossover
  • a varying mutation rate
  • population size = 50
• 200 different instances in 3000-generation runs
• Severity: 1, 10, 100 steps per shift
• Frequency: 10, 100, 1000 generations between shifts
• Statistics based on 10 runs per experiment
(A configuration sketch follows.)
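To make the listed settings concrete, a hypothetical configuration record might look like the following; the field names are mine, since the original implementation is not shown in the slides:

```python
from dataclasses import dataclass

@dataclass
class SolverConfig:
    """Settings reported on the slide, gathered into one place."""
    population_size: int = 50
    tournament_size: int = 2
    elitism: bool = True
    crossover: str = "2-point order crossover"
    representation: str = "path (permutation of cities)"
    generations: int = 3000
    severity_steps: tuple = (1, 10, 100)       # steps per shift
    shift_frequency: tuple = (10, 100, 1000)   # generations between shifts
    runs_per_experiment: int = 10
```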
GA … Mutation Models
• Two simple mutation models are tested:
  • Traditional fixed mutation (FM): P = constant
  • Dynamic variable mutation (VM): P = P0 at a change in the environment; P = 0 at the next change
• Several values of P and P0 were tested
(A sketch of the VM schedule follows.)
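The slide states only the endpoints (P = P0 right after a change, P = 0 by the next change). Assuming, purely for illustration, a linear decay between those endpoints, a variable-mutation schedule could look like this (the linear shape is my assumption, not stated in the slides):

```python
def variable_mutation_rate(gens_since_change, gens_between_changes, p0):
    """Mutation probability for the VM model: starts at p0 immediately after
    an environmental change and falls to 0 by the time the next change is due.
    The linear decay is an illustrative assumption."""
    if gens_between_changes <= 0:
        return p0
    progress = min(gens_since_change / gens_between_changes, 1.0)
    return p0 * (1.0 - progress)

# Fixed mutation (FM) for comparison: the rate never changes.
def fixed_mutation_rate(p):
    return p
```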
Results: cost changes randomly [plot of solver performance vs. the “Goal” reference]
Results: cost changes randomly, continued [plot]
Results: cost changes randomly, continued [plot]
Results: cost changes randomly, continued [plot]
Results: leg cost increased [plot]
Results: leg cost increased, continued [plot]
Conclusions
• Optimization of dynamic problems is a growing field that needs further research
• GAs are used almost exclusively in static applications … although their concept may suggest otherwise
• Not all dynamic problems are challenging
• DTSP was approached using an adaptive HGA
• A BM generator was developed for DTSP
• VM showed some improvements over FM
• High values of initial mutation are recommended
Conclusions, Future Work
• Enhance the VM: mutation rate = f(performance)
• Extend the scope from TSP to VRP
• Compare the HGA with other techniques (CPUT)
• Classifying and prediction
[Diagram: an input time series feeds two ANNs, one classifying the input and one predicting changes; both feed the optimization GA, whose output tracks the optimum]
Thank You
Genetic Algorithms for Dynamic Vehicle Routing Problem
Genetic Algorithms for Dynamic Vehicle Routing Problem
Recent Developments
• Adaptation of genetic operators for dynamic problems (Back 1997; Grefenstette 1999)
• Hybridization of a GA and local search for VRPTW (Braysy 2000)
• Adaptive Tabu Search for dynamic VRPTW (Gendreau 1999)
Little on GAs for dynamic functions; nothing on VRP
Evolvability in Dynamic Fitness Landscapes: a GA Approach
Objectives & Previous Work …
• Adapting the operators through externally imposed heuristics (Davis 1989; Back 1992)
• Self-adapting mutation rates in static problems (Back and Schwefel 1993)
• Self-adaptation of genetic operators for searching dynamic fitness landscapes (Back 1997)
Evolvability in Dynamic Fitness Landscapes: a GA Approach
Results: tracking performance on a dynamic landscape
• How well is the moving optimum tracked?
• Performance is low initially since the starting population is random
• Gradual shift: hypermutation is better than ordinary mutation; the ordinary mutation models start to deteriorate after 50 generations
• Abrupt shift: ordinary mutation is unable to explore adequately; performance deteriorates suddenly every 20 generations
[Plots: current best vs. generations for the FH, GH, FM, GM models, gradual and abrupt shifts; base mutation rate = 0.03]
Evolvability in Dynamic Fitness Landscapes: a GA Approach
Results: changing the base mutation rate in a gradually shifting landscape
• Hypermutation gives better performance than ordinary mutation
• GH & FH show nearly the same performance
• FM has low performance; it improves beyond rate 0.03
• Too high a mutation rate (beyond about 0.1) lowers performance: the model approaches random search
[Plots: online performance vs. base mutation rate for FH, GH, FM, GM]
Evolvability in Dynamic Fitness Landscapes: a GA Approach
Results: changing the base mutation rate in an abruptly shifting landscape
• Performance is similar to the gradual landscape
• FH is best, followed by GH
• FM improves after rate 0.03
• All models deteriorate beyond rate 0.1
Evolvability in Dynamic Fitness Landscapes: a GA Approach
Results: relation between changes in the landscape & the level of hypermutation
• The level of hypermutation:
  • decreases as the population converges near the optimum
  • increases when the landscape shifts
[Plot: current best and percentage of the population under hypermutation vs. generations]
Evolvability in Dynamic Fitness Landscapes: a GA Approach
Conclusions
• Alternative models were studied:
  • models applying the same mutation level to all individuals
  • models with genetically controlled mutation
• Hypermutation models perform well in all landscapes
• Hypermutation can be genetically controlled
• When genetically controlled, the level of hypermutation:
  • decreases as the population converges near the optimum
  • increases when the landscape shifts
(A sketch of a genetically controlled hypermutation level follows.)
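The slides do not show how the hypermutation level is encoded. One common self-adaptation scheme, given here only as a hedged sketch, attaches a per-individual mutation rate to the genome and lets it evolve along with the solution; the log-normal update rule and all names are assumptions, not taken from the original work:

```python
import math
import random

def self_adapt_rate(rate, tau=0.2, floor=0.001, ceiling=0.5):
    """Log-normal self-adaptation of a per-individual mutation rate.
    The rate itself is inherited and perturbed, so selection indirectly
    raises it after a landscape shift and lowers it near convergence."""
    new_rate = rate * math.exp(tau * random.gauss(0.0, 1.0))
    return min(max(new_rate, floor), ceiling)

# Each individual could be a (tour, rate) pair; offspring inherit the
# parent's rate, perturb it with self_adapt_rate, and then mutate the
# tour with probability equal to the (new) rate.
```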
Adaptation from Fixed-Weight Dynamic Networks
What is Adaptation?
• A characteristic often attributed to intelligent systems
• Adaptation: to recognize change through inputs and to adjust accordingly
• RMLPs are capable of adaptation (Cotter and Conwell)
Our main question: can adaptive capability be induced directly from training?
Adaptation from Fixed-Weight Dynamic Networks
Results: training was difficult, but performance was good
• Network performance
• Interpolative and extrapolative performance
• Network performance for switching time series
• Network performance for noisy time series
Categorization is useful to know which strategy to use, and to appreciate the difficulty of benchmark design.
• Dynamic but not noisy: a noisy fitness function is still approached as a static problem, with the noise treated in some specific way; it is not covered here.
• Frequency of change: in practice we actually need not the period between changes but the time allowed to the GA to find the solution to the new instance; the average number of evaluations is used instead of time.
• Severity of change: it should be specified in conjunction with the definition of neighborhood, which in turn depends on the representation scheme of the individuals. In other words, how many simple steps, alterations or mutations must be applied to the old optimal solution in order to reach the new one.
• Pattern of change: studying the pattern of changes can give insight to predict the direction, frequency or severity of change. Such information can be used in advance by the algorithm to figure out the best approach to tackle the oncoming instances. Even if the pattern is completely random, knowing this fact might help in finding the proper strategy.
• Repetitiveness: how often and how closely are old environment states revisited? The main purpose here is to decide whether or not to use an explicit memory to remember old solutions, and what the length of that list should be … SEE TS
And we add this categorization:
• Detectability: are changes obvious, i.e. can they be detected directly, or not? Adding a new assignment or a vehicle breakdown is detectable directly, while road jamming, deterioration in machine and manpower performance, and changes in the quality of raw material are examples of environmental changes that are not usually given explicitly. If the changes are not given explicitly, the algorithm might not react to them in time. In these cases, some kind of indicator that monitors performance can be used to trigger reactions to changes. Some of the indicators used are: deterioration of the population performance REF, and time-averaged best performance REF. These indicators assume that environmental changes will reduce the fitness of the individuals; however, this is not necessarily true. The fitness values of all individuals might increase after a change in the environment; in other cases the shift in environment might bring the current population as a whole nearer to the new optimum, and hence solution quality improves. In another method, used by Branke 99, several individuals are re-evaluated every generation and a change in the environment is detected if the fitness of at least one individual has changed (a minimal sketch of this re-evaluation idea follows these notes). Others REF compare the actual environment with a maintained model and conclude that the environment has changed if the difference between the actual and model environments is significant.

Optimization in dynamic environments is gaining increasing interest from researchers due to the simple fact that almost all real-world problems are dynamic to some degree or another. Metaheuristics that have proved their effectiveness for static problems are being modified with different adaptation strategies for use in dynamic environments. In addition, benchmark problems have been generated to model dynamic environments. The current paper tests a Genetic Algorithm under different adaptation strategies to tackle the Dynamic Travelling Salesman Problem. It is expected that the GA, as an evolutionary technique, will work well with dynamic problems.
Another contribution of this paper is a benchmark generator to create the dynamic instances needed for testing and comparing these strategies. With integer spaces it is not as easy as in real-space problems to develop functions with adjustable parameters that simulate a shifting landscape. Here we need to think of the dynamic environment in terms of possible scenarios in which changes to a particular problem can happen over time. There can be an infinite number of such scenarios, which, we believe, is a reason behind the shortage of benchmarks for dynamic combinatorial problems in general.
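The re-evaluation based detection mentioned in the notes (Branke 99) could, as a rough sketch only, look like the following; the sentinel representation and function names are my assumptions:

```python
def change_detected(sentinels, fitness_fn, tolerance=1e-9):
    """Re-evaluate a few stored individuals ("sentinels") every generation.
    If any of their fitness values differs from the value recorded when
    they were last evaluated, the environment is assumed to have changed."""
    changed = False
    for individual in sentinels:
        new_fitness = fitness_fn(individual["genome"])
        if abs(new_fitness - individual["fitness"]) > tolerance:
            changed = True
        individual["fitness"] = new_fitness  # refresh the stored value
    return changed
```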
Dynamic Landscape
• With real spaces, dynamism is introduced by changing the fitness landscape over the generations
• It is relatively easy to create dynamic landscapes as time-varying functions: by altering a few runtime parameters, one can generate an indefinite number of distinct landscapes with controllable characteristics
• Parameters A (amplitude), C (center) and S (width) are changed to create peaks with different heights, locations & widths
[Figure: cross-sections of the fitness landscape at generations 0, 5 and 10]
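The slide names the parameters but not the function; a moving-peak form consistent with A, C and S (the exact Gaussian shape is my assumption, used only for illustration) could be:

```python
import math

def peak_fitness(x, amplitude, center, width):
    """A single Gaussian-shaped peak: A controls height, C location, S width.
    Varying A, C and S over the generations moves and reshapes the peak."""
    return amplitude * math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

# Example of a gradually shifting landscape: the peak center drifts each generation.
def landscape_at(generation, x):
    center = 0.1 * generation          # peak location moves with time
    return peak_fitness(x, amplitude=1.0, center=center, width=2.0)
```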
Genetic Algorithms for Dynamic Vehicle Routing Problem
VRP, Overview
• In the literature since the late fifties
• Applications: delivering orders to dispersed customers, transporting elderly or disabled passengers, moving cargo between seaports, moving work-in-process between workstations
• Importance: transportation cost constitutes a large share of total cost, with benefits to business & the country
Efficient routing of a fleet of vehicles to reduce transportation cost … that is the essence of the VRP
TSP
• Simply stated: a traveling salesman wishes to visit each city on a list exactly once and then return to the home city; find the shortest route
• Has intrigued researchers for years
• Easy to describe, hard to solve
• Typical of the NP-hard combinatorial problems
• Progress on the TSP has often led to progress on other combinatorial problems
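For concreteness, the quantity being minimized is the closed tour length; a minimal helper (my own illustration, not from the slides) over a distance matrix:

```python
def tour_length(tour, dist):
    """Total length of a closed tour: the sum of consecutive legs plus the
    return leg from the last city back to the first."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
```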
Robust Solutions
• If adapting old solutions is not possible, focus on finding robust solutions
• Robust solutions are those which function well over wide ranges of environmental changes
[Figure: an unstable solution vs. a robust solution]
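One common way to score robustness, offered here only as a hedged sketch since the slides do not define a measure, is to average a solution's fitness over a sample of perturbed environments:

```python
def robustness(solution, perturbed_fitness_fns):
    """Average fitness of one solution across several sampled environment
    perturbations; a robust solution keeps a high average, an unstable one
    collapses under some perturbations."""
    scores = [f(solution) for f in perturbed_fitness_fns]
    return sum(scores) / len(scores)
```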
Adaptation Not Possible
• The environment changes too fast
• Changes cannot be detected quickly enough
• Old solutions are already implemented
Examples:
• Specifications cannot be produced exactly: tolerances are needed
• Scheduling: variation in processing times, malfunctions, or adding new jobs without a total reordering of the production plan
• Control problems: it may be difficult to detect gradual changes such as machine wear or changes in raw material properties
Classifying
• Several strategies exist in the literature to tackle dynamic problems: ignore, restart, adapt, … and hybridizations
• How good a strategy is depends on: speed of change, severity of change, repetitiveness, detectability
• It is important to be able to take some measurements
• Use an ANN to measure and classify the input time series
• Use this classification to trigger which strategy the GA should use
[Diagram: input time series feeding a classifying ANN]
Prediction
• A dynamic problem requires finding solutions while time proceeds concurrently with incoming information
• Having insight into future information: 1) gives the GA the necessary time to solve, or 2) at least lets it switch to a better strategy
• Use an ANN to study past patterns and try to predict changes
[Diagram: input time series feeding a change-predicting ANN]