Differential Evolution • Hossein Talebi, Hassan Nikoo
Variations to Basic Differential Evolution • Hybrid Differential Evolution Strategies • Gradient-Based Hybrid Differential Evolution • Evolutionary Algorithm-Based Hybrids • DE Reproduction Process Used as the Crossover Operator of a Simple GA • Rank-Based Crossover Operator for DE • Particle Swarm Optimization Hybrids • Population-Based Differential Evolution • Self-Adaptive Differential Evolution
Variations to Basic Differential Evolution • Differential Evolution for Discrete-Valued Problems • Angle Modulated Differential Evolution • Binary Differential Evolution • Constraint Handling Approaches • Multi-Objective Optimization • Dynamic Environments • Applications
Gradient-Based Hybrid Differential Evolution • An acceleration operator to improve convergence speed without decreasing diversity • A migration operator • The acceleration operator uses gradient descent to adjust the best individual toward a better position
Gradient-Based Hybrid Differential Evolution(Cont.) • Using gradient descent may result in getting stuck in a local optimum or in premature convergence • Population diversity can be increased with the migration operator • This operator spawns new individuals from the best individual and replaces the current population with them
Gradient-Based Hybrid Differential Evolution(Cont.) • The migration operator is applied only when the diversity of the population becomes too small
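A minimal sketch of how these two operators might plug into a DE loop, assuming an objective `f` to be minimized, a gradient function `grad`, and a simple centroid-based diversity measure (all illustrative choices, not the exact operators from the cited work):

```python
import numpy as np

def acceleration(best, f, grad, eta=0.01):
    """Take one gradient-descent step from the best individual; keep it only if it improves."""
    candidate = best - eta * grad(best)
    return candidate if f(candidate) < f(best) else best   # minimization assumed

def migration(best, pop_size, dim, lower, upper, spread=0.1):
    """Respawn the whole population as perturbed copies of the best individual."""
    cloud = best + spread * (upper - lower) * (np.random.rand(pop_size, dim) - 0.5)
    return np.clip(cloud, lower, upper)

def diversity(population):
    """Mean distance to the population centroid (one simple diversity measure)."""
    return np.mean(np.linalg.norm(population - population.mean(axis=0), axis=1))

# Inside the DE generation loop (sketch):
#   best = acceleration(best, f, grad)
#   if diversity(population) < epsilon:
#       population = migration(best, *population.shape, lower, upper)
```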
Using Stochastic Gradient Descent and DE for Neural Network Training • Stochastic gradient descent
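The slide's formula did not survive extraction; for reference, the standard per-pattern stochastic gradient descent weight update, with learning rate η and per-pattern error E_p, is:

```latex
w(t+1) = w(t) - \eta \,\nabla_{w} E_{p}\big(w(t)\big)
```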
Evolutionary Algorithm-Based Hybrids • Hrstka and Kučerová used the DE reproduction process as a crossover operator in a simple GA • Chang and Chang used standard mutation operators to increase DE population diversity by adding noise to the created trial vectors.
Evolutionary Algorithm-Based Hybrids(Cont.) • Sarimveis and Nikolakopoulos [758] use rank-based selection to decide which individuals will take part in calculating the difference vectors
Particle Swarm Optimization Hybrids • Hendtlass proposed that the DE reproduction process be applied to the particles in a PSO swarm at specified intervals • Kannan et al. apply DE to each particle for a number of iterations and replace the particle with the best solution obtained
Particle Swarm Optimization Hybrids(Cont.) • Another approach is to change only the best particle, by adding a general difference vector to its position
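One plausible reading of this update (the slide's formula was lost): the global best position is perturbed by a general difference vector built from two randomly chosen particles. A sketch, not necessarily the cited authors' exact rule:

```python
import numpy as np

def perturb_gbest(gbest, positions, f):
    """Change only the global best by adding a general difference vector."""
    i1, i2 = np.random.choice(len(positions), size=2, replace=False)
    delta = (positions[i1] - positions[i2]) / 2.0   # general difference vector (assumed form)
    candidate = gbest + delta
    return candidate if f(candidate) < f(gbest) else gbest   # keep only improvements
```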
Population-Based Differential Evolution • Ali and Törn proposed to use an auxiliary population • For each offspring created, if the fitness of the offspring is not better than the parent's, instead of discarding the offspring, it is considered for inclusion in the auxiliary population
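A sketch of the auxiliary-population idea (the acceptance rule and the size limit are illustrative assumptions, minimization assumed):

```python
def de_selection_with_archive(parent, offspring, f, aux_pop, aux_limit=100):
    """Standard greedy DE selection, except that rejected offspring are archived."""
    if f(offspring) < f(parent):
        return offspring, aux_pop
    if len(aux_pop) < aux_limit:
        aux_pop.append(offspring)   # rejected trial vectors feed the auxiliary population
    return parent, aux_pop
```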
DETVSF (DE with Time Varying Scale Factor) • During the later stages it is important to adjust the movements of trial solutions finely so that they can explore the interior of a relatively small space in which the suspected global optimum lies • We can reduce the scale factor linearly with time from a (predetermined) maximum to a (predetermined) minimum value
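One plausible linear schedule for the scale factor (the exact form is an assumption; F_max, F_min, the current iteration t, and the maximum iteration count t_max are predefined):

```latex
F(t) = F_{\min} + (F_{\max} - F_{\min}) \,\frac{t_{\max} - t}{t_{\max}}
```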
Parameter Control in DE • Dynamic Parameters • Self-Adaptive
Self-Adaptive Parameters • The probability of recombination can be self-adapted • μ is the average of the successful probabilities • Abbass proposed a formula for this self-adaptation
Self-Adaptive Parameters(Cont.) • Omran et al. propose a self-adaptive DE strategy with its own formula for the scale factor • And a corresponding formula for the mutation operator
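The formulas referred to on these two slides were lost in extraction. As a generic illustration only (not Abbass' or Omran et al.'s exact formulas), the control parameters can be attached to each individual and recombined with DE-style difference terms:

```python
import numpy as np

def self_adapt_parameters(F, Cr, i1, i2, i3, gamma=0.5):
    """Illustrative self-adaptation: build a new F and Cr for an offspring from
    the parameters of three randomly chosen individuals i1, i2, i3."""
    new_F  = F[i1]  + np.random.normal(0.0, gamma) * (F[i2]  - F[i3])
    new_Cr = Cr[i1] + np.random.normal(0.0, gamma) * (Cr[i2] - Cr[i3])
    return float(np.clip(new_F, 0.0, 2.0)), float(np.clip(new_Cr, 0.0, 1.0))
```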
Angle Modulated Differential Evolution • Pampará et al. proposed a DE algorithm to evolve solutions to binary-valued optimization problems, without having to change the operation of the original DE • They use a mapping between binary-valued and continuous-valued space to solve the problem in binary space
Angle Modulated Differential Evolution(Cont.) • The objective is to evolve, in the abstracted continuous space, a bit-string generating function that will be used in the original space to produce bit-vector solutions • 'a', 'b', 'c' and 'd' are the continuous-space problem parameters
Angle Modulated Differential Evolution(Cont.) • a = 0, b = 1, c = 1, d = 0
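A sketch of angle modulation with the four evolved coefficients; the exact generating function and the sampling interval are assumptions here, since the slide's formula was lost:

```python
import numpy as np

def angle_modulation_bits(a, b, c, d, n_bits):
    """Sample a 4-parameter generating function and threshold it at zero to get bits."""
    x = np.linspace(0.0, 1.0, n_bits)   # evenly spaced sample points (assumed spacing)
    g = np.sin(2 * np.pi * (x - a) * b * np.cos(2 * np.pi * (x - a) * c)) + d
    return (g > 0).astype(int)          # positive sample -> bit 1, else 0

# With the coefficients from the slide (a=0, b=1, c=1, d=0):
bits = angle_modulation_bits(0.0, 1.0, 1.0, 0.0, n_bits=16)
```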
Binary Differential Evolution • binDE borrows concepts from the binary particle swarm optimizer binPSO • binDE uses the floating-point DE individuals to determine a probability for each component • The corresponding bitstring solution is then calculated as follows:
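A sketch of this binPSO-style decoding step; squashing each floating-point component through a logistic function is my assumption for the lost formula:

```python
import numpy as np

def decode_bits(x):
    """Map a floating-point DE individual to a bitstring: each component gives
    the probability of its bit being 1 via the logistic (sigmoid) function."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(x)))
    return (np.random.rand(len(p)) < p).astype(int)
```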
Constraint Handling Approaches • Penalty methods • Adding a function to penalize solutions that violate constraints • Using F(x, t) = f(x, t) + λp(x, t), where λ is the penalty coefficient and p is a time-dependent penalty function • Converting the constrained problem to an unconstrained problem
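A direct sketch of the penalty formulation F(x, t) = f(x, t) + λp(x, t) from this slide, with the time dependence dropped and a quadratic penalty for inequality constraints g_i(x) ≤ 0 (the quadratic form is an assumed, common choice):

```python
import numpy as np

def penalized_objective(f, constraints, lam):
    """Build F(x) = f(x) + lam * sum of squared constraint violations."""
    def F(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)  # g(x) <= 0 means feasible
        return f(x) + lam * violation
    return F

# Example: minimize sum(x^2) subject to x[0] >= 1, written as g(x) = 1 - x[0] <= 0.
F = penalized_objective(lambda x: float(np.sum(x ** 2)), [lambda x: 1.0 - x[0]], lam=100.0)
```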
Constraint Handling Approaches(Cont.) • We can convert a constrained problem to an unconstrained one by defining the Lagrangian of the constrained problem • If the primal problem is convex, the dual problem can be defined and the resulting min-max problem solved
Constraint Handling Approaches(Cont.) • By changing the selection operator, infeasible solutions can be rejected, or a repair method can be applied to infeasible solutions
Constraint Handling Approaches(Cont.) • Boundary constraints are easily enforced by clamping offspring to remain within the given boundaries
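In practice the clamping is a one-liner, for example (assuming NumPy arrays for the offspring and the bounds):

```python
import numpy as np

def clamp(offspring, lower, upper):
    """Force every component of the offspring back inside its boundary constraints."""
    return np.clip(offspring, lower, upper)
```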
Multi-Objective Optimization • Converting the problem into a single-objective one • Weighted aggregation methods
Multi-Objective Optimization(Cont.) • This method defines an aggregate objective function as a weighted sum of the objectives • It is usually assumed that the weights ωk are non-negative and sum to one
Multi-Objective Optimization(Cont.) • There is no guarantee that different solutions will be found • A niching strategy can be used to find multiple solutions • It is difficult to get the best weight values, ωk, since these are problem-dependent
Multi-Objective Optimization(Cont.) • Vector evaluated DE is a population-based method for MOO • If K objectives have to be optimized, K sub-populations are used, where each sub-population optimizes one of the objectives • Sub-populations are organized in a ring topology • The best individual of sub-population Ck migrates to sub-population Ck+1, where it is used to produce the trial vectors for that population
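A sketch of the ring migration in vector evaluated DE; the detail of overwriting a randomly chosen individual in the next sub-population is an assumption:

```python
import numpy as np

def ring_migration(subpops, objectives):
    """Each sub-population k optimizes objectives[k]; its best individual
    migrates to sub-population (k + 1) mod K on the ring."""
    K = len(subpops)
    bests = []
    for k in range(K):
        fitness = [objectives[k](x) for x in subpops[k]]
        bests.append(subpops[k][int(np.argmin(fitness))])
    for k in range(K):
        nxt = (k + 1) % K
        victim = np.random.randint(len(subpops[nxt]))
        subpops[nxt][victim] = bests[k]   # migrated best produces trial vectors there
    return subpops
```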
Dynamic Environments • Assumptions • The number of peaks, nX, to be found is known, and these peaks are evenly distributed through the search space • Changes are small and gradual • DynDE uses multiple populations, with each population maintaining one of the peaks
Dynamic Environments(Cont.) • At each iteration, the best individuals of each pair of sub-populations are compared; if these global best positions are too close to one another, the sub-population with the worse global best solution is re-initialized
Dynamic Environments(Cont.) • The following diversity-increasing strategies can be used: • Re-initialize the sub-populations • Use quantum individuals: some of the individuals are re-initialized to random points inside a ball centered at the global best individual • Use Brownian individuals: some positions are re-initialized to random positions around the global best individual • Simply add noise to some individuals
Dynamic Environments(Cont.) • Initialization of Quantum Individuals
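A sketch of quantum-individual initialization: points are sampled uniformly inside a ball of radius r_cloud centered at the global best (the uniform-in-ball sampling detail is an assumption consistent with the slide):

```python
import numpy as np

def quantum_individuals(gbest, n, r_cloud):
    """Sample n points uniformly inside a ball of radius r_cloud around gbest."""
    gbest = np.asarray(gbest, dtype=float)
    dim = gbest.shape[0]
    directions = np.random.normal(size=(n, dim))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)   # random unit vectors
    radii = r_cloud * np.random.rand(n) ** (1.0 / dim)                # uniform in volume
    return gbest + directions * radii[:, None]
```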
Applications • Mostly applied to optimize functions defined over continuous-valued landscapes • Clustering • Controllers • Filter design • Image analysis • Integer programming • Model selection • NN training
References • Computational Intelligence: An Introduction, 2nd edition, Andries Engelbrecht, Wiley • Differential Evolution - A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces, Rainer Storn, Kenneth Price, 1995 • Particle Swarm Optimization and Differential Evolution Algorithms: Technical Analysis, Applications and Hybridization Perspectives, Swagatam Das, Ajith Abraham, and Amit Konar, Springer, 2008 • Differential Evolution homepage: http://www.icsi.berkeley.edu/~storn/code.html
Thanks for your attention. Any questions?