Lecture 11. Constraint Handling Learning objectives: Understand the methods for imposing and handling constraints in evolutionary optimization problems, organized into three broad categories
Outline • Review of the previous two lectures • Unconstrained optimization • Search bias and search operators • Search step size and search operators • Examples: Gaussian mutation, Cauchy mutation, self-adaptation, quadratic recombination • Different types of constraints • Different types of constraint handling techniques • The penalty function approach • Summary
Problem Formulation • The general problem we consider here can be described as: minimize f(x) subject to: gi(x) ≤ 0, i = 1, 2, …, m; hj(x) = 0, j = 1, 2, …, p, where x is the n-dimensional vector x = (x1, x2, …, xn); f(x) is the objective function; the gi(x) are the inequality constraints; the hj(x) are the equality constraints • Denote the whole search space by S • Denote the feasible search space by F • F ⊆ S: the feasible space lies within the whole search space S • The global optimum in F may not be the same as that in S
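The formulation above can be made concrete with a small sketch. The toy objective and constraints below are illustrative assumptions, not taken from the lecture; the point is how feasibility is checked against gi(x) ≤ 0 and hj(x) = 0.

```python
# Toy instance of the general formulation (illustrative, not from the lecture):
#   minimize  f(x) = x1^2 + x2^2
#   subject to g1(x) = 1 - x1 - x2 <= 0   (inequality constraint)
#              h1(x) = x1 - x2 = 0        (equality constraint)

def f(x):
    return x[0] ** 2 + x[1] ** 2

def g1(x):
    return 1.0 - x[0] - x[1]

def h1(x):
    return x[0] - x[1]

def is_feasible(x, eps=1e-6):
    # In practice the equality constraint is relaxed to |h(x)| <= eps,
    # since an exact zero is almost never hit by a real-valued search.
    return g1(x) <= 0.0 and abs(h1(x)) <= eps

print(is_feasible((0.5, 0.5)))   # on the boundary of F: feasible -> True
print(is_feasible((0.0, 0.0)))   # violates g1: infeasible -> False
```

Note that (0.5, 0.5) lies exactly on the constraint boundary g1(x) = 0, which still counts as feasible; boundary points are often where constrained optima sit.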
Different Types of Constraint Handling Techniques • The penalty function approach: It converts a constrained problem into an unconstrained one by penalizing constraint violations • The repair approach: It maps (repairs) an infeasible solution into a feasible one • The pure approach: It is called "pure" because it never searches the infeasible space; only feasible solutions are generated and examined • The separatist approach: It treats the objective function and the constraints separately during evolution, so there is no single fixed fitness value for an individual • The hybrid approach: It usually combines the evolutionary approach with another existing constraint-handling method
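The repair approach is easiest to see for simple box constraints, where an infeasible offspring can be projected back into the feasible region componentwise. This sketch and its bounds are illustrative assumptions; real repair operators can be much more involved (e.g. for combinatorial problems).

```python
# A minimal sketch of the repair approach for box constraints
# lower_i <= x_i <= upper_i (bounds chosen for illustration only):
# an infeasible offspring is mapped (repaired) into a feasible one
# by clipping each component onto its interval.

def repair(x, lower, upper):
    # Project each component x_i onto [lower_i, upper_i].
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]

child = [1.7, -0.4, 0.2]                     # infeasible offspring
fixed = repair(child, [0.0] * 3, [1.0] * 3)  # unit-cube bounds
print(fixed)                                  # -> [1.0, 0.0, 0.2]
```

A design choice hidden here: the repaired solution may replace the infeasible one in the population, or only be used for fitness evaluation (so-called Baldwinian vs. Lamarckian repair).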
The Penalty Function Approach: Introduction • The general form of the exterior penalty function, which turns the constrained problem into an unconstrained one: f′(x) = f(x) ± (Σi ri Gi(x) + Σj cj Hj(x)) where f′(x) is the new objective function to be minimized; f(x) is the original objective function; ± means "+" or "−" depending on whether we minimize or maximize; Gi(x) = [max(0, gi(x))]^β; Hj(x) = |hj(x)|^γ; β and γ are usually chosen as 1 or 2; ri and cj are the penalty factors (coefficients) New Objective Function = Original Objective Function + Penalty Factor × Degree of Constraint Violation
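The exterior penalty function above translates directly into code. In this sketch the penalty factors and the exponents β and γ are fixed to illustrative values; the lecture leaves them as design parameters.

```python
# Sketch of the exterior penalty function for minimization:
#   f'(x) = f(x) + sum_i r_i * G_i(x) + sum_j c_j * H_j(x)
# with G_i(x) = max(0, g_i(x))^beta and H_j(x) = |h_j(x)|^gamma.
# r, c, beta, gamma below are illustrative choices, not prescribed values.

def penalized_f(f, gs, hs, x, r=1.0, c=1.0, beta=2, gamma=2):
    # G_i(x) is zero whenever the inequality g_i(x) <= 0 holds.
    penalty = sum(r * max(0.0, g(x)) ** beta for g in gs)
    # H_j(x) is zero whenever the equality h_j(x) = 0 holds.
    penalty += sum(c * abs(h(x)) ** gamma for h in hs)
    return f(x) + penalty

# Toy problem: minimize x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
f = lambda x: x * x
g = lambda x: 1.0 - x

print(penalized_f(f, [g], [], 2.0))   # feasible point: no penalty -> 4.0
print(penalized_f(f, [g], [], 0.0))   # infeasible: 0 + max(0, 1)^2 -> 1.0
```

Note how the second call exposes the core difficulty: the infeasible point x = 0 gets a *lower* penalized value (1.0) than the feasible x = 2 (4.0), because the penalty factor r = 1 is too weak here. Balancing the penalty term against the objective term is exactly the issue the rest of the lecture addresses.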
The Penalty Function Approach: Overview • Static penalties • The penalty function is pre-defined and remains fixed during evolution • Dynamic penalties • The penalty function changes according to a pre-defined sequence (or function), which often depends on the generation number • Adaptive penalties and self-adaptive penalties • The penalty function changes adaptively according to the current or previous populations; there is no fixed sequence to follow
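The static/dynamic distinction can be sketched with a single constraint. The dynamic schedule (C·t)^α used below is one common illustrative choice; C and α are assumptions, not values given in the lecture.

```python
# Contrasting a static penalty with a dynamic penalty on the toy
# constraint g(x) = 1 - x <= 0 (violation degree = max(0, 1 - x)).

def violation(x):
    return max(0.0, 1.0 - x)

def static_penalty(x, r=10.0):
    # Static: the factor r is fixed for the whole run.
    return r * violation(x) ** 2

def dynamic_penalty(x, t, C=0.5, alpha=2.0):
    # Dynamic: the factor (C * t)^alpha grows with generation number t,
    # so early generations may roam the infeasible region cheaply,
    # while later generations are pushed hard toward feasibility.
    return (C * t) ** alpha * violation(x) ** 2

x = 0.5                            # infeasible point, violation = 0.5
print(static_penalty(x))           # same at every generation: 2.5
print(dynamic_penalty(x, t=1))     # mild early penalty: 0.0625
print(dynamic_penalty(x, t=10))    # severe late penalty: 6.25
```

Adaptive and self-adaptive penalties replace the fixed schedule t ↦ (C·t)^α with feedback from the population itself (e.g. raising the factor when the best individuals keep being infeasible), which is why no fixed sequence can be written down in advance.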
Summary • We mainly looked at numerical problems in this lecture • Constraints can be classified as linear vs. nonlinear and equality vs. inequality • There are different constraint handling techniques, some of which are not unique to evolutionary algorithms • We examined different penalty methods • Static penalty, dynamic penalty, adaptive penalty, self-adaptive penalty • The key is how to balance the penalty term against the objective term • References • T. Bäck, D. B. Fogel and Z. Michalewicz (eds), Handbook of Evolutionary Computation, IOP Publishing, 1997 (Sections C5.1–C5.6)