Optimization Techniques
Problem Description • Definition • Modeling • Solution algorithm
Problem Definition • Decision (independent) and dependent variables • Constraint functions • Objective functions
Solution Algorithms • Mathematical Techniques • Heuristic Techniques
Mathematical Algorithms • Calculus Methods • Linear Programming (LP) Method • Nonlinear Programming (NLP) Method • Dynamic Programming (DP) Method • Integer Programming Method
Calculus Methods • Lagrange Multipliers • Kuhn-Tucker conditions
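As a short worked illustration of the Lagrange multiplier idea (the function and constraint below are an assumed example, not from the slides), consider minimizing f(x, y) = x² + y² subject to x + y = 1:

```latex
% Minimize f(x, y) = x^2 + y^2 subject to x + y = 1
\[
\mathcal{L}(x, y, \lambda) = x^2 + y^2 + \lambda\,(x + y - 1),
\qquad
\begin{aligned}
  \partial_x \mathcal{L} &= 2x + \lambda = 0\\
  \partial_y \mathcal{L} &= 2y + \lambda = 0\\
  \partial_\lambda \mathcal{L} &= x + y - 1 = 0
\end{aligned}
\;\Rightarrow\;
x = y = \tfrac{1}{2},\quad \lambda = -1,\quad f_{\min} = \tfrac{1}{2}
\]
```

The Kuhn-Tucker conditions extend this stationarity requirement to inequality constraints by adding sign restrictions on the multipliers and complementary slackness conditions.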
Linear Programming (LP) Method LP is an optimization method in which both the objective function and the constraints are linear functions of the decision variables • Any LP problem can be stated as a minimization problem, since maximizing C(x) is equivalent to minimizing −C(x). • All constraints may be stated as equality type, because any inequality constraint of the form g_i(x) ≤ b_i can be transformed into the equality constraint g_i(x) + s_i = b_i by adding a nonnegative slack variable s_i ≥ 0. • All decision variables can be considered nonnegative, as any x_j unrestricted in sign can be written as x_j = x_j′ − x_j″, where x_j′ ≥ 0 and x_j″ ≥ 0.
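As a minimal sketch of how such a problem might be set up in code (the problem data below are an assumed toy example, not from the slides), SciPy's linprog solves small LPs directly:

```python
from scipy.optimize import linprog

# Minimize  c^T x = -x0 - 2*x1  (i.e., maximize x0 + 2*x1)
# subject to  x0 + x1 <= 4,  x0 + 3*x1 <= 6,  x0, x1 >= 0
c = [-1, -2]
A_ub = [[1, 1],
        [1, 3]]
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimal decision variables and objective value
```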
Nonlinear Programming (NLP) Method The objective function and/or the constraints are nonlinear functions of the decision variables. Solution methods of unconstrained problems • direct search (or non-gradient) methods • descent (or gradient) methods, e.g. the steepest descent method Solution methods of constrained problems • Direct methods: constraint approximation • Indirect methods: penalty function
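A minimal sketch of a constrained NLP solved with a gradient-based method (SciPy's SLSQP); the objective, constraint, and starting point are assumed for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Nonlinear objective: f(x) = (x0 - 1)^2 + (x1 - 2.5)^2
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# Nonlinear inequality constraint: x0^2 + x1^2 <= 9, written as g(x) >= 0
constraints = [{"type": "ineq", "fun": lambda x: 9.0 - x[0] ** 2 - x[1] ** 2}]

res = minimize(objective, x0=np.array([0.0, 0.0]),
               method="SLSQP", constraints=constraints)
print(res.x, res.fun)
```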
Dynamic Programming (DP) Method A mathematical technique used for multistage decision problems • Optimal decisions have to be made over some stages • The stages may be different times, different spaces, different levels, etc. • The output of each stage is the input to the next serial stage.
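A minimal DP sketch for one classic multistage decision problem, the 0/1 knapsack, where each stage decides whether one item is taken (the item data are assumed for illustration):

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack solved stage by stage: stage i decides whether item i is taken."""
    # best[w] = best value achievable with the items considered so far and weight limit w
    best = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # iterate weights downward so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[1, 2, 3], capacity=5))  # 220
```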
Integer Programming Method • If all decision variables are of integer type, the problem is addressed as an Integer Programming problem. • If some decision variables are of integer type while others are of non-integer type, the problem is known as a Mixed Integer Programming problem.
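A minimal sketch of a small integer program, assuming SciPy 1.9 or newer where linprog accepts an integrality argument (the problem data are illustrative):

```python
from scipy.optimize import linprog

# Minimize  -x0 - 2*x1  subject to  x0 + x1 <= 4,  x0, x1 >= 0 and integer
c = [-1, -2]
A_ub = [[1, 1]]
b_ub = [4]

# integrality=1 marks a decision variable as integer; a mixed integer problem
# would use a vector with 1 for integer variables and 0 for continuous ones
res = linprog(c, A_ub=A_ub, b_ub=b_ub, integrality=[1, 1])
print(res.x, res.fun)
```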
Heuristic Algorithms • Genetic Algorithm (GA), based on genetics and evolution • Simulated Annealing (SA), based on thermodynamic principles • Particle Swarm (PS), based on bird and fish movements • Tabu Search (TS), based on memory response • Ant Colony (AC), based on how ants behave
Genetic Algorithm (GA): Main Steps • Initialization of genetic algorithm • Selection • Mutation • Crossover • Termination
GA: Initialization • Individual solutions are randomly generated to form an initial population • The population size depends on the nature of the problem
GA: Selection • Roulette Wheel Selection • Rank Selection • Steady-State Selection • Elitism
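A minimal sketch of roulette-wheel (fitness-proportionate) selection, assuming a maximization problem with nonnegative fitness values:

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # fallback for floating-point round-off
```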
GA: Mutation • Bit string mutation • Flip Bit • Boundary • Non-Uniform • Uniform • Gaussian
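A minimal sketch of bit-string (flip-bit) mutation for a 0/1 chromosome; the mutation rate is an assumed parameter:

```python
import random

def bit_flip_mutation(chromosome, rate=0.01):
    """Flip each bit of a 0/1 chromosome independently with probability `rate`."""
    return [1 - gene if random.random() < rate else gene for gene in chromosome]
```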
GA: Crossover • One-point crossover • Two-point crossover • Cut and splice • Uniform Crossover and Half Uniform Crossover
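A minimal sketch of one-point crossover for two equal-length parent chromosomes:

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Cut both parents at the same random point and swap the tails."""
    point = random.randint(1, len(parent_a) - 1)
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b
```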
Termination • A solution is found that satisfies minimum criteria • Fixed number of generations reached • Allocated budget (computation time/money) reached • The highest-ranking solution's fitness has reached a plateau such that successive iterations no longer produce better results • Manual inspection • Combinations of the above
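Putting the steps together, a GA main loop might look like the sketch below; it reuses the selection, crossover, and mutation helpers sketched above, uses a fixed number of generations as the termination rule, and takes a problem-specific fitness function as an assumed input:

```python
import random

def genetic_algorithm(fitness, chrom_length=20, pop_size=50, generations=100):
    # Initialization: random bit-string population
    population = [[random.randint(0, 1) for _ in range(chrom_length)]
                  for _ in range(pop_size)]
    for _ in range(generations):            # termination: fixed number of generations
        fitnesses = [fitness(ind) for ind in population]
        next_gen = [max(population, key=fitness)]           # elitism: keep the best
        while len(next_gen) < pop_size:
            a = roulette_wheel_select(population, fitnesses)  # selection
            b = roulette_wheel_select(population, fitnesses)
            child, _ = one_point_crossover(a, b)              # crossover
            next_gen.append(bit_flip_mutation(child))         # mutation
        population = next_gen
    return max(population, key=fitness)

# Example: maximize the number of ones in the bit string
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```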
Simulated Annealing (SA) Step 1: Initialize – Start with a random initial placement and initialize a very high "temperature". Step 2: Move – Perturb the placement through a defined move. Step 3: Calculate score – Calculate the change in the score due to the move made. Step 4: Choose – Depending on the change in score, accept or reject the move; the probability of acceptance depends on the current "temperature". Step 5: Update and repeat – Lower the temperature and go back to Step 2. The process continues until the "freezing point" is reached.
SA: main parameters • Initial temperature T_0 • Number of transitions performed at each temperature level (N_k) • Final temperature T_f (as the stopping criterion) • Cooling sequence, given by T_{k+1} = g(T_k) · T_k, where g(T_k) is a function that controls the temperature
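A minimal SA sketch built around these parameters, assuming a minimization problem, a user-supplied neighbor move, and a geometric cooling schedule with constant g:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=100.0, tf=1e-3, n_per_temp=50, g=0.95):
    x, t = x0, t0
    best = x
    while t > tf:                                    # stop at the "freezing point" T_f
        for _ in range(n_per_temp):                  # N_k transitions per temperature level
            candidate = neighbor(x)
            delta = cost(candidate) - cost(x)
            # accept improving moves always, worsening moves with probability exp(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = candidate
                if cost(x) < cost(best):
                    best = x
        t *= g                                       # cooling: T_{k+1} = g * T_k
    return best

# Example: minimize a 1-D function with a simple random-step neighborhood
result = simulated_annealing(cost=lambda x: (x - 3) ** 2,
                             neighbor=lambda x: x + random.uniform(-1, 1),
                             x0=0.0)
print(result)
```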
Particle Swarm (PS) • For each particle i = 1, ..., S do: • Initialize the particle's position with a random vector: x_i ~ U(b_lo, b_up), where b_lo and b_up are the lower and upper boundaries of the search space • Initialize the particle's best known position to its initial position: p_i ← x_i • If f(p_i) < f(g), update the swarm's best known position: g ← p_i • Initialize the particle's velocity: v_i ~ U(−|b_up − b_lo|, |b_up − b_lo|) • Until a termination criterion is met (e.g. number of iterations performed, or a solution with adequate objective function value is found), repeat:
Particle Swarm (PS) • For each particle i = 1, ..., S do: • Pick random numbers: r_p, r_g ~ U(0, 1) • For each dimension d = 1, ..., n, update the particle's velocity: v_{i,d} ← ω v_{i,d} + φ_p r_p (p_{i,d} − x_{i,d}) + φ_g r_g (g_d − x_{i,d}) • Update the particle's position: x_i ← x_i + v_i • If f(x_i) < f(p_i): update the particle's best known position, p_i ← x_i • If f(p_i) < f(g), update the swarm's best known position: g ← p_i • Now g holds the best found solution. The parameters ω, φ_p, and φ_g are selected by the practitioner and control the behavior and efficacy of the PSO method.
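A compact PSO sketch following these steps; the objective function, boundaries, and parameter values below are assumed for illustration:

```python
import random

def pso(f, dim, b_lo, b_up, swarm_size=30, iters=200, w=0.7, phi_p=1.5, phi_g=1.5):
    span = b_up - b_lo
    x = [[random.uniform(b_lo, b_up) for _ in range(dim)] for _ in range(swarm_size)]
    v = [[random.uniform(-span, span) for _ in range(dim)] for _ in range(swarm_size)]
    p = [xi[:] for xi in x]                          # personal best positions
    g = min(p, key=f)[:]                             # swarm best position
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                rp, rg = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + phi_p * rp * (p[i][d] - x[i][d])
                           + phi_g * rg * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            if f(x[i]) < f(p[i]):
                p[i] = x[i][:]
                if f(p[i]) < f(g):
                    g = p[i][:]
    return g

# Example: minimize the sphere function in 2 dimensions
print(pso(lambda x: sum(c * c for c in x), dim=2, b_lo=-5.0, b_up=5.0))
```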
Tabu Search (TS) • Generate an initial solution • Select a move • Update the solution: the next solution is chosen from the list of neighbors that are either desirable (aspirants) or not tabu, and for which the objective function is best • Repeat the process until a stopping rule is met
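A minimal TS sketch, assuming a minimization problem, a user-supplied neighborhood function, and a fixed-length tabu list of recently visited solutions as the short-term memory:

```python
from collections import deque

def tabu_search(cost, neighbors, x0, max_iters=200, tabu_size=10):
    current = best = x0
    tabu = deque([x0], maxlen=tabu_size)             # short-term memory of recent solutions
    for _ in range(max_iters):
        candidates = [n for n in neighbors(current)
                      if n not in tabu or cost(n) < cost(best)]   # aspiration criterion
        if not candidates:
            break
        current = min(candidates, key=cost)          # best admissible neighbor
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best

# Example: minimize |x - 7| over integers with a unit-step neighborhood
print(tabu_search(cost=lambda x: abs(x - 7),
                  neighbors=lambda x: [x - 1, x + 1],
                  x0=0))
```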
Ant Colony (AC) • Initialization, in which the problem variables are encoded and an initial population is generated randomly within the feasible region. The ants crawl in different directions at a radius not greater than R. • Evaluation, in which the objective function is calculated for all ants. • Trail adding, in which a trail quantity is added for each ant, in proportion to its calculated objective function (the so-called fitness). • Ants sending, in which the ants are sent to their next nodes according to the trail density and visibility.
AC • We have already described trail density as the pheromone that is deposited. The ants are not completely blind and move to some extent based on node visibilities. These two actions resemble the intensification and diversification steps of the PS and TS algorithms, helping to avoid trapping in local optimum points. • Evaporation, in which the trail deposited by an ant gradually evaporates and the starting point is updated with the best combination found. The steps are repeated until a stopping criterion is met.
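A compact sketch of these steps applied to a small travelling-salesman instance; the distance matrix and parameter values are assumed for illustration:

```python
import random

def ant_colony(dist, n_ants=10, iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]              # initial pheromone (trail) levels
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):                      # ants sending: build one tour per ant
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                # choice probability ~ trail density^alpha * visibility^beta
                weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
                           for j in choices]
                tour.append(random.choices(choices, weights=weights)[0])
            tours.append(tour)
        for i in range(n):                           # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:                           # trail adding, proportional to fitness
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            for k in range(n):
                tau[tour[k]][tour[(k + 1) % n]] += q / length
            if length < best_len:
                best_tour, best_len = tour, length
    return best_tour, best_len

# Example: 4 cities with a symmetric distance matrix
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(ant_colony(dist))
```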