Unconstrained Optimization
• Objective: Find the minimum of F(X), where X is a vector of design variables
• We may know lower and upper bounds for the optimum
• No constraints
Outline
• General optimization strategy
• Optimization of second-degree polynomials
• Zero-order methods
  • Random search
  • Powell’s method
• First-order methods
  • Steepest descent
  • Conjugate gradient
• Second-order methods
General optimization strategy
• Start with an initial design x0 and set q = 0
• Repeat: q = q + 1
  • Pick a search direction Sq
  • One-dimensional search: xq = xq-1 + αq* Sq
• Converged? If yes, exit; otherwise repeat
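This loop can be written compactly in code. Below is a minimal Python sketch of the general strategy; the objective F, the starting point x0, and the pick_direction helper are placeholders for whatever specific method is plugged in, and the one-dimensional search relies on scipy.optimize.minimize_scalar.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def general_descent(F, x0, pick_direction, tol=1e-8, max_iter=100):
    """Generic strategy: pick a search direction, do a 1-D search, repeat.
    pick_direction(F, x) is a placeholder, e.g. the negative gradient
    or a direction chosen by a zero-order method."""
    x = np.asarray(x0, dtype=float)
    for q in range(1, max_iter + 1):
        S = pick_direction(F, x)                      # search direction Sq
        # one-dimensional search for alpha* minimizing F(x + alpha*S)
        alpha = minimize_scalar(lambda a: F(x + a * S)).x
        x_new = x + alpha * S                         # xq = xq-1 + alpha* Sq
        if np.linalg.norm(x_new - x) < tol:           # convergence check
            return x_new
        x = x_new
    return x
```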
Optimization of second-degree polynomials
• Quadratic: F(X) = a11x1² + a12x1x2 + … + annxn² = {X}ᵀ[A]{X}
• [A] is equal to one half the Hessian matrix [H]
• There is a linear transformation {X} = [S]{Y} such that F(Y) = λ1y1² + … + λnyn² (no coupling terms)
• [S]: its columns are the eigenvectors of [A], S1, …, Sn
• S1, …, Sn are also eigenvectors of [H]
Optimization of second-degree polynomials
• Define the conjugate directions S1, …, Sn
• S1, …, Sn are orthogonal (i.e., their dot products are zero) because matrix [A] is symmetric
• Note that conjugate directions are also linearly independent. Orthogonality is the stronger property: orthogonal vectors are always linearly independent, but linearly independent vectors are not necessarily orthogonal.
• λi: eigenvalues of [A], which are equal to one half of the eigenvalues of the Hessian matrix
Optimization of second-degree polynomials
• We can find the exact minimum of a second-degree polynomial by performing n one-dimensional searches along the conjugate directions S1, …, Sn
• If all eigenvalues of [A] are positive, the second-degree polynomial has a unique minimum
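A small NumPy sketch of this property: for an illustrative quadratic F(X) = {X}ᵀ[A]{X} + bᵀ{X} (the linear term b is an added assumption so the minimum is not simply at the origin), n exact one-dimensional searches along the eigenvectors of [A], which are conjugate directions, land exactly on the minimum.

```python
import numpy as np

# Illustrative quadratic F(X) = {X}^T [A] {X} + b^T {X}; A symmetric positive definite.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([-1.0, -2.0])
F = lambda x: x @ A @ x + b @ x

# Conjugate directions S1, ..., Sn = eigenvectors of [A]
_, S = np.linalg.eigh(A)

x = np.zeros(2)                        # starting point
for i in range(len(x)):
    s = S[:, i]
    # exact 1-D minimization of F(x + alpha*s): set dF/dalpha = 0
    alpha = -(2 * x @ A @ s + b @ s) / (2 * s @ A @ s)
    x = x + alpha * s

print(x)                               # result of the n searches
print(np.linalg.solve(2 * A, -b))      # analytic minimum, -(2A)^-1 b
```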
Zero-order methods: random search
• Random number generator: generates a sample of values of the variables drawn from a specified probability distribution. Available in most programming languages.
• Idea: For F(x1, …, xn), generate random n-tuples {x1(1), …, xn(1)}, {x1(2), …, xn(2)}, …, {x1(N), …, xn(N)} and keep the one with the minimum objective value.
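A minimal random-search sketch, assuming simple uniform sampling inside known lower and upper bounds; the bounds, sample size N, and test function below are illustrative choices.

```python
import numpy as np

def random_search(F, lower, upper, N=10000, seed=0):
    """Zero-order random search: sample N points uniformly in the
    box [lower, upper] and keep the best one."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(N):
        x = rng.uniform(lower, upper)          # random n-tuple {x1, ..., xn}
        f = F(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Example: minimize the Rosenbrock function on the box [-2, 2] x [-2, 2]
F = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(random_search(F, [-2, -2], [2, 2]))
```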
Powell’s method
• Efficient, reliable, popular
• Based on conjugate directions, although it does not use the Hessian matrix
Searching for the optimum in Powell’s method
• First iteration: searches along S1–S3
• Second iteration: searches along S4–S6
• Directions S3 and S6 are conjugate
• Present iteration: use the last two search directions from the previous iteration
(Figure: search directions S1 through S6.)
Powell’s method: algorithm (one iteration = n+1 one-dimensional searches)
• Start from x0; define a set of n search directions Sq = coordinate unit vectors, q = 1, …, n; set x = x0, y = x, q = 0
• For q = 1, …, n:
  • Find α* to minimize F(xq-1 + α* Sq)
  • xq = xq-1 + α* Sq
• Find the conjugate direction Sn+1 = xn − y
• Find α* to minimize F(xn + α* Sn+1); set xn+1 = xn + α* Sn+1
• Converged? If yes, exit
• Otherwise update the search directions (Sq = Sq+1, q = 1, …, n), set y = xn+1, and start the next iteration
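A compact Python sketch of this algorithm, without the safeguards found in production implementations (for example, the direction-replacement test used in library versions of Powell’s method); the one-dimensional searches again use scipy.optimize.minimize_scalar, and the convergence test on the length of xn − y is a simplifying assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def powell(F, x0, tol=1e-8, max_iter=50):
    """Basic Powell iteration: n searches along the current direction set,
    then one search along the conjugate direction S_{n+1} = xn - y."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    S = list(np.eye(n))                        # start with coordinate unit vectors
    for _ in range(max_iter):
        y = x.copy()                           # design at the start of the iteration
        for s in S:                            # n one-dimensional searches
            alpha = minimize_scalar(lambda a: F(x + a * s)).x
            x = x + alpha * s
        s_new = x - y                          # conjugate direction S_{n+1}
        if np.linalg.norm(s_new) < tol:        # converged
            return x
        alpha = minimize_scalar(lambda a: F(x + a * s_new)).x
        x = x + alpha * s_new                  # (n+1)-th search of the iteration
        S = S[1:] + [s_new]                    # update directions: drop S1, append S_{n+1}
    return x
```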
Powell’s method
• For a second-degree polynomial, the optimum is reached in n iterations
• Each iteration involves n+1 one-dimensional searches
• n(n+1) one-dimensional searches in total
First-order methods: steepest descent
• Idea: Search in the direction of the negative gradient, −∇F
• Starting from a design x, move by a small amount; the objective function decreases most rapidly along the direction of −∇F
Steepest descent: algorithm
• Start from x0
• Determine the steepest descent direction S = −∇F(x)
• Perform a one-dimensional minimization in the steepest descent direction: find α* to minimize F(x + α* S)
• Update the design: x = x + α* S
• Converged? If yes, stop; otherwise repeat
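A minimal steepest-descent sketch, assuming the gradient is available analytically; the quadratic test problem reuses the illustrative [A] and b from the earlier sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(F, gradF, x0, tol=1e-6, max_iter=500):
    """Steepest descent: search along S = -grad F(x), with an exact
    one-dimensional minimization at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        S = -gradF(x)                          # steepest descent direction
        if np.linalg.norm(S) < tol:            # gradient small enough: stop
            return x
        alpha = minimize_scalar(lambda a: F(x + a * S)).x
        x = x + alpha * S                      # update design
    return x

# Example on the quadratic F(x) = x^T A x + b^T x used above (illustrative values)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
F = lambda x: x @ A @ x + b @ x
gradF = lambda x: 2 * A @ x + b
print(steepest_descent(F, gradF, np.zeros(2)))
```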
Steepest descent
• Pros: Easy to implement, robust, makes quick progress at the beginning of the optimization
• Cons: Becomes too slow toward the end of the optimization