Learn about unconstrained multivariable optimization techniques, including methods based on function values, derivatives, and search directions, to minimize functions effectively using analytical and numerical approaches. Explore strategies for gradient-based methods such as steepest descent and conjugate gradient, including termination criteria and conjugate search directions. Discover more advanced methods such as Fletcher-Reeves and Marquardt's, and develop an understanding of the BFGS update formula and other efficient approaches that optimize functions without needing to invert the Hessian matrix.
Chapter 6: UNCONSTRAINED MULTIVARIABLE OPTIMIZATION
6.1 Function Values Only
6.2 First Derivatives of f (gradient and conjugate direction methods)
6.3 Second Derivatives of f (e.g., Newton's method)
6.4 Quasi-Newton Methods
General Strategy for Gradient Methods
(1) Calculate a search direction
(2) Select a step length in that direction to reduce f(x)

Steepest Descent Search Direction
s^k = -∇f(x^k)
The direction does not need to be normalized. The method terminates at any stationary point. Why?
Because ∇f(x) = 0 at any stationary point, the procedure can stop at a saddle point. One must show that the Hessian H(x*) is positive definite to confirm a minimum (see the summary below).

Step Length
How to pick α:
• analytically
• numerically
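For reference, the two conditions just mentioned can be written compactly (a standard result):

```latex
\nabla f(\mathbf{x}^{*}) = \mathbf{0}
\quad \text{(stationary point: minimum, maximum, or saddle)}
\qquad\text{and}\qquad
\mathbf{H}(\mathbf{x}^{*}) = \nabla^{2} f(\mathbf{x}^{*}) \succ 0
\quad \text{(positive definite } \Rightarrow \text{ local minimum)}
```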
Analytical Method
How does one minimize a function in a search direction using an analytical method? It means s is fixed and you want to pick α, the step length, to minimize f(x + αs). Note that for a quadratic approximation of f about x,

f(x + αs) ≈ f(x) + ∇f(x)ᵀs α + ½ sᵀH(x)s α²

so setting df/dα = 0 gives

α = -∇f(x)ᵀs / (sᵀH(x)s)    (6.9)

This yields a minimum of the approximating function.
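To make this concrete, here is a minimal steepest-descent sketch that uses the analytical step length on a quadratic f(x) = ½xᵀQx + cᵀx; the matrix Q, vector c, and starting point are illustrative choices, not from the text:

```python
import numpy as np

def steepest_descent_quadratic(Q, c, x0, tol=1e-8, max_iter=500):
    """Steepest descent on f(x) = 0.5*x^T Q x + c^T x with the analytical step length."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = Q @ x + c                    # gradient of the quadratic
        if np.linalg.norm(g) < tol:      # stop at a stationary point
            break
        s = -g                           # steepest descent direction
        alpha = -(g @ s) / (s @ Q @ s)   # Eq. (6.9): exact minimizer along s
        x = x + alpha * s
    return x

# Illustrative problem; the true minimum satisfies Q x = -c
Q = np.array([[4.0, 2.0], [2.0, 3.0]])
c = np.array([-1.0, -2.0])
print(steepest_descent_quadratic(Q, c, [2.0, 2.0]))  # compare with np.linalg.solve(Q, -c)
```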
Numerical Method
Use a coarse search first:
(1) fixed α (α = 1), or variable α (α = 1, 2, ½, etc.)

Options for optimizing α:
(1) Interpolation (quadratic, cubic)
(2) Region elimination (golden section search)
(3) Newton, secant, quasi-Newton methods
(4) Random search
(5) Analytical optimization

(1), (3), and (5) are preferred. However, it may not be desirable to optimize α exactly (better to generate new search directions).
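As an illustration of option (2), here is a minimal golden-section sketch for the step length along a fixed direction; the bracket [0, 2] and the tolerance are illustrative assumptions:

```python
import numpy as np

def golden_section_alpha(f, x, s, a=0.0, b=2.0, tol=1e-5):
    """Minimize f(x + alpha*s) for alpha in [a, b] by golden-section search."""
    phi = lambda alpha: f(x + alpha * s)       # objective along the search direction
    r = (np.sqrt(5.0) - 1.0) / 2.0             # golden-ratio factor, about 0.618
    c, d = b - r * (b - a), a + r * (b - a)
    while (b - a) > tol:
        if phi(c) < phi(d):                    # minimum lies in [a, d]
            b, d = d, c
            c = b - r * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + r * (b - a)
    return 0.5 * (a + b)
```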
Example: suppose we calculate the gradient at the point xᵀ = [2 2].
Termination Criteria
A big change in f(x) but little change in x: the code will stop prematurely if Δx is the sole criterion.
A big change in x but little change in f(x): the code will stop prematurely if Δf is the sole criterion.
For minimization you can use up to three criteria for termination:
(1) |f(x^{k+1}) - f(x^k)| / |f(x^k)| < ε1
(2) ||x^{k+1} - x^k|| / ||x^k|| < ε2
(3) ||∇f(x^{k+1})|| < ε3
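A minimal sketch of how the three tests might be combined so that no single criterion stops the search prematurely; the tolerance names and the safeguards against division by zero are illustrative:

```python
import numpy as np

def converged(f_old, f_new, x_old, x_new, grad_new,
              eps1=1e-8, eps2=1e-8, eps3=1e-6):
    """Return True only if all three termination criteria are satisfied."""
    small_df = abs(f_new - f_old) <= eps1 * max(abs(f_old), 1.0)        # (1) change in f
    small_dx = (np.linalg.norm(np.subtract(x_new, x_old))
                <= eps2 * max(np.linalg.norm(x_old), 1.0))              # (2) change in x
    small_g = np.linalg.norm(grad_new) <= eps3                          # (3) gradient norm
    return small_df and small_dx and small_g
```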
Conjugate Search Directions
• Improvement over the gradient method for general quadratic functions
• Basis for many NLP techniques
• Two search directions s^i and s^j are conjugate relative to Q if (s^i)ᵀ Q s^j = 0
• To minimize f(x), with x an n×1 vector, when H is a constant matrix (= Q), you are guaranteed to reach the optimum in n conjugate-direction stages if you minimize exactly at each stage (one-dimensional search); a small worked example follows.
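A small worked example with an arbitrarily chosen Q, showing how a direction conjugate to a given one is found:

```latex
\mathbf{Q} = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix},\qquad
\mathbf{s}^{1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}
\;\Longrightarrow\;
(\mathbf{s}^{1})^{T}\mathbf{Q}\,\mathbf{s}^{2}
= \begin{bmatrix} 4 & 2 \end{bmatrix}\mathbf{s}^{2} = 0
\;\Longrightarrow\;
\mathbf{s}^{2} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}
\quad (4\cdot 1 + 2\cdot(-2) = 0).
```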
Conjugate Gradient Method
Step 1. At x^0 calculate f(x^0) and let s^0 = -∇f(x^0).
Step 2. Find x^1 = x^0 + α^0 s^0 by minimizing f(x) with respect to α in the s^0 direction (i.e., carry out a unidimensional search for α^0).
Step 3. Calculate ∇f(x^1). The new search direction is a linear combination of s^0 and ∇f(x^1); for the kth iteration the relation is

s^{k+1} = -∇f(x^{k+1}) + s^k [∇f(x^{k+1})ᵀ ∇f(x^{k+1})] / [∇f(x^k)ᵀ ∇f(x^k)]    (6.6)

For a quadratic function it can be shown that these successive search directions are conjugate. After n iterations (k = n), the quadratic function is minimized. For a nonquadratic function, the procedure cycles again with x^{n+1} becoming x^0.
Step 4. Test for convergence to the minimum of f(x). If convergence is not attained, return to step 3.
Step n. Terminate the algorithm when ||s^k|| is less than some prescribed tolerance.
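A minimal sketch of these steps in code, using Eq. (6.6) for the direction update and an exact 1-D search via scipy; the restart-every-n-iterations loop and the test function are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def conjugate_gradient(f, grad, x0, tol=1e-6, max_cycles=50):
    """Fletcher-Reeves style conjugate gradient with a unidimensional search at each step."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_cycles):               # for nonquadratic f, restart after n iterations
        g = grad(x)
        s = -g                                # Step 1: start from the steepest descent direction
        for _ in range(n):
            if np.linalg.norm(g) < tol:       # Step n: stop when the gradient is small
                return x
            alpha = minimize_scalar(lambda a: f(x + a * s)).x   # Step 2: 1-D search for alpha
            x_new = x + alpha * s
            g_new = grad(x_new)
            w = (g_new @ g_new) / (g @ g)     # weighting factor from Eq. (6.6)
            s = -g_new + w * s                # Step 3: new conjugate direction
            x, g = x_new, g_new
    return x

# Hypothetical quadratic test: f(x) = x1^2 + 2*x2^2, minimum at the origin
f = lambda x: x[0]**2 + 2.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
print(conjugate_gradient(f, grad, [2.0, 2.0]))
```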
Example: minimize f(x) by the method of conjugate gradients, starting from the initial point x^0. In vector notation, for steepest descent, s^0 = -∇f(x^0).

Steepest Descent Step (1-D Search)
The objective function can be expressed as a function of α^0; minimizing f(α^0), we obtain f = 3.1594 at α^0 = 0.0555. Hence x^1 = x^0 + α^0 s^0.
Calculate Weighting of Previous Step
The new gradient ∇f(x^1) can now be determined, and w^0 can be computed as w^0 = ∇f(x^1)ᵀ∇f(x^1) / ∇f(x^0)ᵀ∇f(x^0).

Generate New (Conjugate) Search Direction
s^1 = -∇f(x^1) + w^0 s^0

One-Dimensional Search
Solving for α^1 as before [i.e., expressing f(x^1 + α^1 s^1) as a function of α^1 and minimizing with respect to α^1] yields f = 5.91 × 10^-10 at α^1 = 0.4986. Hence x^2 = x^1 + α^1 s^1, which is the optimum (reached in 2 steps, in agreement with the theory).
Fletcher–Reeves Conjugate Gradient Method
Derivation:
Solving for the weighting factor gives the result shown below.
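The weighting factor the derivation arrives at is the standard Fletcher–Reeves expression, reconstructed here in the notation of Eq. (6.6):

```latex
w^{k} \;=\; \frac{\nabla f(\mathbf{x}^{k+1})^{T}\,\nabla f(\mathbf{x}^{k+1})}
                 {\nabla f(\mathbf{x}^{k})^{T}\,\nabla f(\mathbf{x}^{k})}
```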
Marquardt's Method
The algorithm is stated as Steps 1–9: choose a starting point and a large initial λ, test the gradient for convergence, solve (H(x^k) + λ^k I) s^k = -∇f(x^k) for the step, then decrease λ after a successful step (f decreased) or increase λ after an unsuccessful one; a sketch follows.
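A minimal sketch of Marquardt's idea, assuming the standard (H + λI) modification of Newton's method; the initial λ and the update factors are illustrative choices, not values from the text:

```python
import numpy as np

def marquardt(f, grad, hess, x0, lam=1e3, tol=1e-6, max_iter=200):
    """Marquardt-style modified Newton method: solve (H + lam*I) s = -grad(f)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:              # convergence test on the gradient
            break
        H = hess(x)
        s = np.linalg.solve(H + lam * np.eye(len(x)), -g)
        if f(x + s) < f(x):                      # successful step: accept it, trust Newton more
            x = x + s
            lam *= 0.25
        else:                                    # failed step: lean toward steepest descent
            lam *= 4.0
    return x

# Illustrative use on f(x) = x1^2 + x2^2 + x1*x2
f = lambda x: x[0]**2 + x[1]**2 + x[0] * x[1]
grad = lambda x: np.array([2.0 * x[0] + x[1], 2.0 * x[1] + x[0]])
hess = lambda x: np.array([[2.0, 1.0], [1.0, 2.0]])
print(marquardt(f, grad, hess, [3.0, -1.0]))     # expect a result near [0, 0]
```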
Secant Methods
Recall that for a one-dimensional search the secant method uses only values of f(x) and f′(x).
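In n dimensions the analogous idea, which underlies the quasi-Newton updates discussed next, is the secant condition (standard form):

```latex
\widetilde{\mathbf{H}}^{\,k+1}\,\Delta\mathbf{x}^{k} \;=\; \Delta\mathbf{g}^{k},
\qquad
\Delta\mathbf{x}^{k} = \mathbf{x}^{k+1}-\mathbf{x}^{k},\quad
\Delta\mathbf{g}^{k} = \nabla f(\mathbf{x}^{k+1})-\nabla f(\mathbf{x}^{k})
```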
• Probably the best update formula is the BFGS update (Broyden–Fletcher–Goldfarb–Shanno), ca. 1970
• BFGS is the basis for the unconstrained optimizer in the Excel Solver
• It does not require inverting the Hessian matrix, but instead approximates the inverse using values of Δx^k and Δg^k accumulated during the search; a sketch follows.
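A minimal sketch of the BFGS inverse-Hessian update inside a simple quasi-Newton loop; the backtracking line search and the test function are illustrative assumptions, not part of the text:

```python
import numpy as np

def bfgs_inverse_update(Hinv, dx, dg):
    """One BFGS update of the inverse-Hessian approximation.

    dx = x_{k+1} - x_k, dg = grad f(x_{k+1}) - grad f(x_k).
    """
    rho = 1.0 / (dg @ dx)
    I = np.eye(len(dx))
    A = I - rho * np.outer(dx, dg)
    return A @ Hinv @ A.T + rho * np.outer(dx, dx)

def bfgs_minimize(f, grad, x0, tol=1e-6, max_iter=200):
    """Quasi-Newton (BFGS) loop with a crude backtracking line search."""
    x = np.asarray(x0, dtype=float)
    Hinv = np.eye(len(x))                        # start from the identity approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        s = -Hinv @ g                            # quasi-Newton search direction
        alpha = 1.0
        while f(x + alpha * s) >= f(x) and alpha > 1e-10:
            alpha *= 0.5                         # backtrack until f decreases
        x_new = x + alpha * s
        g_new = grad(x_new)
        Hinv = bfgs_inverse_update(Hinv, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x

# Hypothetical test: f(x) = (x1 - 1)^2 + 10*(x2 + 2)^2, minimum at [1, -2]
f = lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
print(bfgs_minimize(f, grad, [0.0, 0.0]))
```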