
A discussion on the SQP and rSQP methodologies


Presentation Transcript


  1. A discussion on the SQP and rSQP methodologies Kedar Kulkarni Advisor: Prof. Andreas A. Linninger Laboratory for Product and Process Design, Department of Bioengineering, University of Illinois, Chicago, IL 60607, U.S.A.

  2. Motivation
  • To understand SQP and rSQP in order to use them effectively to solve TKIPs
  • To incorporate the reduced-space range and null space decomposition as a modification to the present methodology for solving the TKIP

  3. Outline of the talk • Successive Quadratic Programming (SQP) as a method to solve a general Nonlinear Programming (NLP) problem • The rSQP as an improvement over the SQP - Introduction to rSQP (Derivation) - Case study • Recap

  4. SQP:
  • Solves a sequence of QP approximations to an NLP problem
  • The objective is a quadratic approximation to the Lagrangian function
  • The algorithm is simply Newton's method applied to solve the set of equations obtained on applying the KKT conditions!
  Consider the general constrained optimization problem and its KKT conditions (written out below):
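The formulation appears only as an image in the original slide; a standard statement of the constrained NLP and its KKT conditions, consistent with the notation used in the following slides (x*, \lambda^*, \nu^*), is:

    \min_{x \in \mathbb{R}^n} \; f(x) \quad \text{s.t.} \quad h(x) = 0, \quad g(x) \le 0

    \nabla f(x^*) + \nabla h(x^*)\,\lambda^* + \nabla g(x^*)\,\nu^* = 0, \qquad h(x^*) = 0, \qquad g(x^*) \le 0, \qquad \nu^* \ge 0, \qquad \nu^{*T} g(x^*) = 0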

  5. SQP (contd.):
  • Considering this as a system of equations in x*, \lambda^* and \nu^*, we write the following Newton step
  • These equations are the KKT conditions of the following optimization problem!
  • This is a QP problem; its solution is determined by the properties of the Hessian of the Lagrangian
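The Newton step and the equivalent QP are images in the original; written for the equality-constrained case (the form carried into the rSQP derivation), the standard expressions are:

    \begin{bmatrix} \nabla^2_{xx} L(x_i,\lambda_i) & \nabla h(x_i) \\ \nabla h(x_i)^T & 0 \end{bmatrix} \begin{bmatrix} d \\ \Delta\lambda \end{bmatrix} = - \begin{bmatrix} \nabla f(x_i) + \nabla h(x_i)\,\lambda_i \\ h(x_i) \end{bmatrix}

which are the KKT conditions of the QP

    \min_{d} \; \nabla f(x_i)^T d + \tfrac{1}{2}\, d^T \nabla^2_{xx} L(x_i,\lambda_i)\, d \quad \text{s.t.} \quad h(x_i) + \nabla h(x_i)^T d = 0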

  6. SQP (contd.):
  • The Hessian is not always positive definite => non-convex QP, which is not desirable
  • Remedy: at each iteration approximate the Hessian with a matrix that is symmetric and positive definite
  • This is a quasi-Newton secant approximation
  • B_{i+1} as a function of B_i is given by the BFGS update:
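The update formula is an image in the original; the standard BFGS formula used in SQP, with the difference of Lagrangian gradients, is:

    B_{i+1} = B_i - \frac{B_i s_i s_i^T B_i}{s_i^T B_i s_i} + \frac{y_i y_i^T}{y_i^T s_i}, \qquad s_i = x_{i+1} - x_i, \qquad y_i = \nabla_x L(x_{i+1},\lambda_{i+1}) - \nabla_x L(x_i,\lambda_{i+1})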

  7. Suitable modification:
  • Choose the step length \alpha to ensure progress towards the optimum
  • \alpha is chosen by making sure that a merit function is decreased at each iteration: exact penalty function, augmented Lagrangian
  Exact penalty function:
  • Newton-like convergence properties of SQP:
    - Fast local convergence
    - Fewer function evaluations
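The merit function on the slide is an image; a common choice of exact (\ell_1) penalty function, assuming a sufficiently large penalty parameter \mu, is

    \phi(x; \mu) = f(x) + \mu \left( \| h(x) \|_1 + \| \max(0,\, g(x)) \|_1 \right)

and the step length \alpha is accepted only if \phi(x_i + \alpha d_i; \mu) is sufficiently smaller than \phi(x_i; \mu).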

  8. Basic SQP algorithm:
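The algorithm chart is an image in the original. Purely as a minimal sketch, assuming an equality-constrained problem, a dense KKT solve per iteration, a BFGS Hessian approximation and an \ell_1 merit line search (the function and variable names below are illustrative, not from the slides):

import numpy as np

def sqp(f, grad_f, h, jac_h, x0, tol=1e-8, max_iter=100, mu=10.0):
    """Minimal equality-constrained SQP sketch: dense KKT solve per iteration,
    BFGS Hessian approximation, backtracking line search on an l1 exact-penalty
    merit function."""
    x = np.asarray(x0, dtype=float)
    n, m = x.size, h(x).size
    B = np.eye(n)                                   # SPD Hessian approximation

    def merit(z):                                   # l1 exact-penalty merit function
        return f(z) + mu * np.abs(h(z)).sum()

    for _ in range(max_iter):
        g, A, c = grad_f(x), jac_h(x), h(x)
        # QP sub-problem KKT system: [B A^T; A 0] [d; lam] = [-g; -c]
        K = np.block([[B, A.T], [A, np.zeros((m, m))]])
        d_lam = np.linalg.solve(K, -np.concatenate([g, c]))
        d, lam = d_lam[:n], d_lam[n:]

        if np.linalg.norm(d) < tol and np.linalg.norm(c) < tol:
            break

        # Backtracking line search: accept alpha once the merit function drops
        alpha = 1.0
        while alpha > 1e-10 and merit(x + alpha * d) >= merit(x):
            alpha *= 0.5

        x_new = x + alpha * d
        # BFGS update with the Lagrangian gradient difference
        # (skipped when the curvature condition fails, to keep B positive definite)
        s = x_new - x
        y = (grad_f(x_new) + jac_h(x_new).T @ lam) - (g + A.T @ lam)
        if s @ y > 1e-10:
            B = B - np.outer(B @ s, B @ s) / (s @ B @ s) + np.outer(y, y) / (y @ s)
        x = x_new

    return x, lam

# Example: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0
# x_opt, lam = sqp(lambda x: x @ x, lambda x: 2 * x,
#                  lambda x: np.array([x[0] + x[1] - 1.0]),
#                  lambda x: np.array([[1.0, 1.0]]),
#                  np.array([2.0, 0.0]))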

  9. SQP: A few comments
  • State-of-the-art among NLP solvers; requires the fewest function evaluations
  • Does not require feasible points at intermediate iterations
  • Not efficient for problems with a large number of variables (n > 100)
  • Computational time per iteration goes up due to the presence of dense matrices
  • Reduced-space methods (rSQP, MINOS) are large-scale adaptations of SQP

  10. Introduction to rSQP: Consider the general constrained optimization problem again, at SQP iteration i, with its KKT conditions and z = [x^T s^T]^T
  • The second row (the linearized constraints) has n > m: (n - m) free variables and m dependent variables
  • To solve this system of equations we could exploit the properties of the null space of the matrix A. Partition A as A = [N C], where N is m x (n - m) and C is m x m
  • Z is a member of the null space; Z can be written in terms of N and C as given below (check that AZ = 0!)
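The partition and the null-space basis are shown as images on the slide; with the coordinate partition named above, the standard construction is:

    A = [\,N \;\; C\,], \qquad Z = \begin{bmatrix} I \\ -C^{-1} N \end{bmatrix}, \qquad A Z = N\,I + C\,(-C^{-1} N) = 0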

  11. Introduction to rSQP:
  • Now choose Y such that [Y | Z] is a non-singular and well-conditioned matrix -- a co-ordinate basis
  • It remains to find d_Y (m x 1) and d_Z ((n - m) x 1). Substitute d = Y d_Y + Z d_Z into the optimality conditions of the QP
  • The last row can be used to solve for d_Y
  • This value can then be used to solve for d_Z using the second row -- (n - m) equations for the components of d_Z
  • This is fine if there are no bounds on z; if there are bounds too, then minimize the energy:
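Written out for a QP of the form min_d g^T d + (1/2) d^T B d subject to A d + c = 0 (a standard derivation; g, B and c denote the QP gradient, Hessian approximation and constraint residual), the two solves the slide refers to are:

    Y = \begin{bmatrix} 0 \\ I \end{bmatrix}, \qquad d = Y d_Y + Z d_Z

    \text{constraint row:} \quad A (Y d_Y + Z d_Z) + c = 0 \;\Rightarrow\; (A Y)\, d_Y = C\, d_Y = -c

    \text{reduced (null-space) row:} \quad (Z^T B Z)\, d_Z = - Z^T \left( g + B\, Y d_Y \right)

In practice the cross term Z^T B Y d_Y is often dropped or approximated, so that only the reduced Hessian Z^T B Z is needed.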

  12. Basic rSQP algorithm:
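The algorithm chart is an image in the original. A minimal sketch of one reduced-space step, assuming the partition A = [N | C] with C nonsingular and a given reduced-Hessian approximation (names are illustrative, not from the slides):

import numpy as np

def rsqp_step(g, B_red, A, c):
    """One reduced-space (range and null space) step, as a sketch.
    A is m x n (n > m), partitioned A = [N | C] with C (m x m) nonsingular;
    B_red approximates the reduced Hessian Z^T B Z.
    Returns d = Y*dY + Z*dZ."""
    m, n = A.shape
    N, C = A[:, :n - m], A[:, n - m:]
    # Null-space basis (A Z = 0) and coordinate basis (A Y = C)
    Z = np.vstack([np.eye(n - m), -np.linalg.solve(C, N)])
    Y = np.vstack([np.zeros((n - m, m)), np.eye(m)])
    # Range-space move from the constraint row: C dY = -c
    # (in a real rSQP code this is a sparse solve)
    dY = np.linalg.solve(C, -c)
    # Null-space move from the reduced system; the cross term Z^T B Y dY
    # is dropped here, as is common in rSQP implementations
    dZ = np.linalg.solve(B_red, -Z.T @ g)
    return Y @ dY + Z @ dZ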

  13. Case study:
  • At iteration i consider the following QP sub-problem (n = 3, m = 2)
  • Comparing with the standard form, choose:

  14. Case study:
  • Thus Z can be evaluated as: (check that AZ = 0!)
  • Now choose Y as:
  • Rewrite the last row, where:
  • Solve for d_Y:
  • Now we have Y, Z and d_Y; it remains to calculate d_Z

  15. Case study: • Solve: • The components: • Finally:
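The matrices and numbers of the case study appear only as images and are not reproduced here. Purely as an illustration of the same sequence of steps (partition A, build Z and Y, range-space solve for d_Y, reduced solve for d_Z), the following self-contained snippet uses hypothetical n = 3, m = 2 data, not the values from the slides:

import numpy as np

# Hypothetical n = 3, m = 2 QP data (not the values from the slides)
g = np.array([1.0, 2.0, 3.0])                       # QP gradient
B = np.diag([2.0, 4.0, 6.0])                        # Hessian approximation
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])    # constraint Jacobian (m x n)
c = np.array([0.5, -1.0])                           # constraint residual

m, n = A.shape
N, C = A[:, :n - m], A[:, n - m:]                   # A = [N | C], C is m x m
Z = np.vstack([np.eye(n - m), -np.linalg.solve(C, N)])   # null-space basis, A Z = 0
Y = np.vstack([np.zeros((n - m, m)), np.eye(m)])          # coordinate basis, A Y = C

dY = np.linalg.solve(C, -c)                         # range-space move: C dY = -c
rhs = -(Z.T @ g + Z.T @ B @ Y @ dY)                 # rhs including the cross term
dZ = np.linalg.solve(Z.T @ B @ Z, rhs)              # reduced-Hessian solve
d = Y @ dY + Z @ dZ

print("A Z     =", A @ Z)                           # numerically zero
print("d       =", d)
print("A d + c =", A @ d + c)                       # linearized constraint satisfied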

  16. rSQP: A few comments
  • Basically, solve for d_Y (using sparse linear algebra) and d_Z (as the solution of a QP sub-problem) separately, instead of solving directly for d (as the solution of a QP sub-problem)
  • The full Hessian does not need to be evaluated; we deal only with the reduced (projected) Hessian Z^T B Z, of size (n - m) x (n - m)
  • Local convergence properties are similar for SQP and rSQP
  Recap (flow diagram): Newton's method for f(x) = 0 and for the optimality condition f'(x) = 0; the KKT optimality conditions and a quadratic approximation to the Lagrangian take the NLP to the QP sub-problem; the range and null space decomposition takes the QP sub-problem to the rSQP sub-problem.
