CS B553: Algorithms for Optimization and Learning Linear programming, quadratic programming, sequential quadratic programming
Key ideas • Linear programming • Simplex method • Mixed-integer linear programming • Quadratic programming • Applications
Radiosurgery CyberKnife (Accuray)
[Figure: voxel grid showing normal tissue, tumors, and radiologically sensitive tissue]
Optimization Formulation • Dose cells (xi, yj, zk) in a voxel grid • Cell class: normal, tumor, or sensitive • Beam “images”: B1, …, Bn describing the dose absorbed at each cell at maximum power • Optimization variables: beam powers x1, …, xn • Constraints: • Normal cells: Dijk ≤ Dnormal • Sensitive cells: Dijk ≤ Dsensitive • Tumor cells: Dmin ≤ Dijk ≤ Dmax • 0 ≤ xb ≤ 1 • Dose calculation: Dijk = Σb xb Bb(i, j, k) • Objective: minimize total dose
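The formulation above is linear in the beam powers, so it can be handed to an off-the-shelf LP solver. A minimal sketch with made-up beam images and dose limits (all numbers hypothetical), using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical beam "images": rows = cells, columns = beams.
# B[c, b] = dose absorbed at cell c when beam b runs at full power.
B = np.array([[1.0, 0.2],   # cell 0: normal tissue
              [0.8, 0.9],   # cell 1: tumor
              [0.3, 1.0]])  # cell 2: tumor
D_normal, D_min, D_max = 0.5, 0.8, 1.2  # assumed dose limits

# Objective: total dose = sum over cells of (B @ x) = (1^T B) x
c = B.sum(axis=0)

# Upper-bound rows: normal dose <= D_normal, tumor dose <= D_max,
# and tumor dose >= D_min written as -B x <= -D_min.
A_ub = np.vstack([B[0:1], B[1:], -B[1:]])
b_ub = np.concatenate([[D_normal], [D_max, D_max], [-D_min, -D_min]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 2)
print(res.x)  # optimal beam powers in [0, 1]
```

The same pattern scales to a real voxel grid: one row of B per cell, one column per beam.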
Linear Program • General form: min fᵀx + g s.t. Ax ≤ b, Cx = d [Figure: a convex polytope; a slice through the polytope]
Three cases • Infeasible • Feasible, bounded: optimum x* attained • Feasible, unbounded [Figure: objective direction f shown for each case]
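The three cases show up directly in a solver's return status. A small sketch using scipy.optimize.linprog, whose status codes are 0 = optimal, 2 = infeasible, 3 = unbounded:

```python
from scipy.optimize import linprog

# Feasible, bounded: min x subject to 0 <= x <= 1  ->  optimum x* = 0
bounded = linprog(c=[1.0], bounds=[(0, 1)])

# Feasible, unbounded: min -x subject to x >= 0  ->  objective decreases forever
unbounded = linprog(c=[-1.0], bounds=[(0, None)])

# Infeasible: x <= -1 contradicts x >= 0
infeasible = linprog(c=[1.0], A_ub=[[1.0]], b_ub=[-1.0], bounds=[(0, None)])

print(bounded.status, unbounded.status, infeasible.status)  # 0 3 2
```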
Simplex Algorithm (Dantzig) • Start from a vertex of the feasible polytope • “Walk” along polytope edges, decreasing the objective at each step • Stop when the edge is unbounded or no improvement can be made • Implementation details: • How to pick an edge (exiting and entering variables) • Solving for vertices in large systems • Degeneracy: no progress made because the objective vector is perpendicular to edges
Computational Complexity • Worst case: exponential • Average case: polynomial (smoothed analysis) • In practice, usually tractable • Commercial software (e.g., CPLEX) can handle millions of variables/constraints!
Soft Constraints [Figure: penalty as a function of dose for normal, sensitive, and tumor cells]
Soft Constraints • Auxiliary variable zijk: penalty at each cell • zijk ≥ c(Dijk − Dnormal) • zijk ≥ 0 • Introduce a term in the objective to minimize Σ zijk [Figure: penalty zijk as a piecewise-linear function of dose Dijk]
Minimizing an Absolute Value • Objective: minx |x1| s.t. Ax ≤ b, Cx = d • Reformulated with an auxiliary variable v: minv,x v s.t. Ax ≤ b, Cx = d, x1 ≤ v, −x1 ≤ v
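The reformulation can be checked numerically. A sketch over the stacked variables (x, v), where the single constraint x ≥ 0.7 is a made-up stand-in for Ax ≤ b, using scipy.optimize.linprog:

```python
from scipy.optimize import linprog

# min |x| subject to x >= 0.7, rewritten as:
# min v  subject to  x - v <= 0,  -x - v <= 0,  -x <= -0.7
c = [0.0, 1.0]                    # objective: minimize v
A_ub = [[1.0, -1.0],              # x <= v
        [-1.0, -1.0],             # -x <= v
        [-1.0, 0.0]]              # x >= 0.7
b_ub = [0.0, 0.0, -0.7]

# Note: linprog defaults to x >= 0, so free variables need explicit bounds.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
print(res.x)  # x = v = 0.7 at the optimum
```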
Minimizing an L1 or L∞ norm • L1 norm: minx ‖Fx − g‖1 s.t. Ax ≤ b, Cx = d • L∞ norm: minx ‖Fx − g‖∞ s.t. Ax ≤ b, Cx = d [Figure: feasible polytope projected through F, with optimum Fx* nearest g]
Minimizing an L1 or L∞ norm • L1 norm: minx ‖Fx − g‖1 s.t. Ax ≤ b, Cx = d • Reformulated with an auxiliary vector e: mine,x 1ᵀe s.t. Fx + Ie ≥ g, Fx − Ie ≤ g, Ax ≤ b, Cx = d [Figure: feasible polytope projected through F]
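As a concrete instance of this reformulation: minimizing ‖Fx − g‖1 with F a column of ones recovers the median of g. A sketch with scipy.optimize.linprog, variables stacked as (x, e):

```python
import numpy as np
from scipy.optimize import linprog

F = np.ones((3, 1))
g = np.array([1.0, 2.0, 10.0])   # L1 minimizer of |x-1|+|x-2|+|x-10| is the median, 2
m, n = F.shape

# Variables: [x (n), e (m)]; objective: minimize 1^T e
c = np.concatenate([np.zeros(n), np.ones(m)])
# Fx - Ie <= g  and  -Fx - Ie <= -g  (the latter is Fx + Ie >= g)
A_ub = np.block([[F, -np.eye(m)], [-F, -np.eye(m)]])
b_ub = np.concatenate([g, -g])

bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[0], res.fun)  # x = 2, objective = |1-2| + |2-2| + |10-2| = 9
```

The L∞ version is the same pattern with a single auxiliary scalar replacing the vector e.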
Minimizing an L2 norm • L2 norm: minx ‖Fx − g‖2 s.t. Ax ≤ b, Cx = d • Not a linear program! [Figure: feasible polytope projected through F]
Quadratic Programming • General form: min ½xᵀHx + gᵀx + h s.t. Ax ≤ b, Cx = d • Objective: quadratic form • Constraints: linear
Quadratic programs • H positive definite: unconstrained minimum at −H⁻¹g • Optimum can lie off of a vertex! [Figure: elliptical level sets over the feasible polytope]
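This is easy to see numerically: with H positive definite, the constrained optimum can sit in the interior of a facet rather than at a vertex. A sketch with made-up problem data, using scipy.optimize.minimize (SLSQP):

```python
import numpy as np
from scipy.optimize import minimize

H = 2.0 * np.eye(2)
g = np.array([-2.0, -2.0])       # unconstrained minimum at -H^-1 g = (1, 1)

def objective(x):
    return 0.5 * x @ H @ x + g @ x

# One half-space constraint x0 + x1 <= 1 (SLSQP expects fun(x) >= 0)
cons = [{"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}]
res = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=cons)
print(res.x)  # (0.5, 0.5): the midpoint of the facet, not a vertex
```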
Quadratic programs • H negative definite [Figure: concave level sets over the feasible polytope]
Quadratic programs • H positive semidefinite [Figure: degenerate (flat) level sets over the feasible polytope]
Simplex Algorithm for QPs • Start from a vertex of the feasible polytope • “Walk” along polytope facets, decreasing the objective at each step • Stop when the facet is unbounded or no improvement can be made • Facet: defined by m ≤ n active constraints • m = n: vertex • m = n − 1: line • m = 1: hyperplane • m = 0: entire space
Active Set Method • Active inequalities S = (i1, …, im) • Constraints ai1ᵀx = bi1, …, aimᵀx = bim, written as ASx − bS = 0 • Objective ½xᵀHx + gᵀx + f • Lagrange multipliers λ = (λ1, …, λm) • Optimality conditions: Hx + g + ASᵀλ = 0 and ASx − bS = 0 • Solve the linear system [H ASᵀ; AS 0][x; λ] = [−g; bS] • If x violates a constraint not in S, add it • If λk < 0, drop ik from S
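For a fixed active set S, the optimality conditions above are just a linear system in (x, λ). A sketch with hypothetical data, solving one such system with numpy:

```python
import numpy as np

# min 1/2 x^T H x + g^T x  with the active constraint x0 + x1 = 1
H = 2.0 * np.eye(2)
g = np.array([-2.0, -4.0])
A_S = np.array([[1.0, 1.0]])
b_S = np.array([1.0])

# KKT system:  [H    A_S^T] [x  ]   [-g ]
#              [A_S    0  ] [lam] = [b_S]
m = A_S.shape[0]
K = np.block([[H, A_S.T], [A_S, np.zeros((m, m))]])
rhs = np.concatenate([-g, b_S])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:2], sol[2:]
print(x, lam)  # x = (0, 1), lam = (2,); lam >= 0, so this constraint stays active
```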
Properties of active set methods for QPs • Inherits properties of the simplex algorithm • Worst case: exponential number of facets • Positive definite H: polynomial time in typical cases • Indefinite or negative definite H: can take exponential time! (nonconvex QP is NP-hard)
Applying QPs to Nonlinear Programs • Recall: we could convert an equality-constrained optimization to an unconstrained one and use Newton’s method • Each Newton step: • Fits a quadratic form to the objective • Fits hyperplanes to each equality constraint • Solves for a search direction (Δx, Δλ) using the linear equality-constrained optimization • How about inequalities?
Sequential Quadratic Programming • Idea: fit half-space constraints to each inequality • g(x) ≤ 0 becomes g(xt) + ∇g(xt)ᵀ(x − xt) ≤ 0 [Figure: the feasible region g(x) ≤ 0 and its linearization at xt]
Sequential Quadratic Programming • Given a nonlinear minimization minx f(x) s.t. gi(x) ≤ 0 for i = 1, …, m, hj(x) = 0 for j = 1, …, p • At each step xt, solve the QP minΔx ½Δxᵀ∇x²L(xt, λt, μt)Δx + ∇xL(xt, λt, μt)ᵀΔx s.t. gi(xt) + ∇gi(xt)ᵀΔx ≤ 0 for i = 1, …, m, hj(xt) + ∇hj(xt)ᵀΔx = 0 for j = 1, …, p • This yields the search direction Δx • Directions Δλ and Δμ are taken from the QP multipliers
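scipy's SLSQP method is a sequential quadratic programming solver of this kind. A sketch on a made-up problem, projecting the point (2, 1) onto the unit disk:

```python
import numpy as np
from scipy.optimize import minimize

# min (x0 - 2)^2 + (x1 - 1)^2  subject to  g(x) = x0^2 + x1^2 - 1 <= 0
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

# SLSQP expects inequality constraints in the form fun(x) >= 0;
# it relinearizes this constraint at every iterate, as sketched above.
cons = [{"type": "ineq", "fun": lambda x: 1.0 - x[0] ** 2 - x[1] ** 2}]
res = minimize(f, x0=np.array([1.0, 0.0]), method="SLSQP", constraints=cons)
print(res.x)  # approx (2, 1)/sqrt(5): the boundary point nearest (2, 1)
```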
Illustration [Figure: successive iterates xt, xt+1, xt+2, with the feasible region g(x) ≤ 0 and the linearized constraint g(xt) + ∇g(xt)ᵀ(x − xt) ≤ 0 recomputed at each iterate]
SQP Properties • Equivalent to Newton’s method when there are no constraints • Equivalent to Lagrange root-finding when there are only equality constraints • Subtle implementation details: • Does the endpoint need to be strictly feasible, or just feasible up to a tolerance? • How to perform a line search in the presence of inequalities? • Implementations available in Matlab; FORTRAN packages too =(