Optimality Conditions for Unconstrained Optimization • One-dimensional optimization • Necessary and sufficient conditions • Multidimensional optimization • Classification of stationary points • Necessary and sufficient conditions for local optima • Convexity and global optimality
One-dimensional optimization • We are accustomed to thinking that if f(x) has a minimum then f’(x)=0, but the converse fails: a zero derivative does not guarantee a minimum, and a minimum may also occur at a boundary or at a point where f is not differentiable.
1D optimization jargon • A point with zero derivative is a stationary point. • A stationary point (such as x=5 in the figure) can be a minimum, a maximum, or an inflection point.
Optimality criteria for smooth functions • Conditions at a candidate point x* • f’(x*)=0 is the condition for stationarity and a necessary condition for a minimum • f”(x*)>0 is sufficient for a minimum (given stationarity) • f”(x*)<0 is sufficient for a maximum • When f”(x*)=0, information from higher derivatives is needed • Example: f(x)=x⁴ has f”(0)=0, yet x=0 is a minimum; see the sketch below
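A minimal sketch (Python with SymPy, an assumed dependency) of checking these conditions symbolically, including the higher-derivative case f(x)=x⁴:

```python
import sympy as sp

x = sp.symbols('x')
f = x**4                  # f''(0) = 0: the second-order test is inconclusive
x_star = 0

print(sp.diff(f, x).subs(x, x_star))     # 0 -> stationary (necessary condition)
print(sp.diff(f, x, 2).subs(x, x_star))  # 0 -> inconclusive
print(sp.diff(f, x, 4).subs(x, x_star))  # 24: first nonzero derivative is
                                         # positive and of even order -> minimum
```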
Taylor series expansion • Expanding about a candidate minimum x*: f(x) = f(x*) + ∇f(x*)ᵀ(x−x*) + ½(x−x*)ᵀ∇²f(x*)(x−x*) + … • Requiring the first-order term to vanish, ∇f(x*)=0, is the condition for stationarity
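A small numeric check of the quadratic Taylor model near a stationary point; the function f(x,y)=x²+3y² is an assumed example:

```python
import numpy as np

def f(v):
    x, y = v
    return x**2 + 3 * y**2             # stationary point at (0, 0)

x_star = np.array([0.0, 0.0])
H = np.array([[2.0, 0.0],
              [0.0, 6.0]])             # Hessian of f (constant for a quadratic)

dx = np.array([1e-3, -2e-3])
model = f(x_star) + 0.5 * dx @ H @ dx  # gradient term vanishes at x*
print(f(x_star + dx), model)           # the two values agree
```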
Conditions for minimum • Sufficient condition for a minimum at a stationary point: the matrix of second derivatives (the Hessian) is positive definite • Simplest way to check positive definiteness is via eigenvalues: all eigenvalues need to be positive • Necessary condition: the Hessian is positive semi-definite, i.e., all eigenvalues are non-negative
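A minimal eigenvalue test for positive definiteness (NumPy assumed; the Hessian below is an illustrative example):

```python
import numpy as np

H = np.array([[4.0, 1.0],
              [1.0, 3.0]])             # illustrative symmetric Hessian

eigvals = np.linalg.eigvalsh(H)        # eigvalsh is meant for symmetric matrices
print(eigvals)                         # [2.38..., 4.61...]: all positive
print(bool(np.all(eigvals > 0)))       # True -> positive definite -> minimum
```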
Types of stationary points • Positive definite: minimum • Positive semi-definite: possibly a minimum (higher-order terms decide) • Indefinite: saddle point • Negative semi-definite: possibly a maximum • Negative definite: maximum • Example of an indefinite stationary point: the origin of f(x,y)=x²−y², classified in the sketch below
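An illustrative classifier based on Hessian eigenvalues, applied to the saddle of f(x,y)=x²−y²:

```python
import numpy as np

def classify(H, tol=1e-10):
    ev = np.linalg.eigvalsh(H)
    if np.all(ev > tol):
        return "minimum"
    if np.all(ev < -tol):
        return "maximum"
    if np.any(ev > tol) and np.any(ev < -tol):
        return "saddle point"
    return "semi-definite: higher-order terms decide"

H_saddle = np.array([[2.0, 0.0],
                     [0.0, -2.0]])     # Hessian of f(x, y) = x**2 - y**2
print(classify(H_saddle))              # saddle point
```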
Global optimization • The function f(x)=x+sin(2x) has infinitely many local minima, each satisfying the local optimality conditions, yet no global minimum (f is unbounded below); local conditions alone cannot certify a global optimum
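A sketch (SciPy assumed) showing that multi-start local minimization of x+sin(2x) finds a different local minimum from each start, none of them global:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] + np.sin(2 * x[0])

for x0 in (-4.0, 0.0, 4.0):
    res = minimize(f, [x0])
    print(f"start {x0:+.1f} -> local minimum at x = {res.x[0]:+.3f}")
# Every result satisfies f'(x) = 0 and f''(x) > 0, yet none is global:
# f decreases without bound as x -> -infinity.
```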
Convex function • A straight line (chord) connecting two points on the graph never dips below the graph: f(λx+(1−λ)y) ≤ λf(x)+(1−λ)f(y) for 0≤λ≤1 • Sufficient condition: positive semi-definite Hessian everywhere • Geometrically, the function curves upward (or is flat) in every direction, so it lies above all of its tangent planes • For a convex function, any local minimum is also a global minimum
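A numeric sanity check of the chord definition on an assumed convex example f(x,y)=x⁴+y²:

```python
import numpy as np

def f(v):
    return v[0]**4 + v[1]**2           # Hessian is PSD everywhere -> convex

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    a, b = rng.normal(size=2), rng.normal(size=2)
    lam = rng.uniform()
    ok &= f(lam * a + (1 - lam) * b) <= lam * f(a) + (1 - lam) * f(b) + 1e-12
print(ok)                              # True: no chord dips below the graph
```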
Reciprocal approximation • The reciprocal approximation (linear in the reciprocals 1/xi of the variables) is desirable in many cases because it captures decreasing-returns behavior • Linear approximation: fL(x) = f(x0) + Σi (∂f/∂xi)(x0)(xi − xi0) • Reciprocal approximation: fR(x) = f(x0) + Σi (∂f/∂xi)(x0)(xi0/xi)(xi − xi0)
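A sketch comparing the two approximations for f(x)=1/x (e.g., stress versus cross-sectional area); for such a function the reciprocal approximation is exact:

```python
import numpy as np

f  = lambda x: 1.0 / x                 # e.g., stress = P / A with P = 1
df = lambda x: -1.0 / x**2

x0 = 2.0
lin = lambda x: f(x0) + df(x0) * (x - x0)             # linear in x
rec = lambda x: f(x0) + df(x0) * (x0 / x) * (x - x0)  # linear in 1/x

for x in (1.0, 2.0, 4.0):
    print(f"x={x}: f={f(x):.3f}  linear={lin(x):.3f}  reciprocal={rec(x):.3f}")
# The reciprocal approximation reproduces f exactly; the linear one drifts.
```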
Conservative-convex approximation • At times we benefit from conservative approximations • Keep the linear term for variables with a positive partial derivative and the reciprocal term for those with a negative one • All second derivatives of gC are then non-negative, so the approximation is convex • Called convex linearization (CONLIN), due to Claude Fleury; see the sketch below
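A minimal CONLIN sketch under the sign rule above; the example function and helper names are assumptions for illustration:

```python
import numpy as np

def conlin(f, grad, x0):
    """CONLIN approximation of f around x0 (all variables assumed positive)."""
    f0, g0 = f(x0), grad(x0)
    def g_C(x):
        dx_lin = x - x0                 # used where the derivative is positive
        dx_rec = (x0 / x) * (x - x0)    # used where it is negative
        return f0 + np.sum(np.where(g0 > 0, g0 * dx_lin, g0 * dx_rec))
    return g_C

# Assumed example: f(x1, x2) = x1 + 1/x2, approximated around (1, 1)
f    = lambda x: x[0] + 1.0 / x[1]
grad = lambda x: np.array([1.0, -1.0 / x[1]**2])

g_C = conlin(f, grad, np.array([1.0, 1.0]))
x = np.array([2.0, 0.5])
print(f(x), g_C(x))                     # 4.0 4.0: exact for this mixed f
```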