Stochastic Trust Region Gradient-Free Method (STRONG): A Response-Surface-Based Algorithm in Stochastic Optimization via Simulation
Kuo-Hao Chang
Advisor: Hong Wan
School of Industrial Engineering, Purdue University
Acknowledgement: This project was partially supported by a grant from the Naval Postgraduate School.
Outline
• Background
• Problem Statement
• Literature Review
• STRONG
• Preliminary Numerical Evaluations
• Future Research
Background
• Stochastic optimization: the minimization (or maximization) of a function in the presence of randomness.
• Optimization via simulation: there is no explicit form of the objective function (only observations from simulation); function evaluations are stochastic and usually computationally expensive.
• Applications: investment portfolio optimization, production planning, traffic control, etc.
Problem Statement (I)
• Consider the unconstrained continuous minimization problem
  $\min_{x \in \mathbb{R}^p} f(x)$
• The response can only be observed via simulation:
  $G(x, \omega) = f(x) + \varepsilon(x, \omega)$
  where $\omega$ is the randomness defined in the probability space $(\Omega, \mathcal{F}, P)$ and $\varepsilon(x, \omega)$ is the noise term, which may depend on $x$.
Problem Statement (II)
• Given: a simulation oracle capable of generating observations $G(x, \omega)$ such that the Strong Law of Large Numbers holds for every $x$.
• Find: a local minimizer $x^*$, i.e., find $x^*$ having a neighborhood $\mathcal{N}(x^*)$ such that every $x \in \mathcal{N}(x^*)$ satisfies $f(x) \ge f(x^*)$.
Problem Assumptions
• For the noise $\varepsilon(x, \omega)$:
  1. $E[\varepsilon(x, \omega)] = 0$ for every $x$
  2. $\operatorname{Var}[\varepsilon(x, \omega)] < \infty$ for every $x$ (variances may differ across $x$)
• For the underlying function $f$:
  1. $f$ is bounded below and twice differentiable for every $x$
  2. $\nabla^2 f$ is uniformly bounded
Proposed Work
• An RSM-based method with a convergence property (combining the trust-region method for deterministic optimization with RSM)
• Does not require human involvement
• Appropriate DOE to handle high-dimensional problems (ongoing work)
Response Surface Methodology
• Stage I
  • Employ a proper experimental design
  • Fit a first-order model
  • Perform a line search
  • Move to a better solution
• Stage II (when close to the optimal solution)
  • Employ a proper experimental design
  • Fit a second-order model
  • Find the optimal solution
Deterministic Trust-Region Framework (Conn et al. 2000)
Suppose we want to minimize a deterministic objective function $f(x)$.
• Step 0: Given an initial point $x_0$, an initial trust-region radius $\Delta_0$, and constants satisfying $0 < \eta_1 \le \eta_2 < 1$ and $0 < \gamma_1 \le \gamma_2 < 1$, set $k = 0$.
• Step 1: Compute a step $s_k$ within the trust region that "sufficiently reduces" the local model $m_k$ constructed by Taylor expansion (to second order) around $x_k$.
• Step 2: Compute the ratio $\rho_k = \dfrac{f(x_k) - f(x_k + s_k)}{m_k(x_k) - m_k(x_k + s_k)}$; if $\rho_k \ge \eta_1$ then define $x_{k+1} = x_k + s_k$; otherwise define $x_{k+1} = x_k$. Update the trust-region radius accordingly.
• Step 3: Increment $k$ by 1 and go to Step 1.
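As a reference point, here is a minimal Python sketch of this deterministic loop, using a simple Cauchy-point step and exact derivatives; the function names and shrink/expand constants are illustrative, not taken from the talk:

```python
import numpy as np

def cauchy_step(g, H, delta):
    """Cauchy point: minimize the quadratic model along -g within radius delta."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0:
        return np.zeros_like(g)
    gHg = g @ H @ g
    # Unconstrained minimizer along -g, clipped to the trust-region boundary.
    t = gnorm ** 2 / gHg if gHg > 0 else np.inf
    t = min(t, delta / gnorm)
    return -t * g

def trust_region_minimize(f, grad, hess, x0, delta0=1.0,
                          eta=0.1, shrink=0.5, expand=2.0,
                          max_iter=200, tol=1e-8):
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        s = cauchy_step(g, H, delta)
        pred = -(g @ s + 0.5 * s @ H @ s)   # reduction predicted by the model
        ared = f(x) - f(x + s)              # actual reduction
        rho = ared / pred if pred > 0 else -np.inf
        if rho >= eta:                      # accept the step and expand
            x = x + s
            delta *= expand
        else:                               # reject the step and shrink
            delta *= shrink
    return x
```

With exact derivatives this loop converges on smooth test functions; STRONG replaces `grad` and `hess` with regression estimates fitted to simulation output, and replaces `f` in the ratio test with estimated responses.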
Comparison between RSM and TR
• Similarity: both build a local model to approximate the response function and use it to generate the search direction.
• Differences:
  • TR is developed for deterministic optimization and has a nice convergence property, but it cannot handle the stochastic case and requires an explicit objective function, gradient, and Hessian matrix.
  • RSM can handle the stochastic case and has well-studied DOE techniques, but human involvement is required and it has no convergence property.
• STRONG combines these two methods.
STRONG: Stochastic TRust RegiON Gradient-free Method
• "Gradient-free": no direct gradient measurement; rather, the algorithm is based on an approximation to the gradient (Spall, 2003; Fu, 2005).
• Combines RSM and TR.
• Consists of two algorithms:
  • Main algorithm: approaches the optimal solution (major framework)
  • Sub_algorithm: obtains a satisfactory solution within the trust region
Stochastic Trust Region
• Use a "response surface" model to replace the Taylor expansion ($k$: iteration counter):
  deterministic model: $m_k(x) = f(x_k) + \nabla f(x_k)^T (x - x_k) + \tfrac{1}{2}(x - x_k)^T \nabla^2 f(x_k)(x - x_k)$
  stochastic model: $r_k(x) = \hat{\beta}_0 + \hat{\beta}^T (x - x_k) + \tfrac{1}{2}(x - x_k)^T \hat{B}\,(x - x_k)$
• Use the estimated response $\bar{G}(x)$ to replace $f(x)$.
Trust Region and Sampling Region
• Trust region: $B(x_k; \Delta_k)$, where $\Delta_k$ is the trust-region radius at iteration $k$.
• Sampling region: $B(x_k; \tilde{\Delta}_k)$, where $\tilde{\Delta}_k$ is the sampling-region radius at iteration $k$.
• The initial radii $\Delta_0$ and $\tilde{\Delta}_0$ are determined by the user in the initialization stage ($\tilde{\Delta}_0 \le \Delta_0$); later they shrink/expand by the same ratio automatically.
Select Appropriate DOE
• Used for constructing the first-order and second-order models in Stage I and Stage II.
• Currently requires orthogonality of the second-order design to guarantee consistency of the gradient estimation.
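The talk does not show the designs themselves; as a generic illustration, this sketch builds a $2^k$ full factorial design and a rotatable central composite design in coded units, which would then be scaled by the sampling-region radius and centered at the current iterate. Function names are hypothetical:

```python
import itertools
import numpy as np

def full_factorial_2k(k):
    """2^k full factorial design in coded units (each factor at -1/+1)."""
    return np.array(list(itertools.product([-1.0, 1.0], repeat=k)))

def central_composite(k, alpha=None):
    """Central composite design: factorial cube + axial (star) points + center."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # standard choice for a rotatable CCD
    cube = full_factorial_2k(k)
    axial = np.vstack([a * np.eye(k) for a in (alpha, -alpha)])
    center = np.zeros((1, k))
    return np.vstack([cube, axial, center])
```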
Estimation Method in STRONG
Given an appropriate design strategy and an initial sample size for the center point:
• Intercept estimator: $\bar{G}(x_k) = \frac{1}{n_k} \sum_{j=1}^{n_k} G_j(x_k)$, where $G_j(x_k)$ represents the $j$-th observation at the point $x_k$ and $n_k$ is determined by the algorithm.
• Gradient and Hessian estimator: suppose we have $n$ design points with response values $y_1, \ldots, y_n$, respectively. Let $X$ be the design matrix; then the least-squares estimator is $\hat{\beta} = (X^T X)^{-1} X^T y$.
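A least-squares sketch of these estimators for the first-order model, assuming the design points are already centered at $x_k$ (function names are illustrative):

```python
import numpy as np

def intercept_estimate(center_obs):
    """Intercept estimator: average of the n_k replications at the center x_k."""
    return float(np.mean(center_obs))

def fit_first_order(X_centered, y):
    """Least-squares fit of y ~ beta0 + beta' (x - x_k).

    X_centered: n-by-p array of design points, centered at x_k.
    Returns (beta0_hat, beta_hat): the intercept and gradient estimates,
    i.e., (X'X)^{-1} X'y with an added intercept column.
    """
    D = np.hstack([np.ones((X_centered.shape[0], 1)), X_centered])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return coef[0], coef[1:]
```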
Decide the Moving Direction and Step Size
• Definition (subproblem): minimize the local model over the trust region, $\min_x \{\, r_k(x) : \|x - x_k\| \le \Delta_k \,\}$.
• Determine whether the new iterate solution is accepted: compute the ratio of the estimated actual reduction to the reduction predicted by the model; if the ratio falls below the acceptance threshold, the solution is rejected; otherwise it is accepted.
• Definition (reduction test): the acceptance test used in Stage I.
• Definition (sufficient reduction test): the stricter acceptance test used in Stage II.
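A sketch of the ratio-based acceptance test, using estimated responses in place of true function values; the threshold name `eta0` and the exact form are assumptions, not the talk's notation:

```python
def reduction_test(gbar_center, gbar_candidate, model_center, model_candidate,
                   eta0=0.1):
    """Ratio test on estimated responses: accept the candidate only if the
    observed reduction is at least an eta0 fraction of the reduction the
    local model predicts."""
    predicted = model_center - model_candidate
    if predicted <= 0:
        return False          # model predicts no improvement; reject
    observed = gbar_center - gbar_candidate
    return observed / predicted >= eta0
```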
Three Situations in Which We Cannot Find a Satisfactory Solution
• The local approximation model is poor.
• The step size is too large.
• Sampling error in the observed responses at the center point $x_k$ and the candidate point $x_k + s_k$.
Solutions
• Shrink the trust region and sampling region.
• Increase the replications of the center point.
• Add more design points.
• Collect all the visited solutions within the trust region and increase the replications for each of them.
Implementation Issues
• Initial solution
• Scaling problems
• Experimental designs
• Variance reduction techniques
• Timing to employ the "sufficient reduction" test
• Stopping rules
Advantages of STRONG
• Allows unequal variances.
• Has the potential to solve high-dimensional problems with efficient DOE.
• It is automated.
• Local convergence property.
Limitations of STRONG
• Computationally intensive if the problem is large-scale.
• Slow convergence if the variables are ill-scaled.
Preliminary Numerical Evaluation (I)
• Rosenbrock test function: $f(x_1, x_2) = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2$
• The minimal solution is located at $(1, 1)$, where the objective value is 0.
• Full factorial design for Stage I and central composite design for Stage II.
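A minimal sketch of the simulation oracle for this experiment: the deterministic Rosenbrock function plus additive Gaussian noise with the variance quoted on the following slides (the additive-noise form is an assumption):

```python
import numpy as np

def rosenbrock(x):
    """Deterministic Rosenbrock function; its minimum value 0 is at (1, 1)."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def noisy_rosenbrock(x, sigma2=10.0, rng=None):
    """Simulation oracle: the true response plus additive N(0, sigma2) noise."""
    if rng is None:
        rng = np.random.default_rng()
    return rosenbrock(x) + rng.normal(0.0, np.sqrt(sigma2))
```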
The Performance of STRONG
• Case 1
  • Initial solution: (30, -30)
  • Variance of noise: 10
  • Sample size for each design point: 2
The Performance of FDSA
• Case 2
  • Initial solution: (30, -30)
  • Variance of noise: 10
  • Parameter bounds: (-100, 100)
The Performance of FDSA with a Good Starting Solution
• Case 3
  • Initial solution: (3, 3)
  • Variance of noise: 10
  • Parameter bounds: (0, 5)
Future Research
• Large-scale problems
  • Design of experiments
  • Variance reduction techniques
• Test practical problems
• Ill-scaled problems
  • Iteratively varying the shape of the trust region
Questions? Thanks!
Hypothesis Testing Scheme
• $H_0$: the candidate solution cannot yield sufficient reduction.
• $H_1$: the candidate solution can yield sufficient reduction.
• The Type I error of the test is required to satisfy a summability condition across iterations, so that incorrect acceptances occur only finitely often.
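One plausible instantiation of this scheme, sketched as a one-sided t-test on the observed reductions; the statistic and the `alpha` level are illustrative, not the exact test used in STRONG:

```python
import numpy as np
from scipy import stats

def sufficient_reduction_test(reductions, required, alpha=0.05):
    """One-sided t-test: H0 says the candidate does NOT achieve the required
    reduction; reject H0 (i.e., accept the candidate) only when the sample
    mean reduction is significantly above the required amount."""
    r = np.asarray(reductions, dtype=float)
    se = r.std(ddof=1) / np.sqrt(len(r))
    t_stat = (r.mean() - required) / se
    p_value = stats.t.sf(t_stat, df=len(r) - 1)
    return p_value < alpha
```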
Relevant Definitions in the Sub_algorithm
• Reject-solution set: denotes the set that collects all the solutions visited up to the current sub-iteration of the sub_algorithm.
• Simulation Allocation Rule (SAR) (Hong and Nelson, 2006): SAR guarantees that at least one additional observation is allocated to $x$ at iteration $k$ if $x$ is a newly visited solution at iteration $k$, and that the cumulative sample size tends to infinity for all visited solutions.
Features of the Sub_algorithm
• The trust region and sampling region keep shrinking.
• The sample size for the center point is increasing.
• Design points are accumulated.
Intuitive explanation:
• The local model quality keeps improving.
• The algorithm becomes more conservative in the optimization step size.
• The sampling error for each visited point in the reject-solution set is reduced.
Significant Theorems in STRONG
• (Theorem 3.2.3, Corollary 3) In the sub-algorithm, if the current iterate is not stationary, a satisfactory solution is eventually found, so the sub_algorithm terminates with probability one.
• (Theorem 3.2.4) For any initial point $x_0$, the algorithm generates a sequence of iterates $\{x_k\}$ with $\nabla f(x_k) \to 0$ a.s.
Some Problems with TR when Applied in the Stochastic Case
• TR is developed for deterministic optimization, where $f$, $\nabla f$, and $\nabla^2 f$ are available.
• Bias in the intercept and gradient estimation.
• The reduction ratio $\rho_k$ must be computed from noisy estimates.
• Inconsistent comparison basis. Notice: in general, the fitted model value at the center differs from the sample mean observed there.
General Properties of the Algorithm
1. The response estimates at visited points are strongly consistent, a.s.
2. The gradient estimates are strongly consistent, a.s.
3. If $\nabla f(x_k) \ne 0$, then a successful iterate is eventually found; therefore the algorithm won't get stuck at a nonstationary point.
Algorithm Assumptions
• Assumptions on the noise
• Assumptions on the local approximation model
Literature Review (I)
• Stochastic approximation
  • Robbins-Monro (1951) algorithm: gradient-based.
  • Kiefer-Wolfowitz (1952) algorithm: uses the finite-difference method as the gradient estimate. The basic form is the central finite-difference gradient estimate
    $\hat{g}_k(x_k)_i = \dfrac{G(x_k + c_k e_i) - G(x_k - c_k e_i)}{2 c_k}, \quad i = 1, \ldots, p$
• Strength: converges a.s. to a local optimum under proper conditions.
• Weaknesses:
  • The gain sequence needs to be tuned manually.
  • Suffers from slow convergence in some problems (Andradottir, 1998).
  • When the objective function grows faster than quadratically, it will fail to converge (Andradottir, 1998).
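A sketch of the Kiefer-Wolfowitz iteration built on that finite-difference estimate; the gain-sequence choices below are classical illustrative ones, not prescriptions from the talk:

```python
import numpy as np

def fd_gradient(oracle, x, c):
    """Central finite-difference gradient estimate from noisy evaluations."""
    p = len(x)
    g = np.zeros(p)
    for i in range(p):
        e = np.zeros(p)
        e[i] = c
        g[i] = (oracle(x + e) - oracle(x - e)) / (2.0 * c)
    return g

def kiefer_wolfowitz(oracle, x0, a=0.1, c=0.5, n_iter=1000):
    """KW iteration x_{k+1} = x_k - a_k * g_hat(x_k) with the classical
    gain sequences a_k = a/k and c_k = c/k**(1/6) (illustrative choices)."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        g_hat = fd_gradient(oracle, x, c / k ** (1.0 / 6.0))
        x = x - (a / k) * g_hat
    return x
```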
Literature Review (II)
• Response Surface Methodology
  • Proposed by Box and Wilson (1951).
  • A sequential experimental procedure to determine the best input combination so as to maximize the output or yield rate.
• Strengths:
  • A general procedure.
  • Powerful statistical tools such as design of experiments, regression analysis, and ANOVA are all at its disposal (Fu, 1994).
• Weaknesses:
  • No convergence guarantee.
  • Human involvement is needed.
Literature Review (III)
• Other heuristic methods
  • Genetic algorithms
  • Evolutionary strategies
  • Simulated annealing
  • Tabu search
  • Nelder and Mead's simplex search
• Strength: can "usually" obtain a satisfactory solution.
• Weakness: no general convergence theory.