Stochastic Linear Programming by Series of Monte-Carlo Estimators Leonidas SAKALAUSKAS Institute of Mathematics & Informatics, Vilnius, Lithuania E-mail: sakal@ktl.mii.lt
CONTENT • Introduction • Monte-Carlo estimators • Stochastic differentiation • Dual solution approach (DS) • Finite difference approach (FD) • Simultaneous perturbation stochastic approximation (SPSA) • Likelihood ratio approach (LR) • Numerical study of stochastic gradient estimators • Stochastic optimization by series of Monte-Carlo estimators • Numerical study of the stochastic optimization algorithm • Conclusions
Introduction We consider a stochastic approach to stochastic linear problems that is distinguished by • adaptive regulation of the Monte-Carlo estimators • a statistical termination procedure • a stochastic ε-feasible direction approach to avoid "jamming" or "zigzagging" when solving a constrained problem
Two-stage stochastic programming problem with recourse: F(x) = c^T x + E Q(x) → min, subject to the feasible set D = {x : A x = b, x ≥ 0}, where Q(x) = min_y {q^T y : W y = h − T x, y ≥ 0}; W, T, h are random in general and defined by an absolutely continuous probability density
Monte-Carlo estimators of the objective function Let a certain number N of scenarios y¹, y², …, y^N be generated for some x ∈ D; the sampling estimator of the objective function, F̃(x) = (1/N) Σⱼ f(x, yʲ), as well as the sampling variance, D²(x) = (1/(N−1)) Σⱼ (f(x, yʲ) − F̃(x))², are computed
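These sampling estimators can be sketched in a few lines of Python. The newsvendor-style scenario cost `f`, its parameters `c` and `q`, and the normal scenario distribution below are hypothetical stand-ins for the slides' second-stage recourse cost:

```python
import numpy as np

def sample_estimators(f, x, scenarios):
    """Sampling estimator of F(x) = E f(x, y) and its variance
    from N simulated scenarios, plus a ~95% confidence half-width."""
    vals = np.array([f(x, y) for y in scenarios])
    N = vals.size
    F_hat = vals.mean()                      # sampling estimator of the objective
    D2 = vals.var(ddof=1)                    # sampling variance
    half_width = 1.96 * np.sqrt(D2 / N)      # confidence half-interval
    return F_hat, D2, half_width

# hypothetical scenario cost: first-stage cost plus a simple recourse term
c, q = 1.0, 3.0
f = lambda x, y: c * x + q * max(y - x, 0.0)

rng = np.random.default_rng(0)
scenarios = rng.normal(10.0, 2.0, size=2000)   # N = 2000 scenarios
F_hat, D2, hw = sample_estimators(f, 9.0, scenarios)
```

The half-width shrinks like 1/√N, which is what the adaptive sample-size regulation later exploits.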
Monte-Carlo estimators of the stochastic gradient The gradient estimator G̃(x) = (1/N) Σⱼ g(x, yʲ), as well as the sampling covariance matrix A(x) = (1/(N−1)) Σⱼ (g(x, yʲ) − G̃(x)) (g(x, yʲ) − G̃(x))^T, are evaluated using the same random sample, where g(x, y) is a stochastic gradient, i.e. E g(x, y) = ∇F(x)
Statistical testing of the optimality hypothesis under asymptotic normality The optimality hypothesis is rejected if 1) the statistical hypothesis that the gradient equals zero is rejected (Hotelling T² test), or 2) the confidence interval of the objective function exceeds the admissible value
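A sketch of this two-part stopping test, assuming the N stochastic-gradient samples are available as rows of an array; the scaling (N−n)/(n(N−1)) that maps the Hotelling T² statistic to the Fisher F law is the standard one:

```python
import numpy as np
from scipy.stats import f as fisher_f

def optimality_rejected(G, conf_halfwidth, eps_adm, alpha=0.05):
    """Reject optimality if (1) the Hotelling T^2 test rejects
    'mean gradient equals zero', OR (2) the objective's confidence
    half-width exceeds the admissible value eps_adm.
    G: (N, n) array of stochastic-gradient samples at the current point."""
    N, n = G.shape
    g_bar = G.mean(axis=0)
    S = np.cov(G, rowvar=False)                  # sampling covariance matrix
    T2 = N * (g_bar @ np.linalg.solve(S, g_bar)) # Hotelling T^2 statistic
    # under normality, T^2 * (N - n) / (n * (N - 1)) ~ F(n, N - n)
    F_stat = T2 * (N - n) / (n * (N - 1))
    grad_nonzero = bool(F_stat > fisher_f.ppf(1 - alpha, n, N - n))
    return grad_nonzero or (conf_halfwidth > eps_adm)
```

Either condition alone keeps the search running: a significantly nonzero gradient, or an objective estimate that is still too imprecise.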
Stochastic differentiation • We examine several estimators for the stochastic gradient: • Dual solution approach (DS); • Finite difference approach (FD); • Simultaneous perturbation stochastic approximation (SPSA); • Likelihood ratio approach (LR).
Dual solution approach (DS) The stochastic gradient is expressed as g(x, y) = c − T^T u*(x, y), using the set of solutions u*(x, y) of the dual problem max {(h − T x)^T u : W^T u ≤ q}
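A minimal sketch of the DS estimator for one scenario, using `scipy.optimize.linprog` to solve the second-stage dual; the toy data `W`, `T`, `h`, `c`, `q` are illustrative, not from the slides:

```python
import numpy as np
from scipy.optimize import linprog

def ds_gradient(c, q, W, T, h, x):
    """Dual-solution (DS) stochastic gradient for one scenario (q, W, T, h):
    solve the dual of the second-stage LP  min {q^T y : W y = h - T x, y >= 0},
    i.e.  max {(h - T x)^T u : W^T u <= q},  and return  g = c - T^T u*."""
    rhs = h - T @ x
    res = linprog(-rhs,                     # linprog minimizes, so negate
                  A_ub=W.T, b_ub=q,
                  bounds=[(None, None)] * W.shape[0],  # dual variables are free
                  method="highs")
    u = res.x                               # dual optimum u*
    return c - T.T @ u

# illustrative scenario: identity recourse matrix and positive rhs,
# so the dual optimum is simply u* = q
c = np.array([1.0, 1.0])
q = np.array([1.0, 1.0])
W = np.eye(2)
T = 0.5 * np.eye(2)
h = np.array([1.5, 1.3])
g = ds_gradient(c, q, W, T, h, x=np.array([2.0, 2.0]))
```

One LP solve per scenario yields an exact subgradient, which is why DS is the most reliable estimator in the numerical study below.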
Finite difference (FD) approach In this approach each i-th component of the stochastic gradient is computed as gᵢ(x, y) = (f(x + δeᵢ, y) − f(x, y)) / δ, where eᵢ is the vector with zero components except the i-th one, equal to 1, and δ > 0 is a certain small value
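The FD estimator is n+1 evaluations of the scenario cost per gradient; a sketch, where the quadratic `f` used to exercise it is a hypothetical test function:

```python
import numpy as np

def fd_gradient(f, x, y, delta=1e-4):
    """Finite-difference (FD) stochastic gradient: the i-th component is
    (f(x + delta*e_i, y) - f(x, y)) / delta, with the same scenario y
    reused for every coordinate (common random numbers)."""
    x = np.asarray(x, float)
    base = f(x, y)
    g = np.empty(x.size)
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = delta                        # e_i scaled by the small step delta
        g[i] = (f(x + e, y) - base) / delta
    return g

# hypothetical test function with known gradient 2*(x - y)
f = lambda x, y: np.sum((x - y) ** 2)
g = fd_gradient(f, [1.0, 2.0], np.zeros(2))
```

Reusing one scenario `y` for all n perturbations keeps the per-coordinate differences correlated, so the noise does not swamp the O(δ) bias.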
Simultaneous perturbation stochastic approximation (SPSA) g(x, y) = (f(x + δΔ, y) − f(x − δΔ, y)) / (2δΔ) (componentwise), where Δ is a random vector whose components take the values 1 or −1 with probabilities p = 0.5, and δ > 0 is some small value (Spall (2003))
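In contrast to FD, SPSA perturbs all coordinates at once and needs only two evaluations per estimate regardless of n; a single draw is biased coordinate-wise but unbiased in expectation, which the averaging below illustrates (the quadratic test function is hypothetical):

```python
import numpy as np

def spsa_gradient(f, x, y, delta, rng):
    """SPSA estimator (Spall 2003): perturb all coordinates simultaneously
    by a random +/-1 vector Delta; two evaluations of f per estimate,
    independently of the dimension n."""
    Delta = rng.choice([-1.0, 1.0], size=len(x))   # +/-1 with p = 0.5 each
    diff = f(x + delta * Delta, y) - f(x - delta * Delta, y)
    return diff / (2.0 * delta * Delta)            # componentwise division

rng = np.random.default_rng(0)
f = lambda x, y: np.sum((x - y) ** 2)              # gradient is 2*(x - y)
x, y = np.array([1.0, 2.0]), np.zeros(2)

# average many single-draw estimates; the mean approaches the true gradient
g_avg = np.mean([spsa_gradient(f, x, y, 1e-3, rng) for _ in range(4000)], axis=0)
```

The two-evaluation cost explains why SPSA needs larger Monte-Carlo samples than DS or FD in the study that follows.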
Likelihood ratio (LR) approach (Rubinstein and Shapiro (1993); Sakalauskas (2002))
Numerical study of stochastic gradient estimators (1) The methods for stochastic differentiation have been explored with test functions
Numerical study of stochastic gradient estimators (2) Stochastic gradient estimators from samples of size N (number of scenarios) were computed at the known optimum point x* (i.e., ∇F(x*) = 0) for test functions depending on n parameters. This was repeated 400 times, and the corresponding sample of Hotelling T² statistics was analyzed according to two goodness-of-fit criteria
Goodness-of-fit criterion vs. number of variables n and Monte-Carlo sample size N (critical value 0.46)
Goodness-of-fit criterion vs. number of variables n and Monte-Carlo sample size N (critical value 2.49)
Statistical criteria vs. Monte-Carlo sample size N for number of variables n=40 (critical values 0.46 and 2.49)
Statistical criteria vs. Monte-Carlo sample size N for number of variables n=60 (critical values 0.46 and 2.49)
Statistical criteria vs. Monte-Carlo sample size N for number of variables n=80 (critical values 0.46 and 2.49)
Numerical study of stochastic gradient estimators (8) Conclusion: the distribution of the T² statistic may be approximated by the Fisher law when the number of scenarios is sufficiently large
Frequency of accepting the optimality hypothesis vs. the distance to the optimum (n=2)
Frequency of accepting the optimality hypothesis vs. the distance to the optimum (n=10)
Frequency of accepting the optimality hypothesis vs. the distance to the optimum (n=20)
Frequency of accepting the optimality hypothesis vs. the distance to the optimum (n=50)
Frequency of accepting the optimality hypothesis vs. the distance to the optimum (n=100)
Numerical study of stochastic gradient estimators (14) Conclusion: stochastic differentiation by the Dual Solution and Finite Difference approaches enables us to reliably estimate the stochastic gradient; the SPSA and Likelihood Ratio approaches work only under stronger conditions on the sample size
Gradient search procedure Let some initial point x⁰ ∈ D be chosen, a random sample of a certain initial size N⁰ be generated at this point, and the Monte-Carlo estimators be computed. The iterative stochastic procedure of gradient search is x^(t+1) = Π_ε(x^t − ρ G̃(x^t)), where Π_ε is the projection onto the ε-feasible set
The rule to choose the number of scenarios We propose the following rule to regulate the number of scenarios, increasing the sample size as the solution approaches the optimum. Thus, the iterative stochastic search is performed until the statistical criteria no longer contradict the optimality conditions
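The search with an adaptively growing scenario count can be sketched as below. The concrete growth rule (sample size inversely proportional to the squared norm of the gradient estimate, clipped to [Nmin, Nmax]) and the toy gradient sampler are assumptions standing in for the slides' elided formulas, and the ε-feasible projection is omitted for brevity:

```python
import numpy as np

def search_with_adaptive_sampling(grad_batch, x0, rho=0.1, n_min=100,
                                  n_max=10000, growth_c=100.0, max_iter=100,
                                  rng=None):
    """Iterative stochastic gradient search: at each iterate, average N
    stochastic gradients, step against the average, and regulate N so
    that more scenarios are drawn as the gradient signal weakens.
    grad_batch(x, N, rng) must return an (N, n) array of gradient samples."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, float)
    N = n_min
    for _ in range(max_iter):
        G = grad_batch(x, N, rng)
        g_bar = G.mean(axis=0)
        x = x - rho * g_bar                # projection onto feasible set omitted
        # hypothetical sample-size rule: more scenarios near the optimum
        N = int(min(n_max, max(n_min, growth_c / max(g_bar @ g_bar, 1e-12))))
    return x

# toy problem: minimize ||x||^2 from noisy gradient samples 2x + noise
grad_batch = lambda x, N, rng: 2.0 * x + rng.normal(0.0, 1.0, size=(N, x.size))
x_final = search_with_adaptive_sampling(grad_batch, [5.0, 5.0])
```

Far from the optimum the cheap minimum sample suffices; near it the rule drives N toward Nmax, which is exactly the geometric-progression growth invoked in the convergence argument below.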
Linear convergence Under certain conditions on the finiteness and smooth differentiability of the objective function, the proposed algorithm converges a.s. to a stationary point with a linear rate, where K, L, C, l are certain constants (Sakalauskas (2002), (2004))
Linear convergence Since the Monte-Carlo sample size increases at a geometric-progression rate, it follows: Conclusion: the approach proposed enables us to solve SP problems by computing only a finite number of values of the expected objective function
Numerical study of the stochastic optimization algorithm Test problems have been solved from the database of two-stage stochastic linear optimisation problems: http://www.math.bme.hu/~deak/twostage/ l1/20x20.1/. The dimensionality of the tasks ranges from n=20 to n=80 (30 to 120 variables at the second stage). All solutions given in the database were achieved, and in a number of cases we succeeded in improving the known solutions, especially for large numbers of variables
Two-stage stochastic programming problem (n=20) • The estimate of the optimal value of the objective function given in the database is 182.94234 ± 0.066 (improved to 182.59248 ± 0.033) • N0 = Nmin = 100, Nmax = 10000 • A maximal number of iterations was set; generation of trials was terminated when the estimated confidence interval of the objective function no longer exceeded the admissible value • Initial data as follows: • Solution repeated 500 times
Frequency of stopping vs. number of iterations and admissible confidence interval
Change of the objective function vs. number of iterations and admissible interval
Change of the confidence interval vs. number of iterations and admissible interval
Change of the Hotelling statistic vs. admissible interval
Change of the Monte-Carlo sample size vs. number of iterations and admissible interval
Solving DB Test Problems (1) Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. DB given solution: 649.604 ± 0.053. Solution by the developed algorithm: 646.444 ± 0.999.
Solving DB Test Problems (2) Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. DB given solution: 6656.637 ± 0.814. Solution by the developed algorithm: 6648.548 ± 0.999.
Solving DB Test Problems (3) Two-stage SP problem; first stage: 80 variables, 40 constraints; second stage: 80 variables, 120 constraints. DB given solution: 586.329 ± 0.327. Solution by the developed algorithm: 475.012 ± 0.999.
Conclusions • A stochastic iterative method has been developed to solve SLP problems by a finite sequence of Monte-Carlo sampling estimators • The approach presented rests on a statistical termination procedure and adaptive regulation of the Monte-Carlo sample size • The computational results show that the approach developed provides estimators for reliably solving SLP problems and testing the optimality hypothesis over a wide range of dimensionalities (2 < n < 100) • The approach enables us to generate an almost unbounded number of scenarios and to solve SLP problems with admissible accuracy • The total volume of computations for solving an SLP problem exceeds by only several times the number of scenarios needed to evaluate one value of the expected objective function
References • Rubinstein, R. Y., and Shapiro, A. (1993). Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method. Wiley & Sons, N.Y. • Shapiro, A., and Homem-de-Mello, T. (1998). A simulation-based approach to two-stage stochastic programming with recourse. Mathematical Programming, 81, 301-325. • Sakalauskas, L. (2002). Nonlinear stochastic programming by Monte-Carlo estimators. European Journal of Operational Research, 137, 558-573. • Spall, J. C. (2003). Introduction to Stochastic Search and Optimization. J. Wiley & Sons. • Sakalauskas, L. (2004). Application of the Monte-Carlo method to nonlinear stochastic optimization with linear constraints. Informatica, 15(2), 271-282. • Sakalauskas, L. (2006). Towards implementable nonlinear stochastic programming. In K. Marti et al. (Eds.), Coping with Uncertainty. Springer Verlag
Announcements Welcome to the EURO Mini Conference “Continuous Optimization and Knowledge Based Technologies (EUROPT-2008)” May 20-23, 2008, Neringa, Lithuania http://www.mii.lt/europt-2008