Job Release-Time Design in Stochastic Manufacturing Systems Using Perturbation Analysis By: Dongping Song Supervisors: Dr. C. Hicks & Dr. C. F. Earl Department of MMM Engineering University of Newcastle upon Tyne March, 2000
Overview 1. Introduction 2. Problem formulation 3. Perturbation analysis (PA) 4. PA algorithm 5. Numerical examples 6. Conclusions
Introduction -- a real example Number of jobs = 113; number of resources = 13.
Introduction -- a simple product structure (figure: product assembled from components, with WIP between the stages)
Introduction -- job release times • Si -- the job release times • Jobs incur waiting time if {Si} is not well designed.
Introduction -- backwards scheduling Backwards scheduling performs poorly when processing times are uncertain or resource capacity is finite.
Introduction -- uncertainty problem Processing times follow probability distributions. (figure: distribution of completion time and the resulting tardy probability)
Introduction -- resource problem Job 2 and job 3 use the same resource ⇒ job 2 is delayed and job 1 is delayed ⇒ resulting in waiting times and tardiness.
Problem formulation • Find the optimal S = (S1, S2, …, Sn) to minimise the expected total cost: • J(S) = E{ Σ (WIP holding costs + product earliness costs + product tardiness costs) } • The key step of stochastic approximation is: • ∂J(S)/∂Si = ?
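To make the objective concrete, here is a minimal Python sketch of evaluating one sample of this cost for a given release-time vector S. The unit costs h, e, t, the due dates and the simulated completion times are hypothetical placeholders, not values from the talk:

```python
import numpy as np

def sample_total_cost(release, completion, due, h=1.0, e=2.0, t=10.0):
    """One realisation of the cost inside J(S): WIP holding + earliness + tardiness.

    release    -- job release times S_i
    completion -- simulated completion times from one sample path
    due        -- product due dates
    h, e, t    -- hypothetical unit costs (assumptions, not from the talk)
    """
    wip_holding = h * np.sum(completion - release)               # time spent in the system
    earliness   = e * np.sum(np.maximum(due - completion, 0.0))  # finished too early
    tardiness   = t * np.sum(np.maximum(completion - due, 0.0))  # finished too late
    return wip_holding + earliness + tardiness

# J(S) is the expectation of this quantity over the random processing times;
# averaging sample_total_cost over many simulated paths estimates it.
```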
Perturbation analysis -- references • Ho, Y.C. and Cao, X.R., 1991, Perturbation Analysis of Discrete Event Dynamic Systems, Kluwer. • Glasserman, P., 1991, Gradient Estimation Via Perturbation Analysis, Kluwer. • Cassandras, C.G., 1993, Discrete Event Systems: Modeling and Performance Analysis, Aksen.
Perturbation analysis -- general problem • Consider minimising J(θ) = E[L(θ,ω)] J(·) -- the system performance index. L(·) -- the sample performance function. θ -- a vector of n real parameters. ω -- a realisation of the set of random sequences. • PA aims to find an unbiased estimator of the gradient ∂J(θ)/∂θi with as little computation as possible.
Perturbation analysis -- main idea • Calculate the sample function gradient ∂L(θ,ω)/∂θi, i = 1, 2, …, n • Based on a single sample realisation • Using theoretical analysis • Exchange E and ∂: does E[∂L(θ,ω)/∂θi] = ∂E[L(θ,ω)]/∂θi = ∂J(θ)/∂θi hold? (A small numerical check of this interchange is sketched below.)
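A tiny numerical check of the interchange, using a toy sample function L(θ, ω) = max(θ, ω) with exponentially distributed ω (an illustrative choice, not the talk's model): the sample-path derivative is the indicator 1{θ > ω}, and its average should match the finite-difference slope of E[L].

```python
import numpy as np

rng = np.random.default_rng(0)
theta, delta = 1.0, 1e-3
omega = rng.exponential(scale=1.0, size=200_000)   # random "processing times"

# Sample-path (PA) estimate: dL/dtheta = 1 when theta > omega, else 0.
pa_estimate = np.mean(theta > omega)

# Finite-difference estimate of dE[L]/dtheta computed from E[L] directly.
fd_estimate = (np.mean(np.maximum(theta + delta, omega)) -
               np.mean(np.maximum(theta, omega))) / delta

print(pa_estimate, fd_estimate)   # both close to P(omega < theta) = 1 - e^{-1} ~ 0.632
```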
PA algorithm -- concepts • Sample realisation for {Si} -- nominal path (NP) • Sample realisation for {Si + Δ, Sj, j ≠ i} -- perturbed path (PP), where Δ is sufficiently small • All perturbed paths are constructed theoretically from the NP rather than from new experiments
PA algorithm -- perturbation rules • Perturbation generation rule -- when does the PP start to deviate from the NP? • Perturbation propagation rule -- how does the perturbation of one job affect the processing of other jobs? -- along the critical paths -- along the critical resources • Perturbation disappearance rule -- when do the PP and NP overlap again?
PA algorithm -- perturbation rules • If S2 is perturbed to S2 + Δ • Cost changes due to the perturbation (figure: perturbation generation followed by perturbation disappearance)
PA algorithm -- perturbation rules • If S3 is perturbed to S3 + Δ • Cost changes due to the perturbation (figure: perturbation generation followed by perturbation propagation). A brute-force sketch of these rules on a simplified line follows below.
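The sketch below illustrates the three rules on a much simpler model than the talk's (a single machine processing jobs in a fixed order, with hypothetical data). It recomputes nominal and perturbed completion times just to make generation, propagation and disappearance visible; the PA algorithm itself derives these differences from the nominal path alone.

```python
def completion_perturbations(release, proc, i, delta):
    """Per-job completion-time change when release[i] is perturbed by delta.

    Single-machine line, jobs in fixed order, recursion
    C[k] = max(release[k], C[k-1]) + proc[k].
    Generation: the change appears when job i starts at its release time.
    Propagation: it passes to job k+1 while job k+1 waits for the machine.
    Disappearance: it vanishes once a later job starts at its own release time.
    """
    n = len(release)
    c_nom = c_per = 0.0
    diffs = []
    for k in range(n):
        r_per = release[k] + (delta if k == i else 0.0)
        c_nom = max(release[k], c_nom) + proc[k]
        c_per = max(r_per, c_per) + proc[k]
        diffs.append(c_per - c_nom)
    return diffs

# Example with made-up numbers: perturbing job 1's release generates a change,
# which propagates to job 2 and disappears at job 3.
print(completion_perturbations([0.0, 4.0, 5.0, 12.0], [3.0, 3.0, 2.0, 2.0], 1, 0.5))
# -> [0.0, 0.5, 0.5, 0.0]
```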
PA algorithm -- gradient estimate • The sample function gradient ∂L(S,ω)/∂Si is calculated from the PP and NP -- it can usually be expressed by indicator functions (see the sketch below) • Unbiasedness of the gradient estimator: E[∂L(S,ω)/∂Si] = ∂J(S)/∂Si • Condition: processing times are independent continuous random variables
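A sketch of how such indicator-function gradients fall out of the nominal path, again on the simplified single-machine line rather than the talk's assembly model: the derivative of the last completion time with respect to each Si is 1 exactly when job i starts at its own release time and the resulting shift travels along the critical path to the last job.

```python
import numpy as np

def ipa_last_completion_gradient(S, p):
    """Nominal completion times and dC_last/dS_i for C[k] = max(S[k], C[k-1]) + p[k].

    dC_last/dS_i = 1{job i starts at S[i]} * prod_{j > i} 1{job j waits for the machine},
    an indicator read directly off the nominal path (no extra simulation).
    """
    n = len(S)
    C = np.empty(n)
    starts_at_release = np.empty(n, dtype=bool)
    prev = 0.0
    for k in range(n):
        starts_at_release[k] = S[k] >= prev     # ties have probability zero for continuous p
        C[k] = max(S[k], prev) + p[k]
        prev = C[k]
    grad = np.zeros(n)
    reaches_last = True                          # would a shift of C[k] reach C[n-1]?
    for k in range(n - 1, -1, -1):
        if reaches_last and starts_at_release[k]:
            grad[k] = 1.0
        # a shift of C[k-1] reaches the end only if job k was waiting for the machine
        reaches_last = reaches_last and not starts_at_release[k]
    return C, grad

# With independent continuous processing times these indicators are, with
# probability one, valid sample derivatives -- the condition stated above.
```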
Stochastic approximation • Iteration equation: θ_{k+1} = θ_k + γ_k · ∇Ĵ_k, where γ_k is the step size and ∇Ĵ_k is the gradient estimator of ∇J • Robbins-Monro (RM) algorithm: if E[∇Ĵ_k] = ∇J • Kiefer-Wolfowitz (KW) algorithm: if ∇Ĵ_k is a finite-difference estimate • RM is faster than KW (Fu and Hu, 1997).
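A generic Robbins-Monro sketch in Python; the step-size schedule γ_k = a/(k+1) and the iteration count are illustrative assumptions, and the update moves against the estimated gradient to minimise J (the slide's generic form absorbs the sign into the step size).

```python
import numpy as np

def robbins_monro(grad_estimator, theta0, iters=500, a=1.0):
    """theta_{k+1} = theta_k - gamma_k * gradhat_k with gamma_k = a / (k + 1).

    grad_estimator(theta) must return an unbiased estimate of grad J(theta),
    e.g. a PA estimate computed from one simulated sample path.
    """
    theta = np.asarray(theta0, dtype=float)
    for k in range(iters):
        gamma = a / (k + 1)                      # diminishing step size
        theta = theta - gamma * np.asarray(grad_estimator(theta))
    return theta

# Plugging in a PA estimator gives the RM variant; plugging in a
# finite-difference estimator gives the KW variant, which needs many
# more simulation runs per iteration.
```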
Time comparison for gradient estimates • Finite-difference estimator of the gradient: (1/K) Σ_{k=1..K} [L(S + Δ·e_i, ω_k) − L(S, ω_k)] / Δ • PA estimator of the gradient: (1/K) Σ_{k=1..K} ∂L(S, ω_k)/∂Si -- where ω_1, ω_2, …, ω_K is a sequence of sample processes.
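The expressions above are the standard forms of the two estimators. A sketch of why their costs differ (Python; `simulate` is a placeholder name standing for one run of the manufacturing-system model): one-sided finite differences need n + 1 simulation runs per sample path, whereas a PA estimator reads all n partial derivatives off the single nominal path.

```python
import numpy as np

def finite_difference_gradient(simulate, S, delta=1e-2):
    """One-sided finite differences: n + 1 calls to the simulator per sample.

    For a fair comparison the same random numbers (common random numbers)
    should ideally be reused across the perturbed runs.
    """
    S = np.asarray(S, dtype=float)
    base = simulate(S)
    grad = np.empty(len(S))
    for i in range(len(S)):
        S_pert = S.copy()
        S_pert[i] += delta
        grad[i] = (simulate(S_pert) - base) / delta
    return grad

# A PA estimator instead calls simulate(S) once and extracts all n sample
# derivatives from that nominal path (as in the indicator sketch above),
# which is why its cost grows far more slowly with the number of jobs.
```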
Time comparison for gradient estimates • Time needed to obtain the gradient estimator with K = 1000 (figure: time in seconds versus number of jobs, for the simulation method and the PA method)
Example 1 -- two-stage, uniform distribution • Two-stage serial system with uniformly distributed processing times • Compare with theoretical results (Yano, 1987)
Example 1 -- two-stage, uniform distribution • Convergence of the planned parameters (S1, S2) (figure: trajectory in the (S1, S2) plane converging to (6.96, 8.44))
Example 2 -- two-stage, exponential distribution • Two-stage serial system with exponentially distributed processing times • Compare with theoretical results (Yano, 1987)
Example 2 -- two-stage, exponential distribution • Convergence of the planned parameters (S1, S2) (figure: trajectory in the (S1, S2) plane converging to (7.22, 8.42))
Example 3 -- multi-stage system • Assume: normally distributed processing times; infinite-capacity model • Product structure: (figure)
Convergence of cost in PA+SA (figure: J(S) versus iteration number)
The maximum gradient in PA+SA (figure: (±) max{|∂J(S)/∂Si|, i = 1, …, n} versus iteration number)
Compare with simulated annealing • Compare the convergence of cost over time (seconds), where simulated annealing uses four different settings (initial step sizes and the number of iterations used to check equilibrium) (figure: J(S) versus time in seconds, for simulated annealing and the PA+SA method)
Example 4 -- complex system • Assume: normally distributed processing times and a finite-capacity model.
Resource constraints (resource: job sequence)
• 1000: 247, 243, 239, 234, 231, 246, 242, 238, 230, 245, 237, 229, 228
• 1211: 236:1, 236:2, 236:3, 236:4, 236:5, 236:6, 236:7, 226:1, 236:8, 226:2, 226:3, 226:4, 226:5, 226:6, 236:11, 226:7, 232:1, 226:8, 235:1, 232:2, 236:12, 235:2, 226:9, 232:3, 235:3, 240:1, 235:4, 240:2, 226:10, 232:5, 236:13, 233:2, 235:5, 240:3, 233:3, 235:6, 240:4, 232:7, 226:11, 233:4, 235:7, 240:5, 232:8, 233:5, 235:8, 240:6, 232:9, 233:6, 240:7, 226:12, 232:10, 235:9, 240:8, 233:8, 240:9, 233:9, 226:13, 235:10, 240:10, 236:15, 226:14, 240:11, 236:16, 226:15
• 1212: 236:9, 236:10, 232:4, 232:6, 236:14, 232:11, 232:12
• 1511: 233:1, 233:7, 233:11
• 1129: 233:10
• 1224: 233:12
• 1222: 244:1, 244:3, 244:5, 241:1, 241:2, 241:3, 248:2, 248:3, 248:5, 248:6
• 1113: 244:2, 241:4, 241:5, 248:4
• 1115: 241:6, 241:7
• 1315: 244:4
• 1226: 244:6, 244:7
• 1125: 244:8, 248:7, 248:8
• 1411: 244:9, 248:1
Total number of jobs: 113; number of resources: 13.
Convergence of cost in PA+SA (figure: J(S) versus iteration number, with annotated values 784.9 and 120.7)
The maximum gradient in PA+SA (figure: (±) max{|∂J(S)/∂Si|, i = 1, …, n} versus iteration number)
Compare with simulated annealing • Compare the convergence of cost over time (minutes), with simulated annealing under four different settings (figure: J(S) versus time in minutes, for simulated annealing and the PA+SA method)
Conclusions • An effective algorithm for designing job release times • Can deal with complex systems beyond the reach of analytical methods • Obtains the gradient estimator faster than the simulation (finite-difference) method • Optimises the parameters faster than simulated annealing • Does not depend on particular distributions and can incorporate other stochastic factors
Further Work • Convexity of the cost function and the global optimisation problem • The effect of different job sequences on job release-time design • Further comparison with other optimisation methods