
  1. MS&E 348 The L-Shaped Method: Theory and Example 2/5/04 Lecture

  2. Review: Dantzig-Wolfe (“DW”) Decomposition
  • We recognize delayed column generation as the centerpiece of the decomposition algorithm
  • Even though the master problem can have a huge number of columns, a column is generated only after it is found to have a negative reduced cost and is about to enter the basis
  • The subproblems are smaller LP problems that are employed as an economical search method for discovering columns with negative reduced costs
  • A variant can be used whereby all columns that have been generated in the past are retained

  3. Review: Benders Decomposition
  • Benders decomposition uses delayed constraint generation and the cutting plane method; contrast this with DW, which uses column generation
  • Benders is essentially DW applied to the dual
  • As with DW, we have the option of discarding all or some of the constraints in the relaxed primal that have become inactive

  4. Review: Two-Stage Problem with Fixed Recourse
  As defined by Dantzig (1955) and Beale (1955):
  min z = cTx + Ew[ min q(w)Ty(w) ]
  s.t. Ax = b
       T(w)x + Wy(w) = h(w)   (T: technology matrix, W: recourse matrix)
       x ≥ 0, y(w) ≥ 0
  In extensive form (“EF”) for a discrete finite set of scenarios k = 1, …, K:
  min cTx + ∑k pk qkTyk
  s.t. Ax = b
       Tkx + Wyk = hk,  k = 1, …, K
       x ≥ 0, yk ≥ 0
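For concreteness, here is a minimal sketch (not from the slides) of solving the extensive form directly, by stacking the block-angular constraint matrix and handing it to a generic LP solver. It assumes scipy.optimize.linprog with the HiGHS backend; the helper name solve_extensive_form is mine, and the first-stage constraint Ax = b is omitted because the demo data (the |x - ξ| example solved later in this lecture) has only simple bounds on x.

```python
# Sketch: solve the two-stage extensive form ("EF") as one large LP.
import numpy as np
from scipy.optimize import linprog

def solve_extensive_form(c, q, p, T_list, W, h_list, x_bounds):
    """min c'x + sum_k p_k q'y_k  s.t.  T_k x + W y_k = h_k, y_k >= 0, x in bounds.
    (Ax = b is omitted here; the example below has only bounds on x.)"""
    K, n_x, n_y = len(p), len(c), W.shape[1]
    # Objective over the stacked variable vector (x, y_1, ..., y_K).
    obj = np.concatenate([c] + [p[k] * q for k in range(K)])
    A_eq, b_eq = [], []
    for k in range(K):                       # one block row per scenario
        row = np.zeros((W.shape[0], n_x + K * n_y))
        row[:, :n_x] = T_list[k]             # T_k in the x columns
        row[:, n_x + k * n_y : n_x + (k + 1) * n_y] = W
        A_eq.append(row)
        b_eq.append(h_list[k])
    bounds = list(x_bounds) + [(0, None)] * (K * n_y)
    res = linprog(obj, A_eq=np.vstack(A_eq), b_eq=np.concatenate(b_eq),
                  bounds=bounds, method="highs")
    return res.x[:n_x], res.fun

# Data of the example solved at the end of these slides: c = 0, 0 <= x <= 10,
# Q(x, xi) = |x - xi| modelled with y = (y1, y2, y3), xi in {1, 2, 4} w.p. 1/3 each.
W = np.array([[1.0, -1.0, 0.0], [1.0, 0.0, -1.0]])
q = np.array([1.0, 0.0, 0.0])
T = np.array([[1.0], [-1.0]])
xis = [1.0, 2.0, 4.0]
x_star, z_star = solve_extensive_form(
    c=np.array([0.0]), q=q, p=[1/3] * 3, T_list=[T] * 3, W=W,
    h_list=[np.array([xi, -xi]) for xi in xis], x_bounds=[(0.0, 10.0)])
print(x_star, z_star)        # expect x* = 2, z* = 1 (cf. the last slides)
```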

  5. Why the L-Shaped Method?
  Block structure of the 2-stage extensive form constraint matrix:
       [ A               ]
       [ T1   W          ]
       [ T2        W     ]
       [ …             … ]
       [ TK            W ]
  Hence the name: L-shaped method! The 2-stage dual has the transposed block structure, with AT, T1T, T2T, …, TKT across the first row and WT blocks down the diagonal.

  6. Connection between Benders/DW and the L-Shaped Method
  • Given this block structure, it seems natural to exploit the dual structure by performing a DW (1960) decomposition (inner linearization) of the dual…
  • … or a Benders (1962) decomposition (outer linearization) of the primal
  • The method has been extended in stochastic programming to take care of feasibility questions and is known as Van Slyke and Wets’ (1969) L-shaped method
  • It is a cutting plane technique

  7. Comments on the L-Shaped Method
  • The method consists of solving an approximation of the recourse function Q by using an outer linearization of Q. Two types of constraints are sequentially added:
    (i) feasibility cuts determining { x | Q(x) < +∞ }, and
    (ii) optimality cuts that are linear approximations to Q on its domain of finiteness
  • Observe that solving P1 is equivalent to solving P2:
    P1:  min cTx + Q(x)              P2:  min cTx + Ө
         s.t. x Є K1 ∩ K2                 s.t. Q(x) ≤ Ө
                                               x Є K1 ∩ K2
    where K1 = { x | Ax = b, x ≥ 0 } and K2 = { x | Q(x) < ∞ }

  8. The L-Shaped Algorithm
  Step 0. Set r = s = v = 0 (the feasibility-cut, optimality-cut, and iteration counters, all initially zero).
  Step 1. Set v = v + 1 (v is the iteration index) and solve the LP (1)–(3):
  min z = cTx + Ө                           (1)
  s.t. Ax = b
       Dlx ≥ dl,       l = 1, …, r          (2)  feasibility cuts
       Glx + Ө ≥ gl,   l = 1, …, s          (3)  optimality cuts
       x ≥ 0, Ө a free real number
  Let (xv, Өv) be an optimal solution.
  Detail for initialization: if no constraint of type (3) is present, Өv is set equal to -∞ and is not considered in the computation of xv.
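One possible rendering of Step 1 in code, again assuming scipy.optimize.linprog with HiGHS; the helper name solve_master and its argument names are mine, and the “Өv = -∞ when no optimality cut exists” convention follows the initialization note above. A sketch under those assumptions, not a definitive implementation:

```python
# Sketch of Step 1: solve the current master LP (1)-(3) with theta as an extra free variable.
import numpy as np
from scipy.optimize import linprog

def solve_master(c, A_eq, b_eq, feas_cuts, opt_cuts, x_bounds):
    """feas_cuts: list of (D_l, d_l) with D_l x >= d_l.
    opt_cuts:  list of (G_l, g_l) with G_l x + theta >= g_l.
    Returns (x_v, theta_v); theta_v = -inf if there is no optimality cut yet."""
    n = len(c)
    A_ub, b_ub = [], []
    # linprog wants "<=" rows, so negate the ">=" cuts; last column is theta.
    for D, d in feas_cuts:
        A_ub.append(np.append(-np.asarray(D), 0.0)); b_ub.append(-d)
    for G, g in opt_cuts:
        A_ub.append(np.append(-np.asarray(G), -1.0)); b_ub.append(-g)
    obj = np.append(c, 1.0 if opt_cuts else 0.0)   # theta enters the objective only once a cut (3) exists
    bounds = list(x_bounds) + [(None, None)]       # theta is free
    if A_eq is not None:                           # pad A_eq with a zero column for theta
        A_eq = np.hstack([A_eq, np.zeros((A_eq.shape[0], 1))])
    res = linprog(obj,
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    x_v = res.x[:n]
    theta_v = res.x[-1] if opt_cuts else -np.inf
    return x_v, theta_v
```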

  9. The L-Shaped Algorithm (Cont’d)
  Step 2. For k = 1, …, K solve the LP (Phase I simplex)
  min wk’ = eTu+ + eTu-                     (4)
  s.t. Wy + Iu+ - Iu- = hk - Tkxv           (5)
       y ≥ 0, u+ ≥ 0, u- ≥ 0
  where eT = (1, …, 1), I is the identity matrix, and u+ and u- are distinct artificial variables,
  until for some k the optimal value wk’ > 0. In that case, let σv be the associated simplex multipliers and define
  Dr+1 = (σv)T Tk                           (6)
  dr+1 = (σv)T hk                           (7)
  to generate a feasibility cut of type (2). Set r = r + 1, add the cut to the constraint set (2), and return to Step 1. If wk’ = 0 for all k, go to Step 3.
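A hedged sketch of Step 2 for a single scenario k; the helper name feasibility_cut is mine. It reads the multipliers σ from scipy's eqlin.marginals, whose sign convention should be verified against your scipy/HiGHS version before trusting the cut:

```python
# Sketch of Step 2: Phase-I LP (4)-(5) and, if its value is positive, the feasibility cut (6)-(7).
import numpy as np
from scipy.optimize import linprog

def feasibility_cut(W, h_k, T_k, x_v):
    """Returns None if w'_k = 0 (scenario k is feasible at x_v),
    otherwise (D_{r+1}, d_{r+1}) defining the cut D x >= d."""
    m, n_y = W.shape
    rhs = h_k - T_k @ x_v
    # Variables ordered (y, u+, u-); only the artificials u+/u- carry cost 1.
    obj = np.concatenate([np.zeros(n_y), np.ones(m), np.ones(m)])
    A_eq = np.hstack([W, np.eye(m), -np.eye(m)])
    res = linprog(obj, A_eq=A_eq, b_eq=rhs,
                  bounds=[(0, None)] * (n_y + 2 * m), method="highs")
    if res.fun <= 1e-9:                 # w'_k = 0: nothing to cut
        return None
    sigma = res.eqlin.marginals         # simplex multipliers of (5)
    # NB: dual sign conventions vary; check that sigma @ rhs reproduces res.fun.
    return sigma @ T_k, sigma @ h_k     # D_{r+1}, d_{r+1}
```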

  10. The L-Shaped Algorithm (Cont’d)
  Step 3. For k = 1, …, K solve the LP
  min wk = qkTy                             (8)
  s.t. Wy = hk - Tkxv
       y ≥ 0
  Let Πkv be the simplex multipliers associated with the optimal solution of problem k of type (8). Define
  Gs+1 = ∑k pk (Πkv)T Tk                    (9)
  gs+1 = ∑k pk (Πkv)T hk                    (10)
  and let wv = gs+1 – Gs+1xv. If Өv ≥ wv, stop: xv is an optimal solution. Otherwise, set s = s + 1, add the optimality cut to the constraint set (3), and return to Step 1.
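A matching sketch of Step 3 (the helper name optimality_cut is mine), with the same caveat about reading the multipliers Πk from the solver rather than from an explicit simplex tableau:

```python
# Sketch of Step 3: solve the K recourse LPs (8) and assemble the optimality cut (9)-(10).
import numpy as np
from scipy.optimize import linprog

def optimality_cut(W, q, p, h_list, T_list, x_v):
    """Returns (G_{s+1}, g_{s+1}, w_v); the cut is G x + theta >= g."""
    G, g = np.zeros(len(x_v)), 0.0
    for p_k, h_k, T_k in zip(p, h_list, T_list):
        rhs = h_k - T_k @ x_v
        res = linprog(q, A_eq=W, b_eq=rhs,
                      bounds=[(0, None)] * W.shape[1], method="highs")
        pi_k = res.eqlin.marginals      # multipliers of W y = h_k - T_k x_v
        # (dual sign conventions vary; pi_k @ rhs should equal res.fun)
        G += p_k * (pi_k @ T_k)
        g += p_k * (pi_k @ h_k)
    w_v = g - G @ x_v                   # equals E[Q(x_v, .)] when the duals are exact
    return G, g, w_v
```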

  11. Example
  min z = cTx + Q(x)  with cT = 0, where Q(x) = ∑k pk Q(x, ξk) and
  Q(x, ξ) = ξ - x  if x ≤ ξ
  Q(x, ξ) = x - ξ  if x ≥ ξ
  (that is, Q(x, ξ) = |x - ξ|). We also assume 0 ≤ x ≤ 10 and
  ξ = 1 w.p. 1/3
  ξ = 2 w.p. 1/3
  ξ = 4 w.p. 1/3
  Solve this example: (i) directly, and (ii) by using the L-shaped algorithm.

  12. Direct Solution
  Q(x) = ∑k pk Q(x, ξk) can be constructed directly as the equally weighted (w.p. 1/3 each) sum of the scenario functions with kinks at ξ1 = 1, ξ2 = 2, ξ3 = 4.
  [Figure: piecewise-linear Q(x) with breakpoints at x = 1, 2, 4; Q(0) = 7/3, Q(1) = 4/3, Q(2) = 1, Q(4) = 5/3; the outer pieces have |slope| = 1.]
  The minimum is attained at x* = 2 with Q(x*) = 1.
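A quick numerical check of this direct construction, assuming Q(x, ξ) = |x - ξ| as defined on the previous slide; the grid is arbitrary:

```python
# Evaluate Q(x) = sum_k p_k |x - xi_k| on a grid over [0, 10] and locate its minimum.
import numpy as np

xis, p = np.array([1.0, 2.0, 4.0]), np.array([1/3, 1/3, 1/3])
xs = np.linspace(0.0, 10.0, 1001)
Q = np.array([np.sum(p * np.abs(x - xis)) for x in xs])
i = int(np.argmin(Q))
print(xs[i], Q[i])   # expect x* = 2, Q(x*) = 1; also Q(0) = 7/3, Q(1) = 4/3, Q(4) = 5/3
```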

  13. Using the L-Shaped Algorithm
  • There are no feasibility cuts, as K1 = R (there is no Ax = b constraint) and K2 = R (Q(x) is defined everywhere)
  • We can reformulate this problem to fit the notation previously used
  • Each subproblem can be formulated as:
    min y1
    s.t. y1 – y2 = ξ - x
         y1 – y3 = -ξ + x
         y ≥ 0
  which is equivalent to min wk = qkTy s.t. Wy = hk - Tkxv, y ≥ 0, with
    W = [ 1  -1   0 ]     hk = [  ξ ]     Tk = [  1 ]     qk = (1, 0, 0)T
        [ 1   0  -1 ]          [ -ξ ]          [ -1 ]
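A small sanity check (my code, using scipy.optimize.linprog) that the W, hk, Tk above really encode Q(x, ξ) = |x - ξ|:

```python
# For fixed x and xi, min y1 s.t. y1 - y2 = xi - x, y1 - y3 = x - xi, y >= 0 should equal |x - xi|.
import numpy as np
from scipy.optimize import linprog

W = np.array([[1.0, -1.0, 0.0],
              [1.0,  0.0, -1.0]])
q = np.array([1.0, 0.0, 0.0])
T = np.array([[1.0], [-1.0]])

def recourse_value(x, xi):
    h = np.array([xi, -xi])
    res = linprog(q, A_eq=W, b_eq=h - T @ np.array([x]),
                  bounds=[(0, None)] * 3, method="highs")
    return res.fun

for x, xi in [(0.0, 1.0), (10.0, 4.0), (7/3, 2.0)]:
    print(recourse_value(x, xi), abs(x - xi))   # the two columns should match
```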

  14. Iteration by Iteration
  Iteration 1
  We solve min Ө s.t. 0 ≤ x ≤ 10. At initialization Ө1 = -∞, and since x is undetermined we assume we start at x1 = 0.
  There is no feasibility issue, so we go to Step 3 of the algorithm. The simplex multipliers are the same for all three scenarios, namely (1, 0). We find
  G1 = ∑k ⅓ [1 0] (1, -1)T = 1
  g1 = ∑k ⅓ [1 0] (ξk, -ξk)T = 7/3
  w1 = g1 – G1x1 = 7/3, and since w1 > Ө1, (x1 = 0, Ө1 = -∞) is not optimal.
  We add the cut of type (3): Ө ≥ 7/3 - x

  15. Graphically…
  [Figure: Q(x) together with the first optimality cut Ө ≥ 7/3 - x.]

  16. Iteration by Iteration (Cont’d)
  Iteration 2
  We solve min Ө s.t. Ө ≥ 7/3 – x, 0 ≤ x ≤ 10, and find (x2 = 10, Ө2 = -23/3).
  There is no feasibility issue, so we go to Step 3 of the algorithm. The simplex multipliers are the same for all three scenarios, namely (0, 1). We find
  G2 = ∑k ⅓ [0 1] (1, -1)T = -1
  g2 = ∑k ⅓ [0 1] (ξk, -ξk)T = -7/3
  w2 = g2 – G2x2 = 23/3, and since w2 > Ө2, (x2 = 10, Ө2 = -23/3) is not optimal.
  We add the optimality cut: Ө ≥ x – 7/3

  17. Graphically…
  [Figure: Q(x) together with the first optimality cut Ө ≥ 7/3 - x and the second optimality cut Ө ≥ x - 7/3.]

  18. Iteration by Iteration (Cont’d)
  Iteration 3
  We solve min Ө s.t. Ө ≥ 7/3 – x, Ө ≥ x - 7/3, 0 ≤ x ≤ 10, and find (x3 = 7/3, Ө3 = 0).
  This time the simplex multipliers are NOT the same for all three scenarios: (0, 1) for ξ1 = 1 and ξ2 = 2 (where x3 ≥ ξ), and (1, 0) for ξ3 = 4 (where x3 ≤ ξ). We find
  G3 = ⅓ [0 1] (1, -1)T + ⅓ [0 1] (1, -1)T + ⅓ [1 0] (1, -1)T = -1/3
  g3 = ⅓ (-1) + ⅓ (-2) + ⅓ (4) = 1/3
  w3 = g3 – G3x3 = 10/9, and since w3 > Ө3, (x3 = 7/3, Ө3 = 0) is not optimal.
  We add the optimality cut: Ө ≥ (x + 1)/3

  19. Iteration by Iteration (Cont’d)
  Iteration 4
  We find (x4 = 3/2, Ө4 = 5/6), which is not optimal. We add the optimality cut: Ө ≥ (5 - x)/3
  Iteration 5
  We find (x5 = 2, Ө5 = 1), which is OPTIMAL.
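Putting the pieces together, here is a compact end-to-end run of the algorithm on this example (my code, not the course's); with scipy/HiGHS it should reproduce the iterates x = 0, 10, 7/3, 3/2, 2 and the four optimality cuts above, stopping at (x* = 2, Ө = 1). Feasibility cuts are skipped since, as noted on slide 13, none arise in this example.

```python
# Sketch: L-shaped iterations on the |x - xi| example, optimality cuts only.
import numpy as np
from scipy.optimize import linprog

W = np.array([[1.0, -1.0, 0.0], [1.0, 0.0, -1.0]])
q = np.array([1.0, 0.0, 0.0])
T = np.array([[1.0], [-1.0]])
xis, p = [1.0, 2.0, 4.0], [1/3, 1/3, 1/3]

x_v, theta_v = np.array([0.0]), -np.inf        # initialization used in the slides
cuts = []                                      # list of (G_l, g_l): G x + theta >= g
for it in range(1, 20):
    # Step 3: solve the K recourse LPs at x_v, read multipliers, build the cut.
    G, g = np.zeros(1), 0.0
    for p_k, xi in zip(p, xis):
        h_k = np.array([xi, -xi])
        sub = linprog(q, A_eq=W, b_eq=h_k - T @ x_v,
                      bounds=[(0, None)] * 3, method="highs")
        pi = sub.eqlin.marginals               # dual multipliers (sign per scipy/HiGHS)
        G += p_k * (pi @ T)
        g += p_k * (pi @ h_k)
    w_v = g - G @ x_v
    print(f"iter {it}: x = {x_v[0]:.4f}, theta = {theta_v:.4f}, w = {w_v:.4f}")
    if theta_v >= w_v - 1e-9:
        break                                  # theta already majorizes the new cut: optimal
    cuts.append((G.copy(), g))
    # Step 1: master  min theta  s.t. all cuts, 0 <= x <= 10, theta free.
    A_ub = np.array([np.append(-G_l, -1.0) for G_l, _ in cuts])
    b_ub = np.array([-g_l for _, g_l in cuts])
    master = linprog(np.array([0.0, 1.0]), A_ub=A_ub, b_ub=b_ub,
                     bounds=[(0.0, 10.0), (None, None)], method="highs")
    x_v, theta_v = master.x[:1], master.x[1]
print("optimal:", x_v[0], theta_v)             # expect x* = 2, theta* = 1
```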

  20. Graphically…
  [Figure: Q(x) together with the four optimality cuts (“OC”) OC 1–OC 4; the final iterate (x* = 2, Ө5 = 1), at the intersection of OC 3 and OC 4, is OPTIMAL.]
