This lecture discusses the concept of dual decomposition in discrete optimization, including its formulation and applications. The lecture also compares dual decomposition with tree-reweighted message passing methods.
Discrete Optimization, Lecture 3 – Part 2
M. Pawan Kumar (pawan.kumar@ecp.fr)
Slides available online: http://cvn.ecp.fr/personnel/pawan/
Outline
• Dual Decomposition
Dual Decomposition

min_x ∑_i gi(x)   s.t. x ∈ C

Introduce one copy xi of x per subproblem:

min_{x,xi} ∑_i gi(xi)   s.t. xi ∈ C, xi = x

Dualize the coupling constraints xi = x with multipliers λi:

max_{λi} min_{x,xi} ∑_i gi(xi) + ∑_i λi^T (xi - x)   s.t. xi ∈ C

KKT condition: ∑_i λi = 0 (otherwise the inner minimization over the unconstrained x is unbounded below)

With ∑_i λi = 0 the x term vanishes and the problem decouples over the slaves:

max_{λi} min_{xi} ∑_i ( gi(xi) + λi^T xi )   s.t. xi ∈ C
Dual Decomposition

max_{λi} min_{xi} ∑_i ( gi(xi) + λi^T xi )   s.t. xi ∈ C

Projected Supergradient Ascent: s is a supergradient of h at z0 if h(z) - h(z0) ≤ s^T (z - z0) for all z in the feasible region.
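The defining inequality can be checked numerically on a small concave, piecewise-linear function; the coefficients below are illustrative, not from the lecture.

```python
# Numerical check of the supergradient inequality for a concave,
# piecewise-linear function h(z) = min_i (a_i * z + b_i).

def h(z, pieces):
    return min(a * z + b for a, b in pieces)

def supergradient(z0, pieces):
    # slope of the linear piece that attains the minimum at z0
    return min(pieces, key=lambda p: p[0] * z0 + p[1])[0]

pieces = [(1.0, 0.0), (-1.0, 2.0), (0.5, 0.5)]
z0 = 0.4
s = supergradient(z0, pieces)

# h(z) - h(z0) <= s * (z - z0) must hold for every feasible z
assert all(h(z, pieces) - h(z0, pieces) <= s * (z - z0) + 1e-9
           for z in [x / 10.0 for x in range(-30, 31)])
```

The dual function of the decomposition is exactly such a concave, piecewise-linear function of λ, which is why supergradients (rather than gradients) are the right object.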
Initialize λi^0 = 0.

Compute supergradients, one slave at a time:

si = argmin_{xi ∈ C} ( gi(xi) + (λi^t)^T xi )

Project the supergradients so that ∑_i λi = 0 is maintained:

pi = si - (∑_j sj) / m,   where m = number of subproblems (slaves)

Update the dual variables:

λi^{t+1} = λi^t + ηt pi,   where ηt is the learning rate, e.g. ηt = 1/(t+1)
Dual Decomposition (summary)

Initialize λi^0 = 0
REPEAT:
  si = argmin_{xi ∈ C} ( gi(xi) + (λi^t)^T xi )
  pi = si - (∑_j sj) / m
  λi^{t+1} = λi^t + ηt pi
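The loop above can be sketched on a toy problem invented for illustration: one binary variable shared by two slaves with unary costs g1 and g2, so the true minimum of g1 + g2 is min(0+3, 2+0) = 2.

```python
# Minimal sketch of dual decomposition with projected supergradient ascent.
g = [[0.0, 2.0], [3.0, 0.0]]    # g[i][label]: cost slave i assigns to each label
m = len(g)                      # number of slaves
lam = [[0.0, 0.0] for _ in range(m)]
best_dual = float("-inf")

for t in range(50):
    # each slave independently minimises g_i + lambda_i (indicator vector s_i)
    labels = [min(range(2), key=lambda l: g[i][l] + lam[i][l]) for i in range(m)]
    s = [[1.0 if l == labels[i] else 0.0 for l in range(2)] for i in range(m)]
    dual = sum(g[i][labels[i]] + lam[i][labels[i]] for i in range(m))
    best_dual = max(best_dual, dual)
    # project: p_i = s_i - mean_j(s_j), which keeps sum_i lambda_i = 0
    mean = [sum(s[i][l] for i in range(m)) / m for l in range(2)]
    for i in range(m):
        for l in range(2):
            lam[i][l] += (1.0 / (t + 1)) * (s[i][l] - mean[l])    # eta_t = 1/(t+1)

assert abs(best_dual - 2.0) < 1e-6   # dual optimum matches the primal minimum here
```

Once the slaves agree on a label, the projected supergradients are zero and the dual variables stop changing, mirroring the stopping behaviour discussed below.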
Outline
• Dual Decomposition
• TRW vs. DD
• DD for Energy Minimization
• Extensions of DD
TRW
[Figure: three tree subproblems over Va, Vb, Vc with reparameterized unary and pairwise potentials; per-tree bounds 6.5, 6.5 and 7]

Optimal tree labelings: f1(a) = 0, f1(b) = 0; f2(b) = 0, f2(c) = 0; f3(c) = 0, f3(a) = 0
Strong Tree Agreement: all trees agree on a single labeling.

DD
Optimal LP solution: ya;0 = 1, ya;1 = 0; yb;0 = 1, yb;1 = 0 (yc not shown). Values of yab;ik not shown, but we know yab;ik = ya;i yb;k.

Supergradients (one row per slave; '-' marks variables the slave does not contain):
Slave 1: sa = (1, 0), sb = (1, 0), sc = -
Slave 2: sa = -, sb = (1, 0), sc = (1, 0)
Slave 3: sa = (1, 0), sb = -, sc = (1, 0)

Projected supergradients: all zero, since every slave picks the same label for each shared variable.

Objective: 6.5 + 6.5 + 7 — no further increase in the dual objective.
Strong Tree Agreement implies DD stops.
TRW
[Figure: three tree subproblems over Va, Vb, Vc; per-tree bounds 4, 0 and 4]

Optimal tree labelings: f1(a) = 1, f1(b) = 1; f2(b) = 1, f2(c) = 0 or f2(b) = 0, f2(c) = 1; f3(c) = 1, f3(a) = 1
Weak Tree Agreement: tree 2 has two optima, only one of which agrees with the other trees; TRW cannot improve further.

DD
Optimal LP solution: ya;0 = 0, ya;1 = 1; yb;0 = 0, yb;1 = 1 (yc not shown). Values of yab;ik not shown, but yab;ik = ya;i yb;k.

Supergradients:
Slave 1: sa = (0, 1), sb = (0, 1), sc = -
Slave 2: sa = -, sb = (0, 1), sc = (1, 0)
Slave 3: sa = (0, 1), sb = -, sc = (0, 1)

Projected supergradients: zero everywhere except for Vc, where the slaves disagree:
Slave 2: pc = (0.5, -0.5); Slave 3: pc = (-0.5, 0.5)

Update with learning rate ηt = 1.

Objective: -0.5 + 4 + 4.3 — a decrease in the dual objective (supergradient steps need not increase the dual at every iteration).

Supergradients at the new point:
Slave 1: sa = (0, 1), sb = (0, 1), sc = -
Slave 2: sa = -, sb = (1, 0), sc = (0, 1)
Slave 3: sa = (0, 1), sb = -, sc = (1, 0)

Projected supergradients:
Slave 1: pb = (-0.5, 0.5)
Slave 2: pb = (0.5, -0.5), pc = (-0.5, 0.5)
Slave 3: pc = (0.5, -0.5)

Update with learning rate ηt = 1/2.

[Figure: updated subproblems after the second step]

Objective: 0 + 4.25 + 4.25 — an increase in the dual objective. DD goes beyond TRW.
DD provides the optimal dual objective.
Outline
• Dual Decomposition
• TRW vs. DD
• DD for Energy Minimization
• Extensions of DD
Dual Decomposition (recap)

Initialize λi^0 = 0
REPEAT:
  si = argmin_{xi ∈ C} ( gi(xi) + (λi^t)^T xi )
  pi = si - (∑_j sj) / m
  λi^{t+1} = λi^t + ηt pi
Dual Decomposition (Komodakis et al., 2007)
[Figure: 3x3 grid MRF over Va–Vi, decomposed into row slaves 1–3 and column slaves 4–6]

If the slaves agree on the label for Va, e.g. s1a = (1, 0) and s4a = (1, 0), then p1a = p4a = (0, 0): the dual variables for Va do not change.

If the slaves disagree, e.g. s1a = (1, 0) and s4a = (0, 1), then p1a = (0.5, -0.5) and p4a = (-0.5, 0.5). The update raises slave 1's unary cost for label 0 and lowers it for label 1 (and vice versa for slave 4), pushing the slaves towards agreement.
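The agree/disagree cases above amount to subtracting the mean over the slaves that contain the variable; a minimal sketch with the slide's numbers:

```python
# Projection step for a variable Va shared by two slaves (indicator vectors).
s1a = [1.0, 0.0]   # slave 1 picks label 0 for Va
s4a = [0.0, 1.0]   # slave 4 picks label 1 for Va (disagreement)

mean = [(a + b) / 2 for a, b in zip(s1a, s4a)]
p1a = [a - m for a, m in zip(s1a, mean)]   # [0.5, -0.5]
p4a = [a - m for a, m in zip(s4a, mean)]   # [-0.5, 0.5]

# if the slaves agreed, the projections would be all zeros
agree = [1.0, 0.0]
assert [a - m for a, m in zip(agree, agree)] == [0.0, 0.0]
```

Adding ηt·p to each slave's unary potentials makes the currently chosen label more expensive for slave 1 and cheaper for slave 4, which is exactly the "push towards agreement" described above.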
Outline
• Dual Decomposition
• TRW vs. DD
• DD for Energy Minimization
• Extensions of DD
Comparison

TRW                       DD
Fast                      Slow
Local maximum             Global maximum
Requires min-marginals    Requires only a MAP estimate

Easier in the DD framework (also possible in the TRW framework):
• Other forms of subproblems
• Tighter relaxations
• Sparse high-order potentials
Subproblems
[Figure: 3x3 grid over Va–Vi split into two subproblems]

Binary labeling problem: black edges are submodular, red edges are supermodular.

Choose the decomposition so that each subproblem contains only submodular edges. Since the dual updates modify only the unary potentials, each subproblem remains submodular over the iterations and can be minimized exactly (e.g. by graph cuts).
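For binary pairwise terms, submodularity is a simple inequality on the four entries of the pairwise table; a quick check (the numbers below are illustrative, not taken from the slide figure):

```python
# A pairwise term theta on binary labels is submodular iff
# theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0).
def is_submodular(theta):
    return theta[0][0] + theta[1][1] <= theta[0][1] + theta[1][0]

attract = [[0.0, 1.0], [1.0, 0.0]]   # Potts-like, submodular ("black" edge)
repel   = [[1.0, 0.0], [0.0, 1.0]]   # supermodular ("red" edge)
assert is_submodular(attract) and not is_submodular(repel)
```

Because the λ updates never touch the pairwise tables, this test needs to hold only once, when the decomposition is chosen.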
Tighter Relaxations
[Figure: decomposition of the grid into overlapping 4-cycle subproblems]

This yields a relaxation that is tight for the above 4-cycles: LP-S + cycle inequalities.
High-Order Potentials
[Figure: grid decomposition with an additional high-order clique slave over Vb, Vc, Ve, Vf]

The clique potential θc;y assigns a value to every labeling y of the clique C.

Subproblem: min_y θc;y + λ^T y — with h labels this ranges over O(h^|C|) labelings!
Sparse High-Order Potentials

θc;y takes only two distinct values, depending on whether Σa ya;0 = 0 or Σa ya;0 > 0 (i.e. whether any clique variable takes label 0). The subproblem min_y θc;y + λ^T y then no longer requires enumerating O(h^|C|) labelings; it can be solved efficiently by case analysis.
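A sketch of why such a two-valued clique potential makes the slave tractable, assuming θ depends only on whether any clique variable takes label 0; the names theta_none and theta_some, and all numbers, are mine.

```python
# Slave for a sparse pattern-based clique potential: theta_{c;y} equals
# theta_none if no clique variable takes label 0, and theta_some otherwise.
# lam[a][l] = dual cost of giving label l to clique variable a.
# Runs in O(|C| * h) instead of enumerating all h^|C| labelings.

def solve_sparse_clique(lam, theta_none, theta_some):
    # case 1: no variable takes label 0
    cost_none = theta_none + sum(min(row[1:]) for row in lam)
    # case 2: at least one variable takes label 0
    free = sum(min(row) for row in lam)          # unconstrained minimum
    if any(min(row) == row[0] for row in lam):   # already uses a label 0
        cost_some = theta_some + free
    else:                                        # force the cheapest switch to 0
        cost_some = theta_some + free + min(row[0] - min(row) for row in lam)
    return min(cost_none, cost_some)

# two clique variables, three labels; illustrative numbers
cost = solve_sparse_clique([[2.0, 1.0, 3.0], [0.0, 5.0, 4.0]],
                           theta_none=0.0, theta_some=10.0)
```

The same case analysis extends to any potential defined by a small number of patterns: solve one restricted minimization per pattern and keep the cheapest.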