
The Traveling Salesman Problem in Theory & Practice


  1. The Traveling Salesman Problem in Theory & Practice Lecture 5: Tour Construction Heuristics 18 February 2014 David S. Johnson dstiflerj@gmail.com http://davidsjohnson.net Seeley Mudd 523, Tuesdays and Fridays

  2. Outline Tour Construction Heuristics • Definitions and Worst-Case Results • NN, Greedy, Savings, Insertion Algorithms, etc. • Performance in Practice (Tour quality)

  3. Nearest Neighbor (NN): Start with some city. Repeatedly go next to the nearest unvisited neighbor of the last city added. When all cities have been added, go from the last back to the first.
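As a concrete illustration (not from the slides), here is a minimal Python sketch of NN for points in the plane; the function names and the list-of-coordinates input format are my own choices:

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Nearest Neighbor heuristic: repeatedly go to the closest
    unvisited city; the tour closes back to the start at the end.
    Runs in O(N^2) time for an N-city instance."""
    n = len(cities)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        # nearest unvisited neighbor of the last city added
        nxt = min(sorted(unvisited),
                  key=lambda j: math.dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour  # the edge from tour[-1] back to tour[0] is implicit

def tour_length(cities, tour):
    """Length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

For example, on the collinear points (0,0), (1,0), (3,0), (6,0), starting from the first point, NN visits them left to right and pays the long edge back to the start.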

  4. Theorem [Rosenkrantz, Stearns, & Lewis, “An analysis of several heuristics for the traveling salesman problem,” SIAM J. Comput. 6 (1977), 563-581]:
• There is a constant c such that, for any instance I obeying the Δ-inequality, NN(I) ≤ c·log(N)·OPT(I). [Proved last week]
• There exists a constant c′ such that, for all N > 3, there are N-city instances I obeying the triangle inequality for which NN(I) > c′·log(N)·OPT(I). [To be proved today]
Definition: For any algorithm A and instance size N, the worst-case ratio for A is RN(A) = max{A(I)/OPT(I) : I is an N-city instance}.
Corollary: RN(NN) = Θ(log(N)).

  5. Lower Bound Examples (unspecified distances determined by shortest paths)
[Figures of F1 and F2 omitted; the drawn edges have lengths 1, 1+ε, and 2.]
F1: Number N1 of vertices = 3; OPT(F1) = 3 + ε; NN-path(F1) = 2. The NN path starts at the left endpoint and ends in the middle.
F2: N2 = 2N1 + 3 = 9; OPT(F2) = N2 + 5ε = 9 + 5ε; NN-path(F2) = 10.

  6. Fk is built from two copies of Fk-1 (each without the shortcut between its left and right endpoints), joined by two edges of length 2^(k-1) and base edges of lengths 1 and 1+ε [figure omitted].
Number Nk of vertices = 2·Nk-1 + 3
OPT(Fk) = Nk + ½(Nk + 1)ε (for k ≥ 2)
NN-path(Fk) = 2·NN-path(Fk-1) + 2^k + 2

  7. General Formula for Nk
• Nk = 3(2^k − 1)
• Proof by induction:
• Base case: N2 = 9 = 3(2^2 − 1)
• Assume true for k′ < k. Then
• Nk = 2Nk-1 + 3 = 2·3(2^(k-1) − 1) + 3 = 3·2^k − 6 + 3 = 3(2^k − 1)
Note for future reference: Nk is always odd.

  8. General Formula for OPT(Fk) when k ≥ 2
OPT(Fk) = Nk + ½(Nk + 1)ε
Proof: All edge lengths in an optimal tour must be 1 or 1+ε, which means the tour's edges must lie along the base of Fk, where the lengths start with 1+ε and then alternate. Since Nk is odd, we get one more edge of length 1+ε than of length 1, i.e., (Nk + 1)/2 of them.
Note that this means that, as ε → 0 and k → ∞, OPT(Fk) → Nk = 3(2^k − 1), and hence log(OPT(Fk)) → k + 1 + log(3/2); thus for sufficiently large k we have k + 1 > ½·log(OPT(Fk)).

  9. General Formula for NN-path(Fk)
• NN-path(Fk) = (k + 1)·2^k − 2
• Proof by induction:
• Base case: NN-path(F2) = 10 = 3·2^2 − 2
• Assume true for k′ < k. Then
• NN-path(Fk) = 2·NN-path(Fk-1) + 2^k + 2 = 2(k·2^(k-1) − 2) + 2^k + 2 = (k + 1)·2^k − 2.
Corollary: as ε → 0 and k → ∞, NN-path(Fk)/OPT(Fk) → (k + 1)·2^k / (3·2^k) = (k + 1)/3 > log(OPT(Fk))/6.
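The recurrences and closed forms above are easy to sanity-check mechanically. The following sketch (the function name is my own) verifies both closed forms against the recurrences, starting from the base values N1 = 3 and NN-path(F1) = 2 given earlier:

```python
def check_formulas(kmax=12):
    """Verify N_k = 3(2^k - 1) and NN-path(F_k) = (k+1)*2^k - 2
    against the recurrences N_k = 2*N_{k-1} + 3 and
    NN-path(F_k) = 2*NN-path(F_{k-1}) + 2^k + 2."""
    N, nn = 3, 2  # N_1 = 3, NN-path(F_1) = 2
    for k in range(2, kmax + 1):
        N = 2 * N + 3                 # recurrence for N_k
        nn = 2 * nn + 2 ** k + 2      # recurrence for NN-path(F_k)
        assert N == 3 * (2 ** k - 1), k
        assert nn == (k + 1) * 2 ** k - 2, k
        assert N % 2 == 1             # N_k is always odd, as noted
    return N, nn
```

Running it confirms, for example, that N5 = 93 and NN-path(F5) = 190, so the ratio NN-path/OPT indeed grows like (k + 1)/3.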

  10. [Figure repeated from slide 6: Fk built from two copies of Fk-1, each without the shortcut between its left and right endpoints, joined by two edges of length 2^(k-1) and base edges of lengths 1 and 1+ε.]
Number Nk of vertices = 2·Nk-1 + 3
OPT(Fk) = Nk + ½(Nk + 1)ε (for k ≥ 2)
NN-path(Fk) = 2·NN-path(Fk-1) + 2^k + 2
But are these two vertices really nearest neighbors?

  11. [Figure: Fk with its two Fk-1 halves; B and C are the adjacent inner endpoints, the top edges have length 2^(k-1) in Fk and 2^(k-2) in Fk-1, and the base edges have lengths 1 and 1+ε.]
Is d(B,C) + 1 + ε > 2^(k-1)?

  12. Results for General k
[Figure: Fk with left endpoint A, middle vertex B, right endpoint C; top edges of length 2^(k-1), base edges of lengths 1 and 1+ε.]
Define D1(Fk) = d(A,C) and D2(Fk) = d(B,C) = d(A,B).
We shall prove by induction that D1(Fk) > 2^(k+1) − 3 and D2(Fk) > 2^k − 1.
Note: A shortest path from A to B need never go leftwards (by the triangle inequality).

  13. Assuming the induction hypothesis (for k′ < k): D1(Fk′) > 2^(k′+1) − 3 and D2(Fk′) > 2^(k′) − 1.
There are two choices for the shortest A–C path in Fk [figures omitted: one route crosses a top edge of length 2^(k-1), the other stays along the base].
Consequently, D1(Fk) = D1(Fk-1) + 1 + ε + 2^(k-1) + D2(Fk-1).
Similarly, D2(Fk) = 2 + ε + D1(Fk-1).

  14. Basis for the induction: F2 [figure omitted; edge labels 1, 1+ε, and 2]
D1(F2) = 5 + ε, D2(F2) = 3 + ε
D1(Fk) = D1(Fk-1) + 1 + ε + 2^(k-1) + D2(Fk-1)
D2(Fk) = 2 + ε + D1(Fk-1)

  15. Let us ignore the precise values of the ε terms. By the induction hypothesis, d(B,C) + 1 + ε > (2^(k-1) − 1) + 1 = 2^(k-1). So the correct NN choice is the 2^(k-1) edge, as claimed.

  16. NN Running Time
• To find the kth vertex, k > 1, find the shortest distance among the N − k + 1 remaining candidates.
• Total time = Θ(Σ_{k=2..N} (N − k + 1)) = Θ(N^2).

  17. Greedy (Multi-Fragment) (GR): Sort the edges, shortest first, and consider them in that order. Start with an empty graph. While the current graph is not a TSP tour, attempt to add the next shortest edge; reject it if it would give some vertex degree exceeding 2 or would create a cycle of length less than N.
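A sketch of Greedy in Python (my own naming and structure, for planar points): a small union-find tracks fragments so premature cycles are rejected, matching the degree and cycle tests described above.

```python
import math

class DSU:
    """Minimal union-find, used to detect premature cycles."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def greedy_tour(cities):
    """Greedy (multi-fragment): scan edges shortest-first, keeping an
    edge only if both endpoints have degree < 2 and it closes no
    premature cycle; finally join the two fragment ends."""
    n = len(cities)
    edges = sorted((math.dist(cities[i], cities[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    degree = [0] * n
    dsu = DSU(n)
    adj = [[] for _ in range(n)]
    kept = 0
    for d, i, j in edges:
        if kept == n - 1:
            break
        if degree[i] < 2 and degree[j] < 2 and dsu.find(i) != dsu.find(j):
            dsu.union(i, j)
            degree[i] += 1; degree[j] += 1
            adj[i].append(j); adj[j].append(i)
            kept += 1
    # close the tour by connecting the two remaining degree-1 endpoints
    ends = [v for v in range(n) if degree[v] == 1]
    adj[ends[0]].append(ends[1]); adj[ends[1]].append(ends[0])
    # walk the cycle to produce a vertex ordering
    tour, prev, cur = [0], None, 0
    while len(tour) < n:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        tour.append(nxt); prev, cur = cur, nxt
    return tour
```

The initial sort dominates, giving the Θ(N^2 log N) total time discussed two slides below.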

  18. Theorem [Ong & Moore, “Worst-case analysis of two travelling salesman heuristics,” Inform. Proc. Letters 2 (1984), 273-277]: For all instances I obeying the Δ-inequality, GR(I)/OPT(I) = O(log N).
Theorem [A. M. Frieze, “Worst-case analysis of algorithms for travelling salesman problems,” Methods of Operations Research 32 (1979), 93-112]: There are N-city instances IN for arbitrarily large N that obey the Δ-inequality and have GR(IN)/OPT(IN) = Ω(log N / log log N).

  19. Greedy Running Time
• Initial sort takes Θ(N^2 log N).
• Processing an edge takes constant time to check for degree 3, plus two union-find operations to check for short cycles, for a total of at most Θ(N^2 α(N)).*
• Total time = Θ(N^2 log N).
*Note: The union-find operations can be avoided by storing with each vertex u its degree in the current forest (initially 0) and, if the degree is 1, the identity of the vertex v at the other end of the path containing it; this rules out edge {u,v} as a potential choice of next edge. This information is easily updated in constant time whenever we add an edge.

  20. Clarke-Wright “Savings” Heuristic
• Start with a pseudo-tour in which an arbitrarily chosen city is the “hub” and the salesman returns to the hub after visiting each city (a multigraph in which every non-hub city is connected by two edges to the hub).
• For each pair of non-hub cities, let the “savings” be the amount by which the pseudo-tour would be shortened if we added an edge between the two and deleted one edge to the hub from each.
• As long as we do not yet have a tour, find a pair of non-hub cities that have not yet undergone two such shortcuts and yields the most savings, and perform the shortcut for them.
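A sketch of the (parallel, greedy) Savings heuristic in Python; the function name and input format are mine. Applying a shortcut is equivalent to adding the edge {i,j} subject to the same degree-2 and no-premature-cycle rules as Greedy, but on the non-hub vertices and in order of decreasing savings:

```python
import math

def clarke_wright_tour(cities, hub=0):
    """Clarke-Wright savings sketch: start from the pseudo-tour through
    the hub; greedily apply the shortcut with the largest savings
    s(i,j) = d(i,hub) + d(hub,j) - d(i,j), subject to each non-hub city
    taking part in at most two shortcuts and no premature cycle."""
    n = len(cities)
    d = lambda i, j: math.dist(cities[i], cities[j])
    others = [v for v in range(n) if v != hub]
    shortcuts = sorted(((d(i, hub) + d(hub, j) - d(i, j), i, j)
                        for ii, i in enumerate(others)
                        for j in others[ii + 1:]), reverse=True)
    degree = {v: 0 for v in others}
    comp = {v: v for v in others}          # tiny union-find
    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]; x = comp[x]
        return x
    adj = {v: [] for v in others}
    for s, i, j in shortcuts:
        if degree[i] < 2 and degree[j] < 2 and find(i) != find(j):
            comp[find(i)] = find(j)
            degree[i] += 1; degree[j] += 1
            adj[i].append(j); adj[j].append(i)
    # the non-hub cities now form one path; splice the hub between its ends
    start = next(v for v in others if degree[v] <= 1)
    tour, prev, cur = [hub, start], hub, start
    while len(tour) < n:
        nxt = next(w for w in adj[cur] if w != prev)
        tour.append(nxt); prev, cur = cur, nxt
    return tour
```

As with Greedy, the sort of the (N−1)(N−2)/2 savings values dominates the running time.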

  21. Clarke-Wright “Savings” Heuristic
Theorem [A. M. Frieze, “Worst-case analysis of algorithms for travelling salesman problems,” Methods of Operations Research 32 (1979), 93-112]: There are N-city instances IN for arbitrarily large N that obey the Δ-inequality and have Savings(IN)/OPT(IN) = Ω(log N / log log N).
No upper bounds are known under the triangle inequality. We do have RN(A) = Θ(log N) for a sequential variant of the Savings heuristic, in which we
• Pick an initial non-hub vertex v as the “current” vertex.
• Perform the best legal shortcut involving v and some other non-hub vertex that has not yet been involved in a shortcut.
• Declare the other non-hub vertex involved in the shortcut to be the new “current” vertex.
• Repeat until all non-hub vertices have been involved in a shortcut.
[Ong & Moore, 1984] [B. Golden, “Evaluating a sequential vehicle routing algorithm,” AIIE Transactions 9 (1977), 204-208]

  22. Savings Heuristic Running Time
• Mirrors Greedy:
• Sort all (N−1)(N−2)/2 potential shortcuts by decreasing savings: Θ(N^2 log N).
• Processing a shortcut takes constant time to check for involvement in more than two shortcuts, plus two union-find operations to check for short cycles, for a total of at most Θ(N^2 α(N)).*
• Total time = Θ(N^2 log N).
• The sequential version mirrors NN analogously: total time = Θ(N^2).
*See the footnote for Greedy running time; the same observations apply.

  23. Nearest Addition (NA): Start with a 2-city tour consisting of some city and its nearest neighbor. Repeatedly insert the non-tour city u that is closest to a tour city v into one of the tour edges involving v (the one that yields the shorter tour).

  24. Insertion Variants Nearest Insertion (NI): • Start with 2-city tour consisting of some city and its nearest neighbor. • Repeatedly insert the non-tour city u that is closest to a tour city v into the tour edge that yields the shortest new tour (not necessarily one involving v). Cheapest Insertion (CI): • Start with 2-city tour consisting of some city and its nearest neighbor. • Repeatedly perform the insertion of a non-tour vertex w into a tour edge {u,v} that results in the shortest new tour.

  25. More Insertion Variants Farthest Insertion (FI):
• Start with the 2-city tour consisting of the two cities at maximum distance from each other.
• Repeatedly insert the non-tour city whose minimum distance to a tour city is maximum, into the tour edge that yields the shortest new tour.
Arbitrary Insertion (AI):
• Start with some 2-city tour.
• Repeatedly pick some non-tour vertex u and insert it into the tour edge {v,w} that yields the shortest new tour.
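The insertion variants differ only in which city is inserted next and where. As one concrete instance, here is a straightforward Python sketch of Nearest Insertion (my own naming; a naive O(N^3) transcription, whereas caching each non-tour city's nearest tour vertex gives O(N^2)):

```python
import math

def nearest_insertion_tour(cities):
    """Nearest Insertion sketch: start with a city and its nearest
    neighbor; repeatedly pick the non-tour city closest to any tour
    city and insert it into the tour edge that yields the shortest
    new tour."""
    n = len(cities)
    d = lambda i, j: math.dist(cities[i], cities[j])
    second = min(range(1, n), key=lambda j: d(0, j))
    tour = [0, second]
    remaining = set(range(n)) - {0, second}
    while remaining:
        # the non-tour city closest to the current tour
        u = min(sorted(remaining),
                key=lambda c: min(d(c, v) for v in tour))
        # the cheapest tour edge {tour[i], tour[i+1]} to break for u
        best_i = min(range(len(tour)),
                     key=lambda i: d(tour[i], u)
                                   + d(u, tour[(i + 1) % len(tour)])
                                   - d(tour[i], tour[(i + 1) % len(tour)]))
        tour.insert(best_i + 1, u)
        remaining.remove(u)
    return tour
```

Farthest Insertion swaps the selection `min` for a `max` over the same quantity, and Arbitrary Insertion picks any remaining city; the insertion step is identical.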

  26. Farthest Insertion

  27. Theorem [Rosenkrantz, Stearns, & Lewis, 1977]: If instance I obeys the triangle inequality, then
• RN(AI) = O(log N).
• RN(NA) = RN(NI) = RN(CI) = 2.
The best lower bound known for RN(AI) is Ω(log N / log log N) [Y. Azar, “Lower bounds for insertion methods for TSP,” Combinatorics, Probability & Computing 3 (1994), 285-292].
The best lower bound known for RN(FI) is RN(FI) ≥ 6.5 [C. A. J. Hurkens, “Nasty TSP instances for farthest insertion,” IPCO Proc. (1992), 346-352].

  28. Upper Bound Proofs
• The proof that RN(AI) = O(log N) mirrors that for NN.
• For the RN(A) ≤ 2 bounds, the results for NA and NI are straightforward:
• Note that the vertices are added to the tour in the same order that they are added to the tree in Prim’s algorithm for computing an MST, and the optimal tour is at least as long as the MST.
• Thus we need only show that the increase in tour length when a vertex u is added under NA is no more than twice the increase in tree length in Prim’s algorithm, i.e., at most 2d(u,v), where v is u’s nearest neighbor in the tour. The result for NI will follow since its cost at each step never exceeds NA’s cost.
[Figure: u is inserted into tour edge {v,w}.] By the Δ-inequality, d(u,w) ≤ d(u,v) + d(v,w), so the insertion cost d(u,w) + d(u,v) − d(v,w) ≤ 2d(u,v).

  29. Upper Bound for CI
• Basic plan: We will associate each insertion with a unique edge of the MST, with the cost of the insertion being no more than twice the cost of the edge.
• Label the vertices v1, v2, …, vN in the order in which they are added to the tour under CI. Let T be the MST and let Ti be the set of tour vertices just before vi is added.
• Say vj is compatible with vi if j < i and, whenever vk is an internal vertex on the unique path in T between vj and vi, then k > i (in other words, vk is not in Ti).
• For each i > 1, the “critical vertex” for vi is the compatible vertex for vi with largest index, and the “critical edge” for vi is the first edge on the path to vi from its critical vertex.

  30. Critical Vertices and Edges
[Figure: an MST whose vertices are numbered 1-10 in CI insertion order, with each edge marked by the vertex it is critical for.]
Note: In this example, every edge is critical for precisely one vertex. We can prove that this will always be the case.

  31. Unique Criticality
[Figure: a tree containing vi, vj, vh, and vk, with the path from vh to vk shown in blue.]
Suppose an edge {vi,vj}, i < j, is critical for two distinct vertices vh and vk, with h < k. Since vi is the critical vertex for vh, we have h > i, and vj and all the internal vertices on the path from vi to vh must have indices exceeding h. (We cannot have h = j, since that would imply j < k and so vi would not be compatible with vk.) Since {vi,vj} is critical for vk, all the internal vertices of the path from vi to vk must have indices exceeding k > h. (In the case k = j there are no internal vertices.) Hence all the internal vertices on the (blue) path from vh to vk have indices exceeding h, so vh is compatible with vk; since h > i, this implies that vi is NOT the critical vertex for vk and {vi,vj} is not its critical edge, a contradiction.

  32. Completing the CI Proof
We first show that, for each k with 2 ≤ k ≤ N, the cost of inserting vertex vk is no more than twice the cost of its critical edge.
• Let the critical edge be {vi,vj}, i < j. By the definition of critical edge we must have i < k and j ≥ k.
• Thus, at the time vk is added, vi was in the tour but vj was not.
• Hence d(vi,vj) must be at least as large as the length of the shortest edge joining a tour vertex u to a non-tour vertex v.
• By our NA argument, inserting v into a tour edge incident on u costs at most 2d(u,v) ≤ 2d(vi,vj). The cheapest insertion can cost no more.
Summing over all inserted vertices, the total tour length for CI is at most twice the sum of the lengths of all the critical edges, which by the uniqueness lemma is simply the length of the MST. QED

  33. Lower Bound Examples
[Figure: cities on a line; the drawn tour edges that skip over one city have length 2 − ε.] Adjacent points have distance 1; all other distances are the distance along the line minus ε.
NA(I) = NI(I) = CI(I) = 2N − 2 − (N − 2)ε
OPT(I) = N + 1 − ε

  34. More Lower Bounds: Double MST and Christofides
Recall: Start by computing a minimum spanning tree [O(N^2) time], then add edges so that every vertex has even degree:
• Double MST: double the edges of the spanning tree [O(N) time].
• Christofides: add a minimum-weight matching on the odd-degree vertices [O(N^3) time].
Then find an Euler tour of the resulting graph [O(N) time] and traverse it, taking shortcuts to avoid revisiting vertices [O(N) time].
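The Double MST branch of the recipe above can be sketched compactly (Christofides additionally needs a minimum-weight matching, which I omit here). One useful fact: shortcutting the Euler tour of the doubled tree is equivalent to visiting vertices in DFS preorder of the MST, which is what this sketch (my own naming) does:

```python
import math

def double_mst_tour(cities):
    """Double-MST sketch: build an MST with Prim's algorithm (O(N^2)),
    conceptually double its edges, and shortcut the resulting Euler
    tour. The shortcut tour is the DFS preorder of the tree, so its
    length is at most twice the MST, hence at most 2*OPT under the
    triangle inequality."""
    n = len(cities)
    d = lambda i, j: math.dist(cities[i], cities[j])
    in_tree = [False] * n
    best = [math.inf] * n       # cheapest connection cost to the tree
    parent = [-1] * n
    best[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] != -1:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and d(u, v) < best[v]:
                best[v] = d(u, v); parent[v] = u
    # DFS preorder = Euler tour of the doubled MST with shortcuts
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour
```

The two O(N) steps (Euler tour and shortcutting) collapse into the final stack walk; the Prim phase dominates at O(N^2).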

  35. More Lower Bounds: Double MST Algorithm
[Figure omitted.]
MST length = n + (n+1)(1 − ε) + 2ε = 2n + 1 − (n − 1)ε
Double-MST tour length ≈ 2n + 2n(1 − ε) + 2ε = 4n − 2(n − 1)ε
Optimal tour length ≈ 2n + 2

  36. More Lower Bounds: Christofides Algorithm
[Figure: two rows of cities at unit spacing.] N cities on the bottom row, N+1 on top. Distance to the nearest city in the same row = 1; distance to the nearest city in the other row = 1 − ε′.
OPT: length = 2N + 1 − 2ε′
MST: length = 2N(1 − ε′)
Christofides: length = 2N(1 − ε′) + N = 3N − 2Nε′

  37. Subquadratic Algorithms for Euclidean Instances: Strip
• Let R be a minimal rectangle containing all the cities.
• Partition R into floor(sqrt(N)/3) vertical strips.
• Sort the cities within each strip by y-coordinate.
• Starting with the bottom-most point in the leftmost strip, traverse the cities up one strip and down the next until all have been visited, and then return to the starting point.
Total running time: O(N log N).
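The steps above translate almost directly into code. A Python sketch (my own naming; the strip count follows the slide's floor(sqrt(N)/3), though other sources use different counts):

```python
import math

def strip_tour(cities):
    """Strip heuristic sketch: cut the bounding box into vertical
    strips, sort each strip by y-coordinate, and snake up one strip
    and down the next. O(N log N) overall, dominated by the sorts."""
    n = len(cities)
    xs = [p[0] for p in cities]
    xmin, xmax = min(xs), max(xs)
    k = max(1, int(math.isqrt(n) / 3))       # number of strips
    width = (xmax - xmin) / k or 1.0         # avoid zero-width strips
    strips = [[] for _ in range(k)]
    for idx, (x, y) in enumerate(cities):
        s = min(k - 1, int((x - xmin) / width))
        strips[s].append((y, idx))
    tour = []
    for s, strip in enumerate(strips):
        strip.sort(reverse=(s % 2 == 1))     # up, then down, alternating
        tour.extend(idx for _, idx in strip)
    return tour  # close the tour back to tour[0] implicitly
```

On a small axis-aligned grid the tour simply sweeps column by column, which is exactly the snaking traversal described above.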

  38. Lower Bounds for Strip
[Figure: a grid of points on which the strip tour zigzags.] Assuming neighboring points are one unit apart, OPT = N.
Strip(IN) > (sqrt(N)/3)(N/4) = Ω(sqrt(N)·OPT)

  39. Subquadratic Algorithms for Euclidean Instances: Spacefilling Curve
Visit the cities in the order in which they occur along a spacefilling curve for a square that contains them all. Running time: O(N log N).
For details, see [Platzman & Bartholdi, “Spacefilling curves and the planar travelling salesman problem,” J. ACM 36 (1989), 719-737].

  40. More Geometric Tour Construction Heuristics • Insertion variants for geometric instances • Cheapest Insertion into the convex hull (CHCI) • “Greatest Angle” insertion into the convex hull (CHGA) • Convex Hull, Cheapest Insertion with greatest angle (CCA) • Double Strip (best of both horizontal and vertical strip tours) (DST) • Karp’s Partitioning Algorithm (KP) • Litke’s Clustering Algorithm • Bentley’s Fast Recursive Partitioning Heuristic (FRP)

  41. Performance “In Practice”Data Sources • Johnson, Bentley, McGeoch, & Rothberg, “Near-Optimal Solutions to Very Large Traveling Salesman Problems,” unpublished (and as-yet-uncompleted) monograph (1994). • Johnson & McGeoch, “The traveling salesman problem: A case study in local optimization,” chapter in Local Search in Combinatorial Optimization, Aarts & Lenstra (editors), Princeton University Press, Princeton, NJ, 2003, 215-310 [Also available on DSJ’s website]. • Johnson & McGeoch, “Experimental analysis of heuristics for the STSP,” chapter in The Traveling Salesman Problem and its Variations, Gutin & Punnen (editors), Kluwer Academic Publishers, Dordrecht, 2002, 369-443 [Also available on DSJ’s website]. • Website for the “8th DIMACS Implementation Challenge: The Traveling Salesman Problem” [http://dimacs.rutgers.edu/Challenges/TSP – only minor updates since 2002].

  42. Performance “In Practice” • Testbed 1: Random Euclidean Instances N = 10,000 (Results appear to be reasonably well-correlated with those for our real-world instances.)

  43. Performance “In Practice”
• Testbed 2: Random Clustered Instances, N = 1,000, N = 3,162, N = 10,000 [plots omitted].
Choose N/100 centers uniformly, then generate 100 normally-distributed points around each.

  44. Performance “In Practice”
• Testbed 3: Instances from TSPLIB [examples shown: printed circuit boards, geography, laser logic].

  45. Performance “In Practice” • Testbed 4: Random Symmetric Distance Matrices (Unlikely to obey Triangle Inequality.) Let’s start with Random Euclidean and just a few algorithms…

  46. Nearest Neighbor

  47. Greedy

  48. Smart-Shortcut Christofides

  49. Standard Shortcuts
