Exploring Algorithms
Traveling Salesperson Problem I: Brute Force, Greedy, and Heuristics
Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License.
Traveling Salesperson Problem
The Traveling Salesperson Problem (TSP): a salesperson has the unfortunate job of traveling to 10 different towns in his area each month in order to deliver something important. Each town is a different distance away from his town and from every other town. How do you figure out a route that will minimize the distance traveled?
Photo by: maureen sill (http://creativecommons.org/licenses/by-nc/2.0/)
Brute Force
One way to solve this problem (and any other NP-complete problem) is to enumerate all possible routes, of which there are 10! (3,628,800) for 10 towns, and then choose the shortest. This is a brute force algorithm. It would only take a couple of seconds on a typical PC to compute all the possible routes and distances for 10 towns, and you would only need to do it once.
Photo by: mellomango (http://creativecommons.org/licenses/by-nd/2.0/deed.en)
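Here is a minimal sketch of what that brute force enumeration could look like in C. The town count N and the dist[][] matrix below are made-up illustrative values, not data from the slides; the idea is simply to fix a starting town, recursively generate every ordering of the remaining towns, and keep the shortest complete tour.

#include <stdio.h>

#define N 5                      /* number of towns (kept small on purpose) */

/* Hypothetical symmetric distance matrix; any nonnegative values work. */
int dist[N][N] = {
    { 0, 12,  9, 21, 17},
    {12,  0, 14,  8, 26},
    { 9, 14,  0, 15, 11},
    {21,  8, 15,  0, 10},
    {17, 26, 11, 10,  0}
};

int best = -1;                   /* length of the shortest tour found so far */

/* Try every ordering of the remaining towns.  route[0..pos-1] holds the towns
   visited so far and cost is the length of that partial route. */
void permute(int route[], int used[], int pos, int cost)
{
    if (pos == N) {              /* all towns placed: close the loop */
        int total = cost + dist[route[N - 1]][route[0]];
        if (best < 0 || total < best)
            best = total;
        return;
    }
    for (int town = 0; town < N; town++) {
        if (!used[town]) {
            used[town] = 1;
            route[pos] = town;
            permute(route, used, pos + 1, cost + dist[route[pos - 1]][town]);
            used[town] = 0;
        }
    }
}

int main(void)
{
    int route[N] = {0};          /* fix town 0 as the starting town */
    int used[N]  = {1};          /* mark town 0 as already visited */

    permute(route, used, 1, 0);
    printf("shortest tour length: %d\n", best);
    return 0;
}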
Still More Ways to Solve It: Heuristics The 2-opt technique. We randomly pick two links between cities in our best random solution. We then remove those links and replace them with others that keep the route connected.
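As a rough illustration, a single 2-opt move could be coded like this in C. It assumes the same kind of dist[][] matrix as the brute force sketch and a tour stored in route[0..n-1]; the indices i and j (with 0 <= i < j < n) stand for the two randomly chosen links.

/* One 2-opt move: remove the links (route[i], route[i+1]) and
   (route[j], route[j+1]), then reconnect the tour by reversing the segment
   route[i+1..j].  The move is applied only if it shortens the tour. */
int two_opt_move(int route[], int n, int i, int j)
{
    int a = route[i], b = route[i + 1];
    int c = route[j], d = route[(j + 1) % n];

    /* Change in tour length if the two links are swapped. */
    int delta = (dist[a][c] + dist[b][d]) - (dist[a][b] + dist[c][d]);
    if (delta >= 0)
        return 0;                /* not an improvement: keep the current route */

    /* Reverse route[i+1 .. j] in place so the route stays connected. */
    for (int lo = i + 1, hi = j; lo < hi; lo++, hi--) {
        int tmp = route[lo];
        route[lo] = route[hi];
        route[hi] = tmp;
    }
    return 1;                    /* improving move applied */
}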
A Greedy Example A greedy algorithm is one where we make the best choice at each stage of an algorithm given the immediate information available. These choices do not take into account all the data available from all stages of the algorithm. Sometimes the immediate information is enough and an optimal solution is found; sometimes it's not enough, and non-optimal solutions are found.
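The classic greedy heuristic for TSP is nearest neighbor: from wherever you are, always travel to the closest town you have not visited yet. Each step uses only the immediate information (the distances out of the current town), so the result is usually good but not guaranteed optimal. A sketch in C, reusing the hypothetical N and dist[][] from the brute force example:

/* Greedy nearest-neighbor tour starting from town 0.  Fills route[] and
   returns the total tour length, including the trip back to the start. */
int nearest_neighbor(int route[])
{
    int used[N] = {0};
    int current = 0;
    int total = 0;

    route[0] = 0;
    used[0] = 1;
    for (int step = 1; step < N; step++) {
        int next = -1;
        for (int town = 0; town < N; town++)
            if (!used[town] && (next < 0 || dist[current][town] < dist[current][next]))
                next = town;     /* closest unvisited town seen so far */
        route[step] = next;
        used[next] = 1;
        total += dist[current][next];
        current = next;
    }
    return total + dist[current][0];
}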
Traveling Salesperson II: Divide and Conquer
You may have seen this approach in binary search:

int binsearch(int x, int v[], int low, int high)
/* recursive binary search: find x in v[low]..v[high]; return index of location */
{
    int mid = (low + high) / 2;

    if (x == v[mid])
        return mid;
    else if ((x < v[mid]) && (low < mid))
        return binsearch(x, v, low, mid - 1);
    else if ((x > v[mid]) && (high > mid))
        return binsearch(x, v, mid + 1, high);
    else
        return -1;
}
TSP: Divide and Conquer
And recursive sorting algorithms such as mergesort or quicksort:

MergeSort(L)
if (length of L > 1) {
    Split list into first half and second half
    MergeSort(first half)
    MergeSort(second half)
    Merge first half and second half into sorted list
}
TSP: Divide and Conquer
There is a divide and conquer heuristic for TSP. This is a common approach for fleet and transportation companies that have to solve TSP all the time! Basically, they take the route map and divide the stops into smaller groups. Then, they build routes between these groups. Note that this application of TSP removes the requirement of visiting each stop exactly once. Streets often divide into regions naturally, particularly if a highway, river, or some other natural barrier cuts through a region. It's also common to assign all the stops in a region to one driver. This gives the driver an opportunity to become familiar with the area, which increases speed and efficiency.
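A toy version of the "divide" step might look like the following C sketch. The Stop type, the dividing line, and the function name are invented for illustration; each resulting group could then be routed separately, for example with the nearest-neighbor routine above.

typedef struct { double x, y; } Stop;

/* Split stops into a west group and an east group around a dividing line
   (a stand-in for a river or highway).  Returns the size of the west group;
   the east group holds the remaining n - nw stops. */
int split_by_longitude(const Stop stops[], int n, double divide_x,
                       int west[], int east[])
{
    int nw = 0, ne = 0;

    for (int i = 0; i < n; i++) {
        if (stops[i].x < divide_x)
            west[nw++] = i;      /* index of a stop in the west region */
        else
            east[ne++] = i;
    }
    return nw;
}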
Traveling Salesperson III: Branch and Bound
A search tree is a way of representing the execution of a backtracking algorithm. We start at the root and then add nodes to the tree to represent our exploration as we work toward a solution. If we reach a dead end or a leaf of the tree, we backtrack and explore other paths. The classic example is a maze search, where the nodes generated from a parent represent an attempt to move up, down, right, or left.
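A small C sketch of backtracking through a maze (the grid size and cell encoding are made up here) shows the pattern: try each direction in turn, and if a move leads nowhere, undo it and try the next.

#define ROWS 5
#define COLS 5

/* Backtracking maze search from (r, c) toward the bottom-right corner.
   maze[][] holds 1 for open cells and 0 for walls; visited[][] records the
   current path.  Returns 1 if the exit is reachable from (r, c). */
int solve(int maze[ROWS][COLS], int visited[ROWS][COLS], int r, int c)
{
    if (r < 0 || r >= ROWS || c < 0 || c >= COLS)
        return 0;                              /* off the grid */
    if (!maze[r][c] || visited[r][c])
        return 0;                              /* wall, or already on the path */

    visited[r][c] = 1;                         /* take this cell */
    if (r == ROWS - 1 && c == COLS - 1)
        return 1;                              /* reached the exit */

    if (solve(maze, visited, r - 1, c) || solve(maze, visited, r + 1, c) ||
        solve(maze, visited, r, c - 1) || solve(maze, visited, r, c + 1))
        return 1;                              /* some direction works */

    visited[r][c] = 0;                         /* dead end: backtrack */
    return 0;
}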
Branch and Bound Searching
Branch and bound searching is a variation of backtracking for problems where we are looking for an optimal solution. The trick is to calculate, for each new node in the search tree, a bound on the solutions that node can produce.
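One way a branch and bound search for TSP might look in C, reusing the hypothetical N and dist[][] from the brute force sketch. The bound here is deliberately simple: the cost of the partial route plus the cheapest outgoing edge of every town that still has to be left. Any valid lower bound works; the point is that a whole subtree can be pruned as soon as its bound cannot beat the best complete tour found so far.

int best_tour = -1;              /* best complete tour length found so far */

/* Lower bound on any complete tour extending the current partial route:
   cost so far, plus the cheapest edge leaving the current town and each
   town not yet visited. */
int bound(int cost, const int used[], int current)
{
    int b = cost;

    for (int town = 0; town < N; town++) {
        if (used[town] && town != current)
            continue;            /* this town has already been left */
        int cheapest = -1;
        for (int next = 0; next < N; next++)
            if (next != town && (cheapest < 0 || dist[town][next] < cheapest))
                cheapest = dist[town][next];
        b += cheapest;
    }
    return b;
}

void branch(int route[], int used[], int pos, int cost)
{
    if (pos == N) {              /* complete tour: close the loop */
        int total = cost + dist[route[N - 1]][route[0]];
        if (best_tour < 0 || total < best_tour)
            best_tour = total;
        return;
    }
    if (best_tour >= 0 && bound(cost, used, route[pos - 1]) >= best_tour)
        return;                  /* this subtree cannot improve on best_tour */

    for (int town = 0; town < N; town++) {
        if (used[town])
            continue;
        used[town] = 1;
        route[pos] = town;
        branch(route, used, pos + 1, cost + dist[route[pos - 1]][town]);
        used[town] = 0;
    }
}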
A Bounding Function
This algorithm produces the best possible solution to the traveling salesperson problem. Since TSP is known to be NP-complete, no polynomial-time algorithm for it is known (and none exists unless P = NP). Although branch and bound is not polynomial either, it is usually much faster than simply exploring every possible circuit. In the worst case, however, it can still take exponential time and degenerate into the brute-force approach.
Dynamic Programming Techniques
The Fibonacci Sequence is often used to illustrate the power of dynamic programming. The sequence is defined by the following recurrence relation:

F(0) = 0
F(1) = 1
F(n) = F(n-1) + F(n-2)

This very easily translates into a recursive function:

int Fibonacci(int n)
{
    if ((n == 0) || (n == 1))
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
What is the running time of Fibonacci()? Consider the call Fibonacci(4): it calls Fibonacci(3) and Fibonacci(2), and Fibonacci(3) in turn calls Fibonacci(2) and Fibonacci(1), so Fibonacci(2) is computed twice. The same subproblems are recomputed over and over, and the amount of repeated work grows exponentially with n. Here is how Fibonacci would be written using dynamic programming, computing each value exactly once in O(n) time:

int fib(int n)
{
    int f[n + 1];

    f[0] = 0;
    if (n > 0)
        f[1] = 1;
    for (int i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];
    return f[n];
}
Problem #1
Let's start with a problem that allows us to use common algorithms. The problem is to find the kth smallest element in a list of integers. Here are some possible algorithms that solve this problem:
• Sorting: We could just sort the list and then extract the kth smallest element from the start of the list. The running time for the most efficient comparison sorts is O( n log n ) (QuickSort, MergeSort, HeapSort).
• For the special case k = 1, we can iterate over the entire list keeping track of the smallest element found so far. The running time for this is O( n ).
• Do an incomplete selection sort. That is, find the smallest value and move it to the beginning of the list. Find the next smallest value and move it to the second position. Keep doing this until you have moved the kth element into position. The running time is O( k*n ).
• Use the Hoare selection algorithm (quickselect). This is based on QuickSort:
function select(list, k, left, right) {
    choose a pivot index pivotIndex between left and right
    pivotNewIndex := partition(list, left, right, pivotIndex)
    if k = pivotNewIndex
        return list[k]
    else if k < pivotNewIndex
        return select(list, k, left, pivotNewIndex - 1)
    else
        return select(list, k, pivotNewIndex + 1, right)
}

• The running time behaves like a one-sided QuickSort: O( n ) on average, but it can degrade to O( n² ) if bad partition values are consistently chosen. (A runnable C version appears after this list.)
• Use a data structure, like a binary search tree, to minimize the search time for the kth element. With subtree sizes stored in the nodes, finding the kth element takes O( log n ), but we have the overhead of inserting and deleting such that the tree remains balanced and in search-tree form.
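A runnable C version of that selection idea, as a sketch: it uses a Lomuto-style partition around the last element and a 0-based k, rather than whatever partition routine the pseudocode above assumes.

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition around list[right]; returns the pivot's final index. */
static int partition(int list[], int left, int right)
{
    int pivot = list[right];
    int store = left;

    for (int i = left; i < right; i++)
        if (list[i] < pivot)
            swap(&list[i], &list[store++]);
    swap(&list[store], &list[right]);
    return store;
}

/* Quickselect: return the kth smallest element (k is 0-based) of
   list[left..right].  Average time is O(n); worst case is O(n²). */
int select_kth(int list[], int left, int right, int k)
{
    while (left < right) {
        int p = partition(list, left, right);
        if (k == p)
            return list[k];
        else if (k < p)
            right = p - 1;
        else
            left = p + 1;
    }
    return list[left];
}

int main(void)
{
    int v[] = {7, 2, 9, 4, 1, 5, 8};
    printf("3rd smallest: %d\n", select_kth(v, 0, 6, 2));   /* prints 4 */
    return 0;
}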
Problem #2 Here is an example of how binomial coefficients are used in combinatorics. Let's say there are n ice cream toppings to choose from. If one wishes to create an ice cream sundae with exactly k toppings, then the binomial coefficient expresses how many different types of such k-topping ice cream sundaes are possible.
We are interested in designing an algorithm to calculate a binomial coefficient. This time, we present the solutions in order of increasing efficiency.
• Just apply the formula for C( n,k ): n! / ((n - k)! * k!). If we only have to compute one binomial coefficient, this brute force calculation works fine. Another idea, if we don't have to do the calculation often, is to use this definition directly, assuming we have a factorial function: choose(m, n) = fact( m ) / (fact( n ) * fact(m - n)). As you can imagine, this is extremely slow, particularly if we use a recursive version of factorial!
• If we have to do it frequently, we can compute Pascal's triangle once and do searches. This is a dynamic programming approach.
• Finally, the most elegant solution of all is one described in the Lilavati, over 850 years ago. This runs in O( n ) time. (A short C sketch of the last two approaches follows this list.)
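Here is a short C sketch of those last two ideas: a Pascal's-triangle dynamic programming routine, and the multiplicative formula C(n, k) = (n / 1) * ((n - 1) / 2) * ... * ((n - k + 1) / k) in the spirit of the Lilavati result, which needs only O(k) multiplications (at most O(n)). The function names and the example in main() are just for illustration.

#include <stdio.h>

/* Pascal's triangle, built row by row with C(i, j) = C(i-1, j-1) + C(i-1, j);
   no factorials are ever computed. */
unsigned long long choose_dp(int n, int k)
{
    unsigned long long row[k + 1];

    for (int j = 0; j <= k; j++)
        row[j] = 0;
    row[0] = 1;
    for (int i = 1; i <= n; i++)
        for (int j = (i < k ? i : k); j >= 1; j--)
            row[j] += row[j - 1];    /* update the row in place, right to left */
    return row[k];
}

/* Multiplicative formula: each partial product is itself a binomial
   coefficient, so the division is always exact. */
unsigned long long choose_mult(int n, int k)
{
    unsigned long long result = 1;

    for (int i = 1; i <= k; i++)
        result = result * (n - k + i) / i;
    return result;
}

int main(void)
{
    printf("C(10, 4) = %llu and %llu\n", choose_dp(10, 4), choose_mult(10, 4));  /* both 210 */
    return 0;
}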