

  1. Chapter 5 Fundamental Techniques

  2. Acknowledgments • In addition to the textbook slides and my slides, I used some from Dr. Ying Lu of the University of Nebraska–Lincoln, especially on the dynamic programming solution of the 0/1 Knapsack Problem.

  3. We’ll look at 3 very fundamental design paradigms: • Greedy method • Often used in problems involving • weighted graphs • data compression • Divide-and-conquer method • Already seen in merge-sort and quick-sort • Here we’ll concentrate on analyzing problems solved by this method by solving recurrence relations • Dynamic programming • Very powerful technique IF we can build a certain characterization • Used in solving many problems that superficially do not seem to have much in common. There are other paradigms, but these are really quite basic.

  4. Note: • Because Microsoft PowerPoint is a pain to use for subscripts and superscripts, we will often use the following conventions in these slides: • 1) When variables are single letters, such as x, we'll write xi to denote x with subscript i. • 2) When possible, exponentials will be marked with a ^, i.e. 2^(a + b) denotes 2 raised to the power a + b. • For logarithms to base b, we’ll write log(b,a), i.e. what you raise b to in order to obtain a.

  5. The Greedy Method

  6. The Greedy Method Technique - Summary • The greedy method is a general algorithm design paradigm, built on the following elements: • configurations: different choices, collections, or values to find • an objective function: a score assigned to configurations, which we want to either maximize or minimize. • It works best when applied to problems with the greedy-choice property: a globally-optimal solution can always be found by a series of local improvements from a starting configuration.

  7. Problems That Can Be Solved by the Greedy Method • A game like chess can be won only by thinking ahead. • A player focusing entirely on immediate advantage is usually easy to defeat. • In some games, however, this is not the case. • For example, in Scrabble, a player can do quite well by simply making whatever move seems best at the moment and not worrying about future consequences.

  8. Problems That Can Be Solved by the Greedy Method • If this myopic behavior can be used, then it is easy and convenient. • Thus, when applicable, the greedy method, where an algorithm builds up a solution piece by piece, can be quite attractive. • Although this technique can be quite disastrous for some computational tasks, there are many problems for which it yields an optimal algorithm.

  9. On each step of the algorithm, the choice must be: • Feasible - i.e. it satisfies the problem's constraints • Locally optimal - i.e. it is the best local choice among all feasible choices available at that step • Irrevocable - i.e. once made, it cannot be changed on subsequent steps of the algorithm. "Greed, for lack of a better word, is good! Greed is right! Greed works!" - Gordon Gekko, played by Michael Douglas in the film Wall Street (1987)

  10. Theory Behind the Technique That Justifies It • Actually rather sophisticated. • Based on an abstract combinatorial structure called a matroid. • We won't go into that here, but, if interested, see Cormen, T.H., Leiserson, C.E., Rivest, R.L., and Stein, C., Introduction to Algorithms, 2nd edition, MIT Press, Cambridge, MA, 2001. Note: this book is often used in graduate-level algorithms courses.

  11. When using a greedy algorithm, if we want to guarantee an optimal solution, we must prove that our method of choosing the next item works. • There are times, as we will see later, when we are willing to settle for a good approximation to an optimal solution. • The greedy technique is often useful in those cases, even though we don't obtain optimality.

  12. Example: Making Change • Problem: a dollar amount to reach and a collection of coin denominations to use to get there. • Configuration: a dollar amount yet to return to a customer plus the coins already returned. • Objective function: minimize the number of coins returned. • Greedy solution: always return the largest coin you can. • Example 1: coins are valued $.32, $.08, $.01. This system has the greedy-choice property, since no amount over $.32 can be made with a minimum number of coins by omitting a $.32 coin (similarly for amounts over $.08 but under $.32, etc.). • Example 2: coins are valued $.30, $.20, $.05, $.01. This system does not have the greedy-choice property, since $.40 is best made with two $.20's, but the greedy solution will pick three coins (which ones?). • Note that not all problems as posed above have an optimal greedy solution.
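
  To make the two coin systems concrete, here is a minimal Python sketch of the greedy change-maker (our own rendering, not from the slides; amounts are in cents to avoid fractions of a dollar):

    def greedy_change(amount, coins):
        """Always return the largest coin that fits; may be suboptimal
        if the coin system lacks the greedy-choice property."""
        chosen = []
        for coin in sorted(coins, reverse=True):
            while amount >= coin:
                amount -= coin
                chosen.append(coin)
        return chosen

    print(greedy_change(40, [32, 8, 1]))      # [32, 8]: optimal, 2 coins
    print(greedy_change(40, [30, 20, 5, 1]))  # [30, 5, 5]: 3 coins, but two $.20's beat it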

  13. The Fractional Knapsack Problem • Given: a set S of n items, with each item i having • bi - a positive benefit • wi - a positive weight • Goal: choose items with maximum total benefit but with total weight at most W. • The value of an item is its benefit/weight ratio. • If we are allowed to take fractional amounts, then this is the fractional knapsack problem. • In this case, we let xi denote the amount we take of item i, with 0 ≤ xi ≤ wi. • Objective: maximize Σ(i in S) bi(xi/wi) • Constraint: Σ(i in S) xi ≤ W

  14. Example • Given: a set S of n items, with each item i having • bi - a positive benefit • wi - a positive weight • Goal: choose items with maximum total benefit but with total weight at most W. The "knapsack" here holds W = 10 ml.

    Item:           1      2      3      4      5
    Weight:         4 ml   8 ml   2 ml   6 ml   1 ml
    Benefit:        $12    $32    $40    $30    $50
    Value ($/ml):   3      4      20     5      50

  • Solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, 1 ml of item 2 (total benefit $50 + $40 + $30 + $4 = $124).

  15. The Fractional Knapsack Algorithm

    Algorithm fractionalKnapsack(S, W)
      Input: set S of items with benefit bi and weight wi; maximum total weight W
      Output: amount xi of each item i to maximize benefit with total weight at most W
      for each item i in S
        xi ← 0
        vi ← bi / wi              {value index}
      w ← 0                       {total weight}
      while w < W
        remove item i with highest vi
        xi ← min{wi, W - w}
        w ← w + min{wi, W - w}

  • Greedy choice: keep taking the item with the highest value index (benefit-to-weight ratio bi/wi). • Run time: O(n log n). Why? • Use a max-heap priority queue keyed on vi.
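
  A runnable Python sketch of the algorithm (our rendering; heapq serves as the max-heap priority queue, keyed on the value index vi = bi/wi):

    import heapq

    def fractional_knapsack(items, W):
        """items is a list of (benefit, weight) pairs; returns the total
        benefit and the amount x[i] taken of each item."""
        # Python's heapq is a min-heap, so negate the value index.
        heap = [(-(b / w), i, w) for i, (b, w) in enumerate(items)]
        heapq.heapify(heap)
        x = [0.0] * len(items)
        benefit, w_used = 0.0, 0.0
        while heap and w_used < W:
            neg_v, i, w = heapq.heappop(heap)   # item with highest value index
            take = min(w, W - w_used)           # as much of it as fits
            x[i] = take
            w_used += take
            benefit += -neg_v * take            # value per ml times amount taken
        return benefit, x

    # The 10 ml example from the previous slide:
    items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]
    print(fractional_knapsack(items, 10))   # benefit 124.0; amounts [0, 1, 2, 6, 1]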

  16. Need to Prove This Type of Strategy Works for This Type of Problem to Yield an Optimal Solution • Theorem: Given a collection S of n items, such that each item i has a benefit bi and a weight wi, we can construct a maximum-benefit subset of S, allowing for fractional amounts, that has a total weight W by choosing at each step the largest possible amount xi of the item with the largest ratio bi/wi. Usually only the last choice takes a fraction of an item. Moreover, this can be done in O(n log n) time. • Proof: A maximum-benefit subset of S is one which maximizes Σ(i in S) bi(xi/wi) subject to Σ(i in S) xi ≤ W.

  17. Proof Continued • The fractional knapsack problem satisfies the greedy-choice property using the algorithm given on slide 15 (Alg. 5.1 in the text). • Suppose there are two items, i and j, such that xi < wi, xj > 0, and vi > vj (see the errata for the last inequality). • Let y = min{wi - xi, xj}. • We could then replace an amount y of item j with an equal amount of item i, thus increasing the total benefit without changing the total weight. • Therefore, we can compute optimal amounts for the items by greedily choosing items with the largest value index. • Using a max-heap priority queue, this can clearly be done in O(n log n) time.

  18. 0/1 Knapsack • This is the case when each item is either not taken (0) or taken in its entirety (1). • This problem does not have the greedy-choice property. • As we will see, this is a much harder problem. • The fractional knapsack problem has the greedy-choice property precisely because on the last choice, a fraction of an item can be taken.
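
  A tiny demonstration (our own, not from the slides) that the fractional-knapsack greedy rule fails here: with capacity 50 and items of (benefit, weight) = (60, 10), (100, 20), (120, 30), greedy by value index takes the first two whole items for benefit 160, while the optimal 0/1 choice is the last two, for benefit 220.

    from itertools import combinations

    items = [(60, 10), (100, 20), (120, 30)]   # value indexes 6, 5, 4
    W = 50

    # Greedy by value index, taking whole items only.
    w = benefit = 0
    for b, wt in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w + wt <= W:
            w, benefit = w + wt, benefit + b
    print(benefit)   # 160

    # Exhaustive search over all 0/1 subsets finds the true optimum.
    best = max(sum(b for b, _ in sub)
               for r in range(len(items) + 1)
               for sub in combinations(items, r)
               if sum(wt for _, wt in sub) <= W)
    print(best)      # 220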

  19. Other Problems That Can Use the Greedy Method • There are many, as we will see later. Here are a few: • You are to network a collection of computers by linking selected pairs of them. Each link has a maintenance cost, reflected in a weight attached to the link. What is the cheapest possible network? • The MP3 audio compression scheme encodes a sound signal by using something called a Huffman encoding. In simple terms, given symbols A, B, C, and D, what is the shortest way that a binary string can encode the letters so any string can be decoded unambiguously?

  20. Other Problems That Use the Greedy Method • Horn formulas lie at the heart of the language Prolog ("programming by logic"). The workhorse of the Prolog interpreter is a greedy algorithm called the Horn Clause Satisfiability Algorithm. • Find the cheapest route from city A to city B given a cost associated with each road between various cities on a map - i.e. find the minimum-weight path between two vertices on a graph. • Change the last problem to ask for the minimum-weight path between A and every city reachable from A by a series of roads.

  21. Not Optimal, But a Good Approximation • Sometimes the greedy method can be used even when the greedy-choice property doesn't hold. • That will often lead to a pretty good approximation to the optimal solution. • An example: A county is in its early stages of planning and is deciding where to put schools. A set of towns is given, with the distance between towns given by road length. There are two constraints: each school should be in a town (not in a rural area), and no one should have to travel more than 30 miles to reach a school. What is the minimum number of schools needed?

  22. Task Scheduling • [Figure: seven tasks scheduled on Machines 1-3 over the time interval 1-9] • Given: a set T of n tasks, each having: • a start time, si • a finish time, fi (where si < fi) • Goal: perform all the tasks using a minimum number of "machines." • Two tasks i and j can execute on the same machine only if fi ≤ sj or fj ≤ si (they are called non-conflicting).

  23. Example • Given: a set T of n tasks, each having: • a start time, si • a finish time, fi (where si < fi) • Goal: perform all tasks on the minimum number of machines. • Assume T is [4,7], [7,8], [1,4], [1,3], [2,5], [3,7], [6,9]. • Ordered by start time: [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8]. • [Figure: the resulting schedule on Machines 1-3 over the time interval 1-9]

  24. Task Scheduling Algorithm

    Algorithm taskSchedule(T)
      Input: set T of tasks, each with start time si and finish time fi
      Output: non-conflicting schedule with a minimum number of machines
      m ← 0                       {number of machines}
      while T is not empty
        remove task i with smallest si
        if there's a machine j with no task conflicting with i then
          schedule i on machine j
        else
          m ← m + 1
          schedule i on machine m

  • Greedy choice: consider tasks by their start time and use as few machines as possible with this order. • Run time: O(n log n). Why? • Correctness: suppose there is a better schedule that uses only k - 1 machines while the algorithm uses k. Let i be the first task scheduled on machine k. Task i must conflict with k - 1 other tasks, one already running on each of the other machines at time si. These k tasks mutually conflict, so there is no non-conflicting schedule using k - 1 machines, a contradiction.
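
  A Python sketch of taskSchedule (our rendering; a min-heap of machine finish times implements the slide's "if there's a machine j for i" test):

    import heapq

    def task_schedule(tasks):
        """tasks is a list of (start, finish) pairs; returns the number of
        machines used and the (task, machine) assignments."""
        free = []        # min-heap of (finish time of machine's last task, machine id)
        m = 0            # number of machines allocated so far
        schedule = []
        for s, f in sorted(tasks):              # consider tasks by start time
            if free and free[0][0] <= s:        # some machine is free by time s
                _, j = heapq.heappop(free)
            else:                               # every machine conflicts with the task
                m += 1
                j = m
            heapq.heappush(free, (f, j))
            schedule.append(((s, f), j))
        return m, schedule

    tasks = [(4, 7), (7, 8), (1, 4), (1, 3), (2, 5), (3, 7), (6, 9)]
    print(task_schedule(tasks)[0])   # 3, as in the example on slide 23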

  25. Divide-and-Conquer • [Figure: merge-sort recursion tree for the sequence 7 2 9 4, dividing into 7 2 and 9 4, sorting the halves to 2 7 and 4 9, and merging into 2 4 7 9]

  26. Divide-and-Conquer • Divide-and-conquer is a general algorithm design paradigm: • Divide: divide the input data S into two or more disjoint subsets S1, S2, … • Recur: solve the subproblems recursively • Conquer: combine the solutions for S1, S2, …, into a solution for S • The base cases for the recursion are subproblems of constant size • Analysis can be done using recurrence equations

  27. Merge-Sort • Merge-sort on an input sequence S with n elements consists of three steps: • Divide: partition S into two sequences S1 and S2 of about n/2 elements each • Recur: recursively sort S1 and S2 • Conquer: merge S1 and S2 into a unique sorted sequence

    Algorithm mergeSort(S, C)
      Input: sequence S with n elements, comparator C
      Output: sequence S sorted according to C
      if S.size() > 1
        (S1, S2) ← partition(S, n/2)
        mergeSort(S1, C)
        mergeSort(S2, C)
        S ← merge(S1, S2)
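
  A runnable Python rendering of mergeSort (a sketch; Python's default ordering stands in for the comparator C):

    def merge_sort(S):
        """Divide-and-conquer sort; returns a new sorted list."""
        if len(S) <= 1:                  # base case of constant size
            return S
        mid = len(S) // 2                # divide into two halves
        S1, S2 = merge_sort(S[:mid]), merge_sort(S[mid:])   # recur
        merged, i, j = [], 0, 0          # conquer: merge the sorted halves
        while i < len(S1) and j < len(S2):
            if S1[i] <= S2[j]:
                merged.append(S1[i]); i += 1
            else:
                merged.append(S2[j]); j += 1
        return merged + S1[i:] + S2[j:]

    print(merge_sort([7, 2, 9, 4]))   # [2, 4, 7, 9]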

  28. Recurrence Equation Analysis • The conquer step of merge-sort, which merges two sorted sequences of n/2 elements each implemented as doubly linked lists, takes at most bn steps, for some constant b. • Likewise, the base case (n < 2) takes at most b steps. • Therefore, if we let T(n) denote the running time of merge-sort: T(n) = b if n < 2, and T(n) = 2T(n/2) + bn if n ≥ 2. • We can therefore analyze the running time of merge-sort by finding a closed-form solution to the above equation. • That is, a solution that has T(n) only on the left-hand side.

  29. Iterative Substitution • In the iterative substitution, or "plug-and-chug," technique, we iteratively apply the recurrence equation to itself and see if we can find a pattern: T(n) = 2T(n/2) + bn = 2(2T(n/4) + b(n/2)) + bn = 4T(n/4) + 2bn = 8T(n/8) + 3bn = … = 2^i T(n/2^i) + i·bn. • The base case, T(n) = b, occurs when 2^i = n, that is, when i = log n. • It looks like T(n) = bn + bn log n is a possible closed form. • Thus, T(n) is O(n log n), if we can prove this solves the recurrence relation previously developed. How: by induction.

  30. Another Approach: Examine the Recursion Tree to Find a Closed Form • Draw the recursion tree for the recurrence relation and look for a pattern: each level of the tree contributes bn time in total, and there are log n levels above the base level. • Total time = bn + bn log n (last level plus all previous levels).

  31. Still Another Method: The Guess-and-Test Method • In the guess-and-test method, we guess a closed-form solution and then try to prove it is true by induction. Consider the recurrence T(n) = b if n < 2, and T(n) = 2T(n/2) + bn log n if n ≥ 2. • Guess: T(n) ≤ cn log n for some c > 0 and n > n0. Then T(n) = 2T(n/2) + bn log n ≤ 2(c(n/2) log(n/2)) + bn log n = cn log n - cn + bn log n. • Wrong: we cannot make this last line be less than cn log n.

  32. Guess-and-Test Method, Part 2 • Recall the recurrence equation: T(n) = 2T(n/2) + bn log n for n ≥ 2. • Guess #2: T(n) ≤ cn log^2 n. If c > b, then T(n) ≤ 2(c(n/2) log^2(n/2)) + bn log n = cn(log n - 1)^2 + bn log n = cn log^2 n - 2cn log n + cn + bn log n ≤ cn log^2 n. • So, T(n) is O(n log^2 n), which can be proved by induction. • In general, to use this method, you need to have a good guess and you need to be good at induction proofs. • Note: this often doesn't produce a tight (optimal) bound.
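
  A small numeric sanity check (our own addition, taking b = 1 and n a power of 2): unwinding the recurrence and dividing by n log^2 n, the ratio settles toward a constant, consistent with T(n) being O(n log^2 n):

    from functools import lru_cache
    from math import log2

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = 2T(n/2) + b*n*log n with b = 1; T(n) = b for n < 2
        return 1 if n < 2 else 2 * T(n // 2) + n * log2(n)

    for k in (4, 8, 12, 16, 20):
        n = 2 ** k
        print(n, T(n) / (n * log2(n) ** 2))   # ratio approaches 1/2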

  33. The Master Method • Each of the methods explored in the earlier slides is very ad hoc. • They require some mathematical sophistication as well as the ability to do induction proofs easily. • There is a method, called the Master Method, which can be used for solving many recurrence relations and does not require an induction proof each time. • The use of recursion trees and the Master Theorem is based on work in Cormen, Leiserson, and Rivest, Introduction to Algorithms, McGraw-Hill, 1990. • More methods are discussed in Aho, Hopcroft, and Ullman, Data Structures and Algorithms, Addison-Wesley, 1983.

  34. Master Method • Many divide-and-conquer recurrence equations have the form: T(n) = c if n < d, and T(n) = aT(n/b) + f(n) if n ≥ d, where d ≥ 1 is an integer constant, a > 0, c > 0, and b > 1 are real constants. • The Master Theorem: Let f(n) and T(n) be defined as above. • Case 1: if f(n) is O(n^(log(b,a) - ε)) for some constant ε > 0, then T(n) is Θ(n^log(b,a)). • Case 2: if f(n) is Θ(n^log(b,a) · (log n)^k) for some fixed integer k ≥ 0, then T(n) is Θ(n^log(b,a) · (log n)^(k+1)). • Case 3: if f(n) is Ω(n^(log(b,a) + ε)) for some constant ε > 0, and if a·f(n/b) ≤ δ·f(n) for some constant δ < 1 and sufficiently large n, then T(n) is Θ(f(n)).

  35. Using the Master Method, Example 1 • Example: T(n) = 4T(n/2) + n. • Solution: let a = 4, b = 2, ε = 1, and f(n) = n. Then n^log(b,a) = n^log(2,4) = n^2, and f(n) = n is clearly O(n^(2 - 1)). So, by Case 1 of the Master Theorem, T(n) is Θ(n^2).

  36. Master Method, Example 2 • Example: T(n) = 2T(n/2) + n log n. • Solution: let a = 2, b = 2, k = 1, and f(n) = n log n. Then n^log(b,a) = n^log(2,2) = n, and, clearly, f(n) is Θ(n log n). Thus, by Case 2 of the Master Theorem, T(n) is Θ(n log^2 n).

  37. Master Method, Example 3 • Example: T(n) = T(n/3) + n. • Solution: let a = 1, b = 3, ε = 1, δ = 1/3, and f(n) = n. Then n^log(b,a) = n^log(3,1) = n^0 = 1, and f(n) = n is clearly Ω(n^(0 + 1)). Moreover, a·f(n/3) = n/3 = (1/3)·f(n), so the second condition of Case 3 is met with δ = 1/3. By Case 3 of the Master Theorem, T(n) is Θ(n).

  38. Master Method, Example 4 • Example: [recurrence figure not preserved in the transcript] • Solve this one for homework.

  39. Master Method, Example 5 • Example: [recurrence figure not preserved in the transcript] • Solve this one for homework.

  40. Master Method, Example 6 • Example: T(n) = T(n/2) + 1 (binary search). • Solve for homework.

  41. Master Method, Example 7 • Example: T(n) = 2T(n/2) + log n (bottom-up heap construction). • Solve for homework.

  42. Iterative "Proof" of the Master Theorem • Using iterative substitution, let us see if we can find a pattern: T(n) = aT(n/b) + f(n) = a(aT(n/b^2) + f(n/b)) + f(n) = a^2 T(n/b^2) + a·f(n/b) + f(n) = … = a^log(b,n) T(1) + Σ (i = 0 to log(b,n) - 1) a^i f(n/b^i) = n^log(b,a) T(1) + Σ (i = 0 to log(b,n) - 1) a^i f(n/b^i). • The last substitution comes from the identity a^log(b,n) = n^log(b,a) (Thm. 1.14.5, p. 23).

  43. Iterative "Proof" of the Master Theorem (Continued) • We then distinguish the three cases: • 1 - f(n) is small, and the first term, n^log(b,a) T(1), dominates. • 2 - Each term of the summation is proportional to the others, so T(n) is f(n) times a logarithmic factor. • 3 - The summation is a geometric series with decreasing terms starting with f(n), so the sum is proportional to its first term and T(n) is proportional to f(n).

  44. Proving the Master Theorem • The previous work just hints that the Master Theorem could be true. • An induction proof would be needed to prove it rigorously. • Because of the 3 cases and the complicated algebra, rather than rigorously proving the Master Theorem, we'll assume it is true and use it to develop algorithms.

  45. Problem: Big-Integer Multiplication • Problem: given two n-bit integers, I and J, that can't be handled by the hardware of a machine, devise an algorithm with good complexity that multiplies these two numbers. • Applications: encryption schemes used in security work. • Note: the common grade-school algorithm is Θ(n^2) when bit multiplications are counted. • Can we do better? We will assume n is a power of 2; otherwise, pad with zeroes. • Note: this provides an alternate way of doing what we did in the first homework assignment, which tacitly assumed the hardware could handle the products.

  46. Some Neat Observations • Multiplying a binary number I by a power of two is trivial: just shift left k bits to multiply by 2^k. • So, assuming each one-bit shift takes constant time, multiplying a binary number by 2^k can be done in O(k) time. • Notation: if we split an integer I into two parts, we let Ih be the high-order bits and Il be the low-order bits.

  47. Integer Multiplication • Algorithm: multiply two n-bit integers I and J. • Divide step: split I and J into their high-order and low-order bits: I = Ih·2^(n/2) + Il and J = Jh·2^(n/2) + Jl. • We can then define I*J by multiplying the parts and adding: I*J = Ih·Jh·2^n + (Ih·Jl + Il·Jh)·2^(n/2) + Il·Jl. • We use this as the basis of a recursive algorithm.

  48. Idea of the Algorithm • Divide the bit representations of I and J in half. • Recursively compute the 4 products of n/2 bits each, as above, and merge the solutions to these subproducts in O(n) time using addition and multiplication by powers of 2. • Terminate the recursion when we need to multiply two 1-bit numbers. • The recurrence relation for the running time is T(n) = 4T(n/2) + cn.
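
  A Python sketch of this four-product recursion (our rendering; Python's built-in * stands in for the 1-bit base case, and n must be a power of 2):

    def multiply(I, J, n):
        """Multiply two n-bit integers with 4 recursive n/2-bit products;
        by the analysis on the next slide this runs in Theta(n^2) time."""
        if n == 1:
            return I * J                           # 1-bit base case
        half = n // 2
        Ih, Il = I >> half, I & ((1 << half) - 1)  # high- and low-order bits
        Jh, Jl = J >> half, J & ((1 << half) - 1)
        return ((multiply(Ih, Jh, half) << n)
                + ((multiply(Ih, Jl, half) + multiply(Il, Jh, half)) << half)
                + multiply(Il, Jl, half))

    print(multiply(13, 11, 8))   # 143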

  49. Complexity of T(n) • So, T(n) = 4T(n/2) + cn. • Unfortunately, using the Master Theorem with a = 4 and b = 2, we note log(2,4) = 2. • So T(n) is Θ(n^2)... no good! • That is no better than the algorithm we learned in grade school. • But the Master Theorem tells us we can do better if we can reduce the number of recursive calls. • But how to do that? Can we be REALLY clever?

  50. An Improved Integer Multiplication Algorithm • Algorithm: multiply two n-bit integers I and J. • Divide step: split I and J into their high-order and low-order bits, as before. • Observe that there is a different way to multiply parts: I*J = Ih·Jh·2^n + [(Ih - Il)·(Jl - Jh) + Ih·Jh + Il·Jl]·2^(n/2) + Il·Jl, which requires only 3 distinct products of n/2-bit numbers: Ih·Jh, Il·Jl, and (Ih - Il)·(Jl - Jh).
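
  A Python sketch of the improved (Karatsuba) algorithm using the identity above (our rendering; the explicit sign handling, which lets the third product also recurse on n/2-bit magnitudes, is our addition). The recurrence drops to T(n) = 3T(n/2) + cn, which the Master Theorem solves as Θ(n^log(2,3)), roughly Θ(n^1.585):

    def karatsuba(I, J, n):
        """Multiply two n-bit integers with only 3 recursive products."""
        if n == 1:
            return I * J
        half = n // 2
        Ih, Il = I >> half, I & ((1 << half) - 1)
        Jh, Jl = J >> half, J & ((1 << half) - 1)
        hh = karatsuba(Ih, Jh, half)
        ll = karatsuba(Il, Jl, half)
        # (Ih - Il)(Jl - Jh) may be negative: recurse on the magnitudes
        # (each fits in n/2 bits) and reattach the sign.
        a, b = Ih - Il, Jl - Jh
        sign = -1 if (a < 0) != (b < 0) else 1
        mid = sign * karatsuba(abs(a), abs(b), half) + hh + ll   # = Ih*Jl + Il*Jh
        return (hh << n) + (mid << half) + ll

    print(karatsuba(13, 11, 8))   # 143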
