Chapter 5 Fundamental Techniques
Acknowledgement • In addition to the textbook slides and my slides, I used some from Dr. Ying Lu of the University of Nebraska–Lincoln, especially on the dynamic programming solution of the 0/1 Knapsack Problem.
We’ll look at 3 very fundamental design paradigms: • Greedy method • Often used in problems involving • weighted graphs • data compression problems • Divide-and-conquer method • Already seen in merge-sort and quick-sort • Here we’ll concentrate on analyzing algorithms designed with this method by solving recurrence relations • Dynamic programming • Very powerful technique IF we can build a certain characterization • Used in solving many problems that superficially do not seem to have much in common. There are other paradigms, but these are really quite basic.
Note: • Because Microsoft PowerPoint is a pain to use for subscripts and superscripts, we will often use the following conventions in these slides: • 1) When variables are single letters, such as x, we will use xi as an alternate notation for x with subscript i. • 2) Exponentials may be denoted with a ^, i.e. 2^(a+b) is an alternate notation for 2 raised to the power a+b. • 3) For logarithms with base b, we will use the alternate notation log(b,a), i.e. what you raise b to in order to obtain a.
The Greedy Method Technique (Summary) The greedy method is a general algorithm design paradigm, built on the following elements: • configurations: different choices, collections, or values to find • an objective function: a score assigned to configurations, which we want to either maximize or minimize It works best when applied to problems with the greedy-choice property: a globally-optimal solution can always be found by a series of local improvements from a starting configuration.
Problems That Can Be Solved by the Greedy Method • A game like chess can be won by thinking ahead. • But, a player focusing entirely on their immediate advantage is usually easy to defeat. • In some games, this is not the case. • For example, in Scrabble, the player can do quite well by simply making whatever move seems best at the moment and not worrying about future consequences.
Problems That Can Be Solved by the Greedy Method • If this myopic behavior works, then it is easy and convenient to use. • Thus, when applicable, the greedy method, where an algorithm builds up a solution piece by piece, can be quite attractive. • Although this technique can be quite disastrous for some computational tasks, there are many problems for which it yields an optimal algorithm.
On each step in the algorithm, the choice must be: • Feasible – i.e., it satisfies the problem’s constraints • Locally optimal – i.e., it has to be the best local choice among all feasible choices available at that step • Irrevocable – i.e., once made, it cannot be changed on subsequent steps of the algorithm “Greed, for lack of a better word, is good! Greed is right! Greed works!” – Gordon Gekko, played by Michael Douglas in the film Wall Street (1987)
Theory Behind the Technique That Justifies It • Actually rather sophisticated. • Based on an abstract combinatorial structure called a matroid. • We won’t go into that here, but, if interested, see Cormen, T.H., Leiserson, C.E., Rivest, R.L., and Stein, C., Introduction to Algorithms, 2nd edition, MIT Press, Cambridge, MA, 2001. Note: The above book is used in many graduate-level algorithms courses, including ours.
When using a greedy algorithm, if we want to guarantee an optimal solution we must prove that our method of choosing the next item works. • There are times, as we will see later, when we are willing to settle for a good approximation to an optimal solution. • The greedy technique is often useful in those cases even when we don’t obtain optimality.
Example: Making Change Problem: A dollar amount to reach and a collection of coin denominations to use to get there. Configuration: A dollar amount yet to return to a customer, plus the coins already returned. Objective function: Minimize the number of coins returned. Greedy solution: Always return the largest coin you can. Example 1: Coins are valued $.32, $.08, $.01. This has the greedy-choice property, since no amount over $.32 can be made with a minimum number of coins by omitting a $.32 coin (similarly for amounts over $.08 but under $.32, etc.). Example 2: Coins are valued $.30, $.20, $.05, $.01. This does not have the greedy-choice property, since $.40 is best made with two $.20’s, but the greedy solution will pick three coins (which ones?). Note that not all problems posed this way have a greedy solution. A small code sketch of this strategy follows.
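Below is a minimal Python sketch of this greedy change-making strategy (the function name and structure are mine, not from the slides). Running it reproduces both examples, including the three-coin failure on $.40:

    def greedy_change(amount, coin_values):
        # Repeatedly take the largest coin that still fits (amounts in cents).
        coins = []
        for coin in sorted(coin_values, reverse=True):
            while amount >= coin:
                amount -= coin
                coins.append(coin)
        return coins

    # Example 1: has the greedy-choice property.
    print(greedy_change(40, [32, 8, 1]))      # [32, 8] -- 2 coins, optimal

    # Example 2: no greedy-choice property; $.40 is best made as two $.20s,
    # but greedy returns [30, 5, 5] -- 3 coins.
    print(greedy_change(40, [30, 20, 5, 1]))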
The Fractional Knapsack Problem • Given: A set S of n items, with each item i having • bi – a positive benefit • wi – a positive weight • Goal: Choose items with maximum total benefit but with weight at most W. • The value of an item is its benefit/weight ratio. • If we are allowed to take fractional amounts, then this is the fractional knapsack problem. • In this case, we let xi denote the amount we take of item i, where 0 ≤ xi ≤ wi. • Objective: maximize the sum over all i in S of bi(xi/wi) • Constraint: the sum over all i in S of xi is at most W
Example • Given: A set S of n items, with each item i having • bi – a positive benefit • wi – a positive weight • Goal: Choose items with maximum total benefit but with weight at most W = 10 ml (the “knapsack”).

    Items:          1      2      3      4      5
    Weight:       4 ml   8 ml   2 ml   6 ml   1 ml
    Benefit:       $12    $32    $40    $30    $50
    Value ($/ml):    3      4     20      5     50

• Solution: • 1 ml of item 5 • 2 ml of item 3 • 6 ml of item 4 • 1 ml of item 2
The Fractional Knapsack Algorithm

    Algorithm fractionalKnapsack(S, W)
      Input: set S of items with benefit bi and weight wi; max. weight W
      Output: amount xi of each item i to maximize benefit with weight at most W
      for each item i in S
        xi ← 0
        vi ← bi / wi          {value}
      w ← 0                   {total weight}
      while w < W
        remove item i with highest vi
        xi ← min{wi, W − w}
        w ← w + min{wi, W − w}

• Greedy choice: Keep taking the item with the highest value (benefit-to-weight ratio bi/wi) • Run time: O(n log n). Why? • Use a max-heap priority queue
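The algorithm translates almost directly into Python. Here is a hedged sketch (names are mine), using heapq to simulate the max-heap priority queue by negating values; it reproduces the 10 ml example on the previous slide, returning a total benefit of 124:

    import heapq

    def fractional_knapsack(items, W):
        # items: list of (benefit, weight) pairs; returns max total benefit.
        # Max-heap on value = benefit/weight (negated for Python's min-heap).
        heap = [(-b / w, w) for (b, w) in items]
        heapq.heapify(heap)
        total, room = 0.0, W
        while room > 0 and heap:
            neg_value, w = heapq.heappop(heap)   # item with highest value
            x = min(w, room)                     # amount of this item to take
            total += -neg_value * x              # value * amount taken
            room -= x
        return total

    items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]   # the 10 ml example
    print(fractional_knapsack(items, 10))                   # 124.0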
Need to Prove This Strategy Yields an Optimal Solution for This Problem • Theorem: Given a collection S of n items, such that each item i has a benefit bi and a weight wi, we can construct a maximum-benefit subset of S, allowing for fractional amounts, that has total weight W, by choosing at each step as much as possible of the item with the largest ratio bi/wi. (The last choice will usually take only a fraction of an item.) Moreover, this can be done in O(n log n) time. • Proof: A maximum-benefit subset of S is one which maximizes the sum of bi(xi/wi) subject to the sum of the xi being at most W.
Proof Continued • Suppose a better solution exists. Then there is an item j that could be chosen at some step whose contribution to the sum would have been higher than that of the chosen item i. • That is, vi·xi = (bi/wi)·xi < (bj/wj)·xi = vj·xi. • This is impossible, as it would imply vi < vj, but vi = bi/wi was chosen to be the largest remaining ratio at that step. • Therefore, we can compute optimal amounts for the items by greedily choosing the item with the largest value. • Using a max-heap priority queue, this can clearly be done in O(n log n) time.
0/1 Knapsack • This is the case when each item is either not taken at all (0) or taken in its entirety (1). • This problem does not have the greedy-choice property. • As we will see, this is a much harder problem. • The Fractional Knapsack Problem has the greedy-choice property because on the last choice, a fraction of an item can be taken.
Other Problems That Can Use the Greedy Method • There are many, as we will see later. • Here are a few: • You are to network a collection of computers by linking selected pairs of them. Each link has a maintenance cost, reflected in a weight attached to the link. What is the cheapest possible network? • The MP3 audio compression scheme encodes a sound signal by using something called a Huffman encoding. In simple terms, given symbols A, B, C, and D, what is the shortest way to encode each symbol as a binary string so that any encoded message can be decoded unambiguously?
Other Problems That Use the Greedy Method • Horn formulas lie at the heart of the language Prolog ("programming by logic"). The workhorse of the Prolog interpreter is a greedy algorithm called the Horn Clause Satisfiability Algorithm. • Find the cheapest route from city A to city B given a cost associated with each road between various cities on a map - i.e. find the minimum-weight path between two vertices on a graph. • Change the last problem to ask for the minimum-weight path between A and every city reachable from A by a series of roads.
Not Optimal, But a Good Approximation • Sometimes the greedy method can be used even when the greedy-choice property doesn’t hold. • That will often lead to a pretty good approximation to the optimal solution. • An example: A county in the early stages of planning must decide where to put schools. A set of towns is given, with the distance between towns given by road length. There are two constraints: each school should be in a town (not in a rural area), and no one should have to travel more than 30 miles to reach a school. What is the minimum number of schools needed? A greedy sketch for this problem follows.
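One natural greedy heuristic for this school-placement problem is greedy set cover: repeatedly put a school in the town that covers the most still-uncovered towns. Below is a hedged Python sketch (all names and the distance format are mine); it is not guaranteed optimal, but greedy set cover is a provably reasonable approximation for covering problems of this kind.

    def greedy_schools(towns, dist, radius=30):
        # dist[t][u]: road distance between towns t and u (dist[t][t] == 0).
        # A school placed in town t covers every town within `radius` miles.
        covers = {t: {u for u in towns if dist[t][u] <= radius} for t in towns}
        uncovered, schools = set(towns), []
        while uncovered:
            # Greedy choice: the town whose school covers the most uncovered towns.
            best = max(towns, key=lambda t: len(covers[t] & uncovered))
            schools.append(best)
            uncovered -= covers[best]
        return schools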
Task Scheduling [Figure: a schedule of tasks on Machines 1–3 over the time axis 1–9.] • Given: a set T of n tasks, each having: • A start time, si • A finish time, fi (where si < fi) • Goal: Perform all the tasks using a minimum number of “machines.” • Two tasks i and j can execute on the same machine only if fi ≤ sj or fj ≤ si (they are called non-conflicting).
Example • Given: a set T of n tasks, each having: • A start time, si • A finish time, fi (where si < fi) • Goal: Perform all tasks on the minimum number of machines. • Assume T is [4,7], [7,8], [1,4], [1,3], [2,5], [3,7], [6,9] [Figure: the resulting three-machine schedule over the time axis 1–9.] • Order by start time: • [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8]
Task Scheduling Algorithm

    Algorithm taskSchedule(T)
      Input: set T of tasks with start time si and finish time fi
      Output: non-conflicting schedule with minimum number of machines
      m ← 0    {no. of machines}
      while T is not empty
        remove task i with smallest si
        if there’s a machine j available for i then
          schedule i on machine j
        else
          m ← m + 1
          schedule i on machine m

• Greedy choice: consider tasks by their start time and use as few machines as possible with this order. • Run time: O(n log n). Why? • Correctness: Suppose a better schedule uses k − 1 machines while the algorithm uses k. • Let i be the first task scheduled on machine k. • Task i must conflict with k − 1 other tasks, one running on each of the other machines. • All k of these tasks overlap at the start time of i, so there is no non-conflicting schedule using only k − 1 machines.
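In Python, the machine bookkeeping can itself be done with a priority queue: keep a min-heap of machine finish times, and reuse a machine whenever the earliest-finishing one is free by the next task’s start. This is a hedged sketch (structure is mine); it reports 3 machines on the example above:

    import heapq

    def min_machines(tasks):
        # tasks: list of (start, finish) pairs with start < finish.
        machines = []                            # min-heap of finish times
        for s, f in sorted(tasks):               # consider tasks by start time
            if machines and machines[0] <= s:    # a machine is free by time s
                heapq.heapreplace(machines, f)   # reuse it
            else:
                heapq.heappush(machines, f)      # open a new machine
        return len(machines)

    tasks = [(4, 7), (7, 8), (1, 4), (1, 3), (2, 5), (3, 7), (6, 9)]
    print(min_machines(tasks))                   # 3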
Divide-and-Conquer [Figure: the merge-sort recursion tree for the sequence 7 2 9 4, dividing into 7 2 and 9 4 and merging back up to 2 4 7 9.]
Divide-and-Conquer Divide-and-conquer is a general algorithm design paradigm: • Divide: divide the input data S into two or more disjoint subsets S1, S2, … • Recur: solve the subproblems recursively • Conquer: combine the solutions for S1, S2, …, into a solution for S • The base case of the recursion is a subproblem of constant size • Analysis can be done using recurrence equations
Merge-Sort Merge-sort on an input sequence S with n elements consists of three steps: • Divide: partition S into two sequences S1 and S2 of about n/2 elements each • Recur: recursively sort S1 and S2 • Conquer: merge S1 and S2 into a unique sorted sequence

    Algorithm mergeSort(S, C)
      Input: sequence S with n elements, comparator C
      Output: sequence S sorted according to C
      if S.size() > 1
        (S1, S2) ← partition(S, n/2)
        mergeSort(S1, C)
        mergeSort(S2, C)
        S ← merge(S1, S2)
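For comparison with the pseudocode, here is a short runnable Python version (a sketch that sorts in ascending order rather than taking a comparator C):

    def merge_sort(S):
        if len(S) <= 1:                # base case: constant size
            return S
        mid = len(S) // 2              # divide
        S1 = merge_sort(S[:mid])       # recur
        S2 = merge_sort(S[mid:])
        merged, i, j = [], 0, 0        # conquer: merge sorted halves
        while i < len(S1) and j < len(S2):
            if S1[i] <= S2[j]:
                merged.append(S1[i]); i += 1
            else:
                merged.append(S2[j]); j += 1
        return merged + S1[i:] + S2[j:]

    print(merge_sort([7, 2, 9, 4]))    # [2, 4, 7, 9]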
Recurrence Equation Analysis • The conquer step of merge-sort, merging two sorted sequences of n/2 elements each (implemented by means of a doubly linked list), takes at most bn steps, for some constant b. • Likewise, the base case (n < 2) takes at most b steps. • Therefore, if we let T(n) denote the running time of merge-sort:

    T(n) = b                 if n < 2
    T(n) = 2T(n/2) + bn      if n ≥ 2

• We can therefore analyze the running time of merge-sort by finding a closed-form solution to the above equation. • That is, a solution that has T(n) only on the left-hand side.
Iterative Substitution • In the iterative substitution, or “plug-and-chug,” technique, we iteratively apply the recurrence equation to itself and see if we can find a pattern:

    T(n) = 2T(n/2) + bn
         = 2(2T(n/2^2) + b(n/2)) + bn = 2^2 T(n/2^2) + 2bn
         = 2^3 T(n/2^3) + 3bn
         ...
         = 2^i T(n/2^i) + i·bn

• Note that the base case, T(n) = b, occurs when 2^i = n. That is, when i = log n. • It looks like T(n) = bn + bn log n is a possible closed form. • Thus, T(n) is O(n log n), provided we can prove this closed form satisfies the recurrence relation. How? By induction.
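Before doing the induction, we can at least sanity-check the guess numerically. This small script (my own check, not from the slides) confirms that bn + bn·log n agrees with the recurrence on powers of two:

    from math import log2

    b = 3                                  # any positive constant works

    def T(n):
        # The merge-sort recurrence, evaluated directly.
        return b if n < 2 else 2 * T(n // 2) + b * n

    for n in [2, 4, 8, 16, 1024]:
        assert T(n) == b * n + b * n * log2(n)
    print("closed form matches the recurrence on powers of two")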
Another Approach: Examine the Recursion Tree to Find a Closed Form • Draw the recursion tree for the recurrence relation and look for a pattern: at depth i there are 2^i nodes, each on a subproblem of size n/2^i costing b(n/2^i) time, so every level costs bn in total. • Total time = bn + bn log n (the last level plus all previous levels).
Still Another Method: The Guess-and-Test Method • In the guess-and-test method, we guess a closed-form solution for a recurrence relation and try to prove it is true by induction. Consider:

    T(n) = b                       if n = 1
    T(n) = 2T(n/2) + bn log n      if n > 1

Note: Changed “n < 2” in the text to “n = 1” to avoid errors. • Guess: T(n) < cn log n for some c > 0 and n > n0.

    T(n) = 2T(n/2) + bn log n
         < 2(c(n/2) log(n/2)) + bn log n
         = cn(log n − 1) + bn log n
         = cn log n − cn + bn log n

• ERROR: We can’t make the last line less than cn log n, since the extra bn log n term grows faster than the cn we have available to absorb it.
Guess-and-Test Method, Part 2 • Recall the previous recurrence equation:

    T(n) = b                       if n = 1
    T(n) = 2T(n/2) + bn log n      if n > 1

• Guess #2: T(n) < cn log^2 n for some c > 0. Choosing c > b:

    T(n) = 2T(n/2) + bn log n
         < 2(c(n/2) log^2(n/2)) + bn log n
         = cn(log n − 1)^2 + bn log n
         = cn log^2 n − 2cn log n + cn + bn log n
         ≤ cn log^2 n

• So, T(n) is O(n log^2 n), which can be proved by induction. • In general, to use this method, you need to have a good guess and you need to be good at induction proofs. • Note: This method often doesn’t produce the tightest possible complexity class.
The Master Method • Each of the methods explored in the earlier slides is rather ad hoc. • They require some mathematical sophistication as well as the ability to do induction proofs easily. • There is a method, called the Master Method, which can be used for solving many recurrence relations and does not require an induction proof. • The use of recursion trees and the Master Theorem is based on work in Cormen, Leiserson, and Rivest, Introduction to Algorithms, McGraw-Hill, 1990. • More methods are discussed in Aho, Hopcroft, and Ullman, Data Structures and Algorithms, Addison-Wesley, 1983.
Master Method • Many divide-and-conquer recurrence equations have the form:

    T(n) = c                  if n < d
    T(n) = aT(n/b) + f(n)     if n ≥ d

where a > 0, c > 0, b > 1, and f(n) ≥ 0 for n ≥ d. • Master Theorem: Let f(n) and T(n) be defined as above, and let ε > 0 and 0 < δ < 1 be constants. Then:
  1. if f(n) is O(n^(log(b,a) − ε)), then T(n) is Θ(n^log(b,a))
  2. if f(n) is Θ(n^log(b,a) · log^k n) for some fixed k ≥ 0, then T(n) is Θ(n^log(b,a) · log^(k+1) n)
  3. if f(n) is Ω(n^(log(b,a) + ε)), and a·f(n/b) ≤ δ·f(n) for n ≥ d, then T(n) is Θ(f(n))
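When f(n) is a simple polynomial n^k, the three cases reduce to comparing k with log(b,a). The sketch below (my own helper, not part of the theorem) automates that comparison; note that for f(n) = n^k with k > log(b,a), the regularity condition a·f(n/b) ≤ δ·f(n) holds automatically with δ = a/b^k < 1.

    from math import log, isclose

    def master(a, b, k):
        # Solve T(n) = a*T(n/b) + n^k and return a Theta() description.
        e = log(a, b)                        # critical exponent log(b,a)
        if isclose(k, e):
            return f"Theta(n^{e:g} log n)"   # Case 2 (log-power 0 in the theorem)
        if k < e:
            return f"Theta(n^{e:g})"         # Case 1: n^log(b,a) dominates
        return f"Theta(n^{k:g})"             # Case 3: f(n) dominates

    print(master(4, 2, 1))   # Theta(n^2), matching Example 1 below
    print(master(1, 3, 1))   # Theta(n^1), matching Example 3 below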
Using the Master Method, Example 1 • Recall the general form and the Master Theorem above. • Example:

    T(n) = 4T(n/2) + n

Solution: Let a = 4, b = 2, and ε = 1. Then n^log(2,4) = n^2, and f(n) = n is clearly O(n^(2−ε)) = O(n). So, by Case 1 of the Master Method, T(n) is Θ(n^2).
Master Method, Example 2 • Example:

    T(n) = 2T(n/2) + n log n

Solution: Let a = 2, b = 2, k = 1, and f(n) = n log n. Then n^log(2,2) = n and, clearly, f(n) is Θ(n log^1 n). Thus, by Case 2 of the Master Method, T(n) is Θ(n log^2 n).
Master Method, Example 3 • Example:

    T(n) = T(n/3) + n

Solution: Let a = 1, b = 3, ε = 1, δ = 1/3, and f(n) = n. Then n^log(3,1) = n^0 = 1, and f(n) = n is clearly Ω(n^(0+ε)). Moreover, a·f(n/3) = n/3 = (1/3)·f(n), so the second condition of Case 3 is met. By Case 3 of the Master Method, T(n) is Θ(n).
Master Method, Example 4 • Example: [recurrence not reproduced in these notes] • Solve this one for homework.
Master Method, Example 5 • Example: [recurrence not reproduced in these notes] • Solve this for homework.
Master Method, Example 6 • Example (binary search):

    T(n) = T(n/2) + 1

• Solve for homework.
Master Method, Example 7 • Example (bottom-up heap construction):

    T(n) = 2T(n/2) + log n

• Solve for homework.
Iterative “Proof” of the Master Theorem • Using iterative substitution, let us see if we can find a pattern:

    T(n) = aT(n/b) + f(n)
         = a^2 T(n/b^2) + a·f(n/b) + f(n)
         = a^3 T(n/b^3) + a^2·f(n/b^2) + a·f(n/b) + f(n)
         ...
         = a^log(b,n) T(1) + sum for i = 0 to log(b,n) − 1 of a^i·f(n/b^i)
         = n^log(b,a) T(1) + sum for i = 0 to log(b,n) − 1 of a^i·f(n/b^i)

• The last substitution comes from the identity a^log(b,n) = n^log(b,a). (Thm. 1.14.5, p. 23)
Iterative “Proof” of the Master Theorem (Continued) • We then distinguish the three cases: • 1) f(n) is small and the first term, n^log(b,a) T(1), is dominant, so T(n) is Θ(n^log(b,a)). • 2) Each term of the summation is proportional to the others, so T(n) is f(n) times a logarithmic factor. • 3) The summation is a geometric series with decreasing terms whose first and largest term is f(n), so T(n) is proportional to f(n).
Proving the Master Theorem • The previous work just hints that the Master Theorem is true. • A full induction proof would be needed to prove it. • Because of the 3 cases and the complicated algebra, rather than rigorously proving the Master Theorem, we’ll simply use it to analyze algorithms and assume it is true.
Problem: Big-Integer Multiplication • Problem: Given two n-bit integers, I and J, that can’t be handled by the hardware of a machine, devise an algorithm with good complexity that multiplies these two numbers. • Applications: Encryption schemes used in security work. • Note: The common grade-school algorithm is Θ(n^2) when bit multiplications are counted. • Can we do better? We will assume n is a power of 2; otherwise, pad with zeroes. • Note: This provides an alternate route to the faster multiplication algorithm that was discussed in the first set of slides (i.e., the “Introduction”).
Some Neat Observations: • Multiplying a binary number I by a power of two is trivial: just shift left k bits to multiply by 2^k. • So, assuming a single-bit shift takes constant time, multiplying a binary number by 2^k can be done in O(k) time. • Notation: If we split an integer I into two parts, we let Ih be the high-order bits and Il be the low-order bits.
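A quick check in Python (whose arbitrary-precision integers stand in nicely for big n-bit integers):

    I, k = 0b1011, 3             # I = 11
    assert I << k == I * 2**k    # shifting left k bits multiplies by 2^k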
Integer Multiplication • Algorithm: Multiply two n-bit integers I and J. • Divide step: Split I and J into high-order and low-order bits:

    I = Ih·2^(n/2) + Il
    J = Jh·2^(n/2) + Jl

• We can then define I·J by multiplying the parts and adding:

    I·J = Ih·Jh·2^n + (Ih·Jl + Il·Jh)·2^(n/2) + Il·Jl

• We use this as the basis of a recursive algorithm.
Idea of the algorithm: • Divide the bit representations of I and J in half. • Recursively compute the 4 products of n/2 bits each, as above, and merge the solutions to these subproducts in O(n) time using addition and multiplication by powers of 2. • Terminate the recursion when we need to multiply two 1-bit numbers. • The recurrence relation for the running time is T(n) = 4T(n/2) + cn.
Complexity of T(n) • So, T(n) = 4T(n/2) + cn. • Unfortunately, using the Master Theorem, we note that log(2,4) = 2 and f(n) = cn is O(n^(2−ε)), so Case 1 applies. • So T(n) is Θ(n^2)... no good! • That is no better than the algorithm we learned in grade school. • But the Master Theorem tells us we can do better if we can reduce the number of recursive calls. • But how to do that? Can we be REALLY clever?
An Improved Integer Multiplication Algorithm • Algorithm: Multiply two n-bit integers I and J. • Divide step: Split I and J into high-order and low-order bits, as before. • Observe that there is a different way to multiply parts:

    I·J = Ih·Jh·2^n + [(Ih − Il)·(Jl − Jh) + Ih·Jh + Il·Jl]·2^(n/2) + Il·Jl

• This requires only 3 distinct products of n/2-bit numbers: Ih·Jh, Il·Jl, and (Ih − Il)·(Jl − Jh).
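Here is a hedged Python sketch of this three-multiplication scheme (often known as Karatsuba’s algorithm; the function name and bit-handling details are mine). Note that (Ih − Il) or (Jl − Jh) may be negative, so the sketch recurses on magnitudes and restores the sign afterward:

    def multiply(I, J, n):
        # Multiply nonnegative n-bit integers I and J, where n is a power of 2.
        if n == 1:
            return I * J                           # 1-bit base case
        half = n // 2
        Ih, Il = I >> half, I & ((1 << half) - 1)  # split into high/low bits
        Jh, Jl = J >> half, J & ((1 << half) - 1)
        hh = multiply(Ih, Jh, half)                # Ih*Jh
        ll = multiply(Il, Jl, half)                # Il*Jl
        a, b = Ih - Il, Jl - Jh                    # middle factors (may be < 0)
        mid = multiply(abs(a), abs(b), half)
        if (a < 0) != (b < 0):
            mid = -mid                             # restore the sign
        return (hh << n) + ((mid + hh + ll) << half) + ll

    print(multiply(0b1010, 0b1011, 4))   # 110, i.e., 10 * 11

Since this makes only 3 recursive calls on n/2-bit inputs plus O(n) work for shifts and additions, the recurrence becomes T(n) = 3T(n/2) + cn, which the Master Theorem (Case 1) solves as Θ(n^log(2,3)), roughly O(n^1.585).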