
ALGORITHM TYPES



  1. ALGORITHM TYPES Greedy, Divide and Conquer, Dynamic Programming, Randomized Algorithms, and Backtracking. Note the general strategy in the examples. The classification is neither exhaustive (there may be more types) nor mutually exclusive (strategies may be combined). We are now emphasizing the design of algorithms, not data structures.

  2. When the divide-and-conquer strategy divides a problem down to very small subproblems, and some of those subproblems are computed repeatedly, one can apply a bottom-up approach instead: compute the smaller components first, then keep combining them until the highest level of the problem is solved. Draw the recursion tree of the Fibonacci-series calculation and you will see an example of such repeated calculations. f(n) = f(n-1) + f(n-2) for n > 1; f(n) = 1 otherwise. fib(n) calculation: n = 1, 2, 3, 4, 5 gives fib = 1, 2, 3, 5, 8. PROBLEM 1: DYNAMIC PROGRAMMING STRATEGY

  3. DYNAMIC PROGRAMMING STRATEGY (Continued)
  Recursive fib(n): if (n <= 1) return 1; else return fib(n-1) + fib(n-2).
  Time complexity: exponential, O(k^n) for some k > 1.0.
  Iterative fibonacci(n): fib(0) = fib(1) = 1; for i = 2 through n do fib(i) = fib(i-1) + fib(i-2); end for; return fib(n).
  Time complexity: O(n). Space complexity: O(n).

  4. SpaceSaving-fibonacci(n): if (n <= 1) return 1; int last = 1, last2last = 1, result = 1; for i = 2 through n do result = last + last2last; last2last = last; last = result; end for; return result.
  Time complexity: O(n). Space complexity: O(1). DYNAMIC PROGRAMMING STRATEGY (Continued)
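The space-saving variant above can be sketched in Python (a minimal sketch; the function name is mine, not from the slides):

```python
def fib(n):
    """Space-saving iterative Fibonacci: O(n) time, O(1) space.
    Uses the slides' convention fib(0) = fib(1) = 1."""
    if n <= 1:
        return 1
    last, last2last = 1, 1  # fib(i-1) and fib(i-2)
    for _ in range(2, n + 1):
        last, last2last = last + last2last, last
    return last

# n   = 1, 2, 3, 4, 5
# fib = 1, 2, 3, 5, 8
```

Only the two most recent values are kept, which is what reduces the space from O(n) to O(1).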

  5. In the fib(5) calculation, the recursive version recalculates fib(1) 5 times, fib(2) 3 times, and fib(3) 2 times; the complexity is exponential. In the iterative calculation we avoid this repetition by storing the needed values in variables, giving complexity of order n. The Dynamic Programming approach spends more memory to store the results of lower-level calculations for use in computing the next higher level; they are typically stored in a table. DYNAMIC PROGRAMMING STRATEGY (Continued)

  6. Given a set of objects, each with a (Weight, Profit) pair, and a knapsack of limited weight capacity M, find a subset of objects for the knapsack that maximizes profit. DP uses a representation of the optimal profit P(j, k) for the first j objects (in any arbitrary ordering of the objects) with a variable knapsack limit k. Develop the table with rows j = 1 through N (#objects), and for each row go from k = 0 through M (the knapsack limit). Finally, P(N, M) holds the result. 0-1 Knapsack Problem (not in the book)

  7. 0-1 Knapsack Recurrence for bottom-up computing of optimum profit • The recurrence, for all k = 0…M, j = 0…N: (case 1) P(j, k) = P(j-1, k), if w_j > k, where w_j is the weight of the j-th object; else (case 2) P(j, k) = max{P(j-1, k-w_j) + p_j, P(j-1, k)} • The explanation for the formula is quite intuitive • The recursion terminates at: P(0, k) = 0 and P(j, 0) = 0, for all k's and j's

  8. 0-1 knapsack recursive algorithm Algorithm P(j, k) if j == 0 or k == 0 then return 0; // recursion termination else if w_j > k then return P(j-1, k) else return max{P(j-1, k-w_j) + p_j, P(j-1, k)} End algorithm. Driver: call P(n, M), for the given n objects & knapsack limit M. Complexity: ?

  9. 0-1 Knapsack DP-algorithm For all j, k: P(0, k) = P(j, 0) = 0; // initialize For j = 1 to n do For k = 1 to M do if w_j > k then P(j, k) = P(j-1, k) else P(j, k) = max{P(j-1, k-w_j) + p_j, P(j-1, k)} Complexity: O(NM), pseudo-polynomial because M is an input value, not the input's size. With weights of one decimal place (e.g., M = 30.5), scaling to integers makes the table of size O(10NM); with two decimal places (e.g., M = 30.54), O(100NM).

  10. 0-1 Knapsack Problem (Example) Objects (wt, p) = {(2, 1), (3, 2), (5, 3)}. M = 8
       k = 0  1  2  3  4  5  6  7  8
  j=1:     0  0  1  1  1  1  1  1  1
  j=2:     0  0  1  2  2  3  3  3  3
  j=3:     0  0  1  2  2  3  3  4  5
  HOW TO FIND KNAPSACK CONTENT FROM TABLE? SPACE COMPLEXITY?
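The table-filling algorithm, plus a back-trace that answers the "how to find knapsack content" question, can be sketched as follows (a sketch in Python; function names are mine, not from the slides — the back-trace takes an item whenever its table entry differs from the row above):

```python
def knapsack(items, M):
    """Bottom-up 0-1 knapsack. items: list of (weight, profit).
    Returns the full (n+1) x (M+1) table P; P[n][M] is the optimum."""
    n = len(items)
    P = [[0] * (M + 1) for _ in range(n + 1)]  # P(0,k) = P(j,0) = 0
    for j in range(1, n + 1):
        w, p = items[j - 1]
        for k in range(1, M + 1):
            if w > k:                      # case 1: object j does not fit
                P[j][k] = P[j - 1][k]
            else:                          # case 2: take the better option
                P[j][k] = max(P[j - 1][k - w] + p, P[j - 1][k])
    return P

def chosen(items, P, M):
    """Back-trace the table: object j was taken iff P[j][k] != P[j-1][k]."""
    picks, k = [], M
    for j in range(len(items), 0, -1):
        if P[j][k] != P[j - 1][k]:
            picks.append(items[j - 1])
            k -= items[j - 1][0]
    return picks[::-1]
```

With the slide's data, `knapsack([(2, 1), (3, 2), (5, 3)], 8)` reproduces the table, and the back-trace recovers the subset {(3, 2), (5, 3)} of total weight 8 and profit 5.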

  11. Memoisation algorithm: 0-1 knapsack
  Algorithm P(j, k)
  if j == 0 or k == 0 then return 0; // recursion termination
  else if w_j > k then
      y = M(j-1, k); if y < 0 { y = P(j-1, k); M(j-1, k) = y }; // P( ) is a recursive call
      return y
  else
      x = M(j-1, k-w_j); if x < 0 { x = P(j-1, k-w_j); M(j-1, k-w_j) = x };
      y = M(j-1, k); if y < 0 { y = P(j-1, k); M(j-1, k) = y };
      M(j, k) = max{x + p_j, y};
      return M(j, k)
  End algorithm.
  Driver: initialize a global matrix M(0..n, 0..M) with -1; call P(n, M). Complexity: ?
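The same idea in Python, using a dictionary as the memo table instead of a -1-initialized matrix (a sketch under that substitution; names are mine):

```python
def knapsack_memo(items, M):
    """Top-down 0-1 knapsack with memoisation: only the (j, k)
    subproblems actually reached by the recursion get computed."""
    memo = {}

    def P(j, k):
        if j == 0 or k == 0:
            return 0                      # recursion termination
        if (j, k) in memo:                # already computed: reuse
            return memo[(j, k)]
        w, p = items[j - 1]
        if w > k:                         # object j does not fit
            v = P(j - 1, k)
        else:
            v = max(P(j - 1, k - w) + p, P(j - 1, k))
        memo[(j, k)] = v
        return v

    return P(len(items), M)
```

A dictionary avoids allocating the full (n+1) x (M+1) matrix up front; the "is it -1?" check of the slide becomes a key-membership test.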

  12. A chain of matrices to be multiplied: ABCD, with dimensions A(5x1), B(1x4), C(4x3), and D(3x6). The resulting matrix will be of size (5x6). The number of scalar (integer) multiplications for (BC) is 1·4·3, and the resulting matrix's dimension is (1x3): 1 row and 3 columns. Problem 2: Ordering of Matrix-chain Multiplications

  13. Ordering of Matrix-chain Multiplications • There are multiple ways to order the multiplication: (A(BC))D, A(B(CD)), (AB)(CD), ((AB)C)D, and A((BC)D) • The resulting matrix would be the same • but the efficiency of the calculation may vary drastically • Efficiency depends on the number of scalar multiplications • For (A(BC))D, it is 1·4·3 + 5·1·3 + 5·3·6 = 117 • For A(B(CD)), it is 4·3·6 + 1·4·6 + 5·1·6 = 126 • Our problem here is to find the best such ordering • An exhaustive search is too expensive: the number of orderings is a Catalan number, which grows exponentially

  14. For a sequence A_1 ... (A_left ... A_right) ... A_n, we want to find the optimal break point for the parenthesized sequence. Calculate the (right - left + 1) cases and find the minimum: min{(A_left ... A_i)(A_{i+1} ... A_right), with left <= i < right}. Recurrence for the optimum number of scalar multiplications: M(left, right) = min{M(left, i) + M(i+1, right) + row_left · col_i · col_right, with left <= i < right}. Start the calculation at the lowest level, with pairs of matrices: AB, BC, CD, etc. Then calculate for triplets: ABC, BCD, etc. And so on… Recurrence for Ordering of Matrix-chain Multiplications

  15. Matrix-chain Recursive algorithm Recursive Algorithm M(left, right) if left >= right return 0 // recursion termination: no scalar-mult else return min{M(left, i) + M(i+1, right) + row_left · col_i · col_right, for left <= i < right}; end algorithm. Driver: call M(1, n) for the final answer.

  16. Matrix-chain DP-algorithm
  for all 1 <= r <= l <= n do M[l][r] = 0; // diagonal and lower triangle 0
  for size = 2 to n do // size of the subchain
      for l = 1 to n-size+1 do
          r = l + size - 1; // move along a diagonal
          M[l][r] = infinity; // minimizer
          for i = l to r-1 do
              x = M[l][i] + M[i+1][r] + row_l · col_i · col_r;
              if x < M[l][r] then M[l][r] = x;
  // Complexity?

  17. A1(5x3), A2(3x1), A3(1x4), A4(4x6).
  c(1,1) = c(2,2) = c(3,3) = c(4,4) = 0
  c(1,2) = c(1,1) + c(2,2) + 5·3·1 = 0 + 0 + 15 = 15
  c(1,3) = min{ i=1: c(1,1)+c(2,3)+5·3·4, i=2: c(1,2)+c(3,3)+5·1·4 } = min{72, 35} = 35 (i=2)
  c(1,4) = min{ i=1: c(1,1)+c(2,4)+5·3·6, i=2: c(1,2)+c(3,4)+5·1·6, i=3: c(1,3)+c(4,4)+5·4·6 } = min{132, 69, 155} = 69 (i=2)
  69 comes from the break-point i=2: (A1·A2)(A3·A4). You may need to recursively break the sub-parts too, by looking at which value of i gave the minimum at that stage; e.g., for (1,3) it was i=2: (A1·A2)A3. Ordering of Matrix-chain Multiplications (Example)

  18. Ordering of Matrix-chain Multiplications (Example)
       j = 1   2    3      4
  i=1:     0   15   35(2)  69(2)
  i=2:         0    12     42
  i=3:              0      24
  i=4:                     0
  The calculation proceeds diagonally: first the pairs, then the triplets, and so on. COMPLEXITY?
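The table above can be reproduced with a short sketch of the DP algorithm (Python; the dimension-vector encoding, where matrix i has shape dims[i-1] x dims[i], is an assumption of mine):

```python
def matrix_chain(dims):
    """Matrix-chain ordering DP. dims = [r1, c1=r2, c2=r3, ...]:
    matrix i is dims[i-1] x dims[i]. Returns cost table M (1-indexed),
    where M[l][r] = min scalar multiplications for chain l..r."""
    n = len(dims) - 1                     # number of matrices
    M = [[0] * (n + 1) for _ in range(n + 1)]
    for size in range(2, n + 1):          # subchain length: pairs first
        for l in range(1, n - size + 2):
            r = l + size - 1              # move along a diagonal
            M[l][r] = float('inf')
            for i in range(l, r):         # try every break point
                x = M[l][i] + M[i + 1][r] + dims[l - 1] * dims[i] * dims[r]
                if x < M[l][r]:
                    M[l][r] = x
    return M
```

For the slide's chain A1(5x3), A2(3x1), A3(1x4), A4(4x6), the call `matrix_chain([5, 3, 1, 4, 6])` yields the table above, with M[1][4] = 69. Three nested loops over at most n values each give the O(n^3) complexity asked about.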

  19. Computing the actual break points
       j = 1  2  3    4    5    6
  i=1:     -  -  (2)  (2)  (3)  (3)
  i=2:        -  -    (3)  (3)  (4)
  i=3:           -    -    (4)  (4)
  i=4:                -    (3)  (4)
  i=5:                     -    -
  i=6:                          -
  ABCDEF -> (ABC)(DEF) -> ((AB)C)(D(EF))

  20. Problem 3: Optimal Binary Search Tree [RECALL the BINARY-SEARCH TREE for efficiently accessing an element of an ordered list] Input: each key's access frequency. Cost of a key: frequency × access steps to the key. The access-step count of a node is its distance from the root plus one. Different tree organizations have different aggregate access steps over all nodes, because the depths of the nodes differ between trees. Problem: we want to find the optimal aggregate access-step count, and a binary-search-tree organization that produces it.

  21. Optimal Binary Search Tree (Continued) Say the optimal cost for a sub-tree is C(left, right). Note that when this sub-tree sits at depth one (under a parent, within another, higher-level sub-tree), EACH of its nodes' access-step counts increases by 1 in the higher-level sub-tree.

  22. [Figure: keys a1 a2 … aleft … ak … aright … an, with ak at the root (frequency fk), left sub-tree cost C(left, k-1), right sub-tree cost C(k+1, right)]

  23. Recurrence for Optimal Binary Search Tree If the i-th node is the root of this sub-tree, then
  C(left, right) = min over left <= i <= right of { f(i) + C(left, i-1) + C(i+1, right) + sum_{j=left..i-1} f(j) + sum_{j=i+1..right} f(j) } [additional costs]
  = min over left <= i <= right of { C(left, i-1) + C(i+1, right) + sum_{j=left..right} f(j) }
  We will use this formula to compute C(1, n), starting from one-element sub-trees and finishing with the n-element full tree.

  24. Optimal Binary Search Tree (Continued) As with matrix-chain multiplication-ordering, we will develop the upper-triangular part of the cost matrix (right >= left), and we will develop it diagonally (right = left + size). Note that our boundary condition is different now: c(left, right) = 0 if left > right (a meaningless cost). We start from left = right, the single-node trees (not from left = right - 1, the pairs of matrices, as in the matrix-chain case). Also, i now goes from 'left' through 'right', and the i-th key is excluded from both sub-trees' C's.

  25. Optimal Binary Search Tree (Example) Keys: A1(7), A2(10), A3(5), A4(8), and A5(4). C(1,1) = 7, C(2,2) = 10, ….
  C(1,2) = min{ i=1: C(1,0)+C(2,2)+f1+f2 = 0+10+17, i=2: C(1,1)+C(3,2)+f1+f2 = 7+0+17 } = min{27, 24} = 24 (i=2)
       j = 1   2      3    4    5
  i=1:     7   24(2)  34   55   67
  i=2:     0   10     20   41   51
  i=3:         0      5    18   26
  i=4:                0    8    16
  i=5:                     0    4
  This algorithm is O(n^3); however, O(n^2) is feasible.
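The cost table above can be computed with a direct sketch of the recurrence (Python; 1-indexed to match the slides, with the zero-filled array supplying the C(left, right) = 0 boundary for left > right — naming is mine):

```python
def optimal_bst(f):
    """Optimal BST cost table. f[i-1] = access frequency of key i.
    C[l][r] = min aggregate access cost for keys l..r (1-indexed);
    entries with l > r stay 0, which is the boundary condition."""
    n = len(f)
    C = [[0] * (n + 2) for _ in range(n + 2)]  # n+2: allows index r+1
    for size in range(1, n + 1):               # single-node trees first
        for l in range(1, n - size + 2):
            r = l + size - 1                   # move along a diagonal
            total = sum(f[l - 1:r])            # sum_{j=l..r} f(j)
            C[l][r] = total + min(
                C[l][i - 1] + C[i + 1][r]      # root i: key i in neither subtree
                for i in range(l, r + 1)
            )
    return C
```

With the slide's frequencies, `optimal_bst([7, 10, 5, 8, 4])` reproduces the table, including the final answer C(1, 5) = 67. Recomputing `total` inside the loop is what keeps this version at O(n^3); precomputing prefix sums (and restricting the root search, per Knuth) is how the O(n^2) bound mentioned above is reached.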

  26. All-Pairs Shortest Path A variation on the theme of Dijkstra's algorithm, called the Floyd-Warshall algorithm. Good for dense graphs. Algorithm Floyd Copy the distance matrix into d[1..n][1..n]; for k = 1 through n do // consider each vertex as an updating candidate for i = 1 through n do for j = 1 through n do if (d[i][k] + d[k][j] < d[i][j]) then { d[i][j] = d[i][k] + d[k][j]; path[i][j] = k; } // last updated via k End algorithm. O(n^3), for the 3 nested loops.
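The triple loop translates almost line for line into Python (a sketch; 0-indexed, with `float('inf')` standing for "no edge" — the small test graph is my own, not from the slides):

```python
def floyd(dist):
    """Floyd-Warshall all-pairs shortest paths. dist: n x n matrix,
    dist[i][j] = edge weight or inf; updated in place.
    path[i][j] records the last intermediate vertex k used, or None."""
    n = len(dist)
    path = [[None] * n for _ in range(n)]
    for k in range(n):            # consider each vertex as an update candidate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    path[i][j] = k   # last updated via k
    return dist, path
```

For example, on the 3-vertex cycle with edges 0→1 (3), 1→2 (2), 2→0 (1), the algorithm fills in 0→2 = 5, 1→0 = 3, and 2→1 = 4.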
