
Design and Analysis of Algorithms



Presentation Transcript


  1. Design and Analysis of Algorithms Dr. Muhammad Safyan Spring 2019

  2. Introduction • The word Algorithm comes from the name of the Muslim author Abu Ja’far Mohammad ibn Musa al-Khowarizmi. • He was born in the eighth century at Khwarizm (Kheva), a town south of the river Oxus in present-day Uzbekistan. • Al-Khwarizmi's parents migrated to a place south of Baghdad when he was a child. • It has been established from his contributions that he flourished under Khalifah Al-Mamun at Baghdad from 813 to 833 C.E. Al-Khwarizmi died around 840 C.E.

  3. Definition • An algorithm is a mathematical entity, independent of any specific programming language, machine, or compiler. • Algorithm design is about the mathematical theory behind the design of good programs. • Good program design has multiple facets. • One must be aware of programming and machine issues as well.

  4. Analyzing Algorithms • What are the criteria for a good algorithm? • What are the criteria for measuring an algorithm? • We measure algorithms in terms of the amount of computational resources that the algorithm requires; this is also known as complexity analysis. • The primary resources are (i) running time and (ii) memory. • Others include the number of disk accesses in a database program and the communication bandwidth in a networking application.

  5. Computing resources • Memory space: the space required to store the data processed by the algorithm. • CPU time: also known as running time; the time needed to execute the operations of the algorithm. • An efficient algorithm uses a minimum amount of computing resources.

  6. Efficiency Analysis • There are two types of efficiency analysis. • Space analysis: how much space does an algorithm require to store the data in memory? • Running time analysis: how fast does an algorithm run? • The key observation in efficiency analysis is that the amount of computing resources depends on the input size (problem size). • Input size: the number of elements belonging to the input data. • Hence the main question to be answered by efficiency analysis is: how do the time and/or space needed by the algorithm depend on the input size?

  7. Space and time tradeoff • Often we have to make a compromise between space efficiency and time efficiency. • Example: adjacency matrix vs adjacency list for graph representation. • A sparse graph can be represented by an adjacency matrix, which is time-efficient for testing an edge, but at the cost of space. • A sparse graph can be represented by an adjacency list, which is space-efficient but takes longer to traverse the edges.
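The tradeoff on this slide can be sketched in Python. This is an illustration, not from the slides; the 4-vertex graph and its two edges are made up for the example.

```python
# Sketch (assumed example): two representations of the same sparse graph
# on 4 vertices with edges (0, 1) and (2, 3).
n = 4
edges = [(0, 1), (2, 3)]

# Adjacency matrix: n * n cells regardless of edge count; O(1) edge test.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency list: one entry per edge endpoint; an edge test scans a
# neighbour list, so it costs O(degree) instead of O(1).
adj = {u: [] for u in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(matrix[0][1], 1 in adj[0])  # both report the edge (0, 1)
```

For this sparse graph the matrix stores 16 cells while the list stores only 4 endpoint entries, which is the space/time compromise the slide describes.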

  8. How can time efficiency be measured? Our efficiency measure for running time must be independent of: • Programmer • Programming language • Machine

  9. RAM: Computational Model However, to estimate running time we must use some computational model. A computational model is an abstract machine having some properties. For this purpose we will use the RAM (Random Access Machine) computational model, having the following properties: • All processing steps are sequentially executed (there is no parallelism in the execution of the algorithm).

  10. RAM: Computational Model • All processing steps are sequentially executed (there is no parallelism in the execution of the algorithm). • The time of executing the basic operations does not depend on the values of the operands (there is no time difference between computing 1+2 and computing 12433+4567). • The time to access data does not depend on their address (there are no differences between processing the first element of an array and processing the last element). • All basic operations (assignment, arithmetic, logical, relational) take unit time.

  11. Loop-holes • The two numbers may be of any length. • Serialization

  12. Running Time Analysis • Concerned with measuring the execution time. • Concerned with the space (memory) required by the algorithm. • Ignored in running time analysis: • Speed of the computer • Programming language • Optimization by the compiler • Instead, count the number of basic steps executed. • Different inputs of the same size may result in different running times.

  13. Running Time Analysis
  • Two criteria for measuring running time are worst-case time and average-case time.
  • Worst-case time: the maximum running time over all (legal) inputs of size n. Let I denote an input instance, let |I| denote its length, and let T(I) denote the running time of the algorithm on input I. Then
  Tworst(n) = max{ T(I) : |I| = n }
  • Average-case time: the average running time over all inputs of size n. Let p(I) denote the probability of seeing input I. The average-case time is the weighted sum of running times with weights being the probabilities:
  Tavg(n) = Σ|I|=n p(I) · T(I)
  • We will almost always work with worst-case time; average-case time is more difficult to compute.
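The two definitions can be made concrete with a small example. This is an assumed illustration, not from the slides: linear search for a key at position pos of an n-element array makes pos + 1 comparisons, and we take the maximum and the uniformly weighted average over the n possible positions.

```python
# Assumed illustration: cost of linear search as a function of the key's
# position, and the resulting worst-case and average-case counts.
def comparisons(pos):
    # Touching positions 0..pos means pos + 1 comparisons.
    return pos + 1

n = 8
costs = [comparisons(p) for p in range(n)]
worst = max(costs)                           # max over all inputs of size n
average = sum(c * (1.0 / n) for c in costs)  # weighted sum with p(I) = 1/n

print(worst, average)  # 8 4.5, i.e. n and (n + 1) / 2
```

The worst case is n comparisons, while the uniform average is (n + 1)/2, showing why the average case needs a probability model while the worst case does not.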


  16. Example 1: running time
  Now we see how running time expresses the dependence of the number of executed operations on the input size.
  Example: swapping

  operation   cost   iterations
  aux ← x     c1     1
  x ← y       c1     1
  y ← aux     c1     1

  T(n) = 3c1, where c1 is some constant, so 3c1 is also a constant. We conclude that the running time of this algorithm is constant: it is independent of the input size.
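A minimal runnable version of the swap above: three assignments execute no matter what the values are, so the cost is the constant 3c1.

```python
# The three-assignment swap from the slide; each line costs c1 on the RAM
# model, for a total of 3c1 regardless of the input values.
def swap(x, y):
    aux = x   # cost c1
    x = y     # cost c1
    y = aux   # cost c1
    return x, y

print(swap(3, 7))  # (7, 3)
```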

  17. Example 2: running time
  Example 2: Compute the sum of the series s = 1 + 2 + . . . + n
  precondition: n ≥ 1    postcondition: s = 1 + 2 + . . . + n    input: n

  1: s ← 0
  2: i ← 1
  3: while i ≤ n do
  4:   s ← s + i
  5:   i ← i + 1
  6: end while

  Running time is a linear function of the input size.
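The pseudocode above can be run directly, with a counter for loop-body iterations so the linear dependence on n is visible.

```python
# Runnable version of Example 2, counting loop-body iterations.
def series_sum(n):
    assert n >= 1          # precondition
    s, i, steps = 0, 1, 0
    while i <= n:
        s += i             # line 4
        i += 1             # line 5
        steps += 1         # one execution of the loop body
    return s, steps

print(series_sum(10))   # (55, 10)
print(series_sum(100))  # (5050, 100): iterations grow linearly with n
```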

  22. Example 3: running time
  Example 3: Find the minimum in a non-empty array x[1..n]
  P: n ≥ 1    Q: m = min{ x[j] : j = 1, 2, . . . , n }    input: x[1..n]

  line                        cost
  1: m ← x[1]                 1
  2: for i ← 2, n do          2n
  3:   if x[i] < m then       n − 1
  4:     m ← x[i]             h(n)
  5:   end if
  6: end for
  7: return m

  T(n) = 1 + 2n + (n − 1) + h(n) = 3n + h(n)
  where h(n) is the number of times line 4 executes. The running time depends not only on n but also on the properties of the input data.
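A runnable version of Example 3 makes the role of h(n) concrete: it counts how often line 4 (m ← x[i]) runs, which depends on the order of the input, not only on its size n.

```python
# Runnable version of Example 3; h counts executions of the line m <- x[i].
def find_min(x):
    m, h = x[0], 0
    for i in range(1, len(x)):
        if x[i] < m:
            m = x[i]   # line 4: new minimum found
            h += 1
    return m, h

print(find_min([1, 2, 3, 4]))  # (1, 0): ascending input, h(n) = 0
print(find_min([4, 3, 2, 1]))  # (1, 3): descending input, h(n) = n - 1
```

Two inputs of the same size n = 4 give h(n) = 0 and h(n) = n − 1 respectively, which is exactly why the slide says the running time depends on properties of the input data.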

  26. Best-case analysis and worst-case analysis Whenever the analysis of an algorithm depends not only on the input size but also on some property of the input data, we have to perform the analysis in more detail. • Worst-case analysis: gives the longest running time for any input of size n. • Best-case analysis: gives the minimum running time for any input of size n. • The worst-case running time of an algorithm gives us an upper bound on the running time for any input; knowing it provides a guarantee that the algorithm will never take any longer. • The best-case running time of an algorithm gives us a lower bound on the running time for any input; knowing it provides a guarantee that the algorithm will never take less time.

  27. Example 4: sequential search
  Preconditions: x[1..n], n ≥ 1, v a value    Postconditions: found = TRUE when v ∈ x[1..n]    Input: x[1..n], v

  Algorithm: search                           cost
  1: found ← false                            1
  2: i ← 1                                    1
  3: while (found = false) and (i ≤ n) do     f(n) + 1
  4:   if x[i] = v then                       f(n)
  5:     found ← true                         g(n)
  6:   else
  7:     i ← i + 1                            h(n)
  8:   end if
  9: end while

  T(n) = 3 + 2f(n) + g(n) + h(n)
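The search above can be run directly (using 0-based indices); f counts executions of the loop body, which is what separates the best case from the worst case.

```python
# Runnable version of Example 4's sequential search.
def search(x, v):
    found, i, f = False, 0, 0
    while not found and i < len(x):
        f += 1              # one execution of the loop body
        if x[i] == v:
            found = True
        else:
            i += 1
    return found, f

print(search([5, 8, 2, 9], 5))  # (True, 1): hit on the first element
print(search([5, 8, 2, 9], 7))  # (False, 4): scans all n elements
```

The best case (key in the first position) does one iteration; the worst case (key absent) does n, matching the best-case/worst-case discussion above.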

  30. Today’s Agenda • Role of the dominant term • Order of growth • Asymptotic analysis

  31. Dominant term or leading term We used some simplifying abstractions to ease our analysis. The main aim of efficiency analysis is to find out how the running time increases when the problem size increases. • Running time is mostly affected by the dominant term. • In most cases we do not require a detailed analysis of running time. • We need only identify the dominant term, which helps to find: • The order of growth of the running time • The efficiency class to which an algorithm belongs

  32. Identify the dominant term
  In the expression of the running time, one of the terms becomes significantly larger than the others when n becomes large: this is the so-called dominant term.

  Running time                        Dominant term
  T1(n) = an + b                      an
  T2(n) = a log n + b                 a log n
  T3(n) = an²                         an²
  T4(n) = aⁿ + bn + c (a > 1)         aⁿ

  33. What is the order of growth? The order of growth expresses how the dominant term of the running time increases with the input size. For example, T4(n) = aⁿ + bn + c (a > 1) has dominant term aⁿ: an exponential order of growth.

  34. Order of growth vs input size Between two algorithms, the one having a smaller order of growth is considered more efficient; this is true only for large enough input sizes. Example: T1(n) = 10n + 10 (linear order of growth), T2(n) = n² (quadratic order of growth). • If n ≤ 10 then T1(n) > T2(n). • In this case the order of growth is relevant only for n > 10. • For larger inputs n, the low-order terms in a function are relatively insignificant: n⁴ + 100n² + 10n + 50 ≈ n⁴
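The crossover claimed above can be checked numerically: the linear T1 exceeds the quadratic T2 up to n = 10 and is smaller from n = 11 onward.

```python
# Numerical check of the slide's example: find the first n where the
# linear function drops below the quadratic one.
def T1(n):
    return 10 * n + 10

def T2(n):
    return n * n

crossover = next(n for n in range(1, 1000) if T1(n) < T2(n))
print(crossover)  # 11
```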

  37. A comparison of the order of growth


  39. Comparing order of growth The order of growth of two running times T1(n) and T2(n) can be compared by computing the limit of T1(n)/T2(n) as n goes to infinity. • If the limit is 0, then T1(n) has a smaller order of growth than T2(n). • If the limit is a finite constant c (c > 0), then T1(n) and T2(n) have the same order of growth. • If the limit is infinity, then T1(n) has a larger order of growth than T2(n).
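The limit test can be sketched by evaluating the ratio for increasingly large n; here the functions T1(n) = 10n + 10 and T2(n) = n² are reused from the earlier example.

```python
# Estimate lim T1(n)/T2(n) numerically for T1 = 10n + 10, T2 = n^2.
def ratio(n):
    return (10 * n + 10) / (n * n)

for n in (10, 1000, 100000):
    print(n, ratio(n))  # the ratio shrinks toward 0
```

Since the ratio tends to 0, T1 has a smaller order of growth than T2, in line with the first case of the test.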

  40. Asymptotic Analysis • When analysing running time, extra precision is often not required. • For large enough inputs, the multiplicative constants and lower-order terms can be ignored. • When we look at input sizes large enough that only the order of growth of the running time is relevant, we are studying the asymptotic efficiency of algorithms. • That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.

  41. Asymptotic Analysis • Asymptotic notation actually applies to functions. • Recall that we characterized the running time of a matrix initialization as an² + bn + c. • By writing that running time as O(n²), we abstract away some details of the function. • When applying asymptotic notation to running times, we need to be clear about which running time is meant. • Sometimes we are interested in the worst-case running time. • Often we wish to characterize the running time no matter what the input is.

  42. Asymptotic Notations • Big O: asymptotically less than: f(n) = O(g(n)) ⇒ f(n) ≤ g(n) • Big Ω: asymptotically greater than: f(n) = Ω(g(n)) ⇒ f(n) ≥ g(n) • Big Θ: asymptotic equality: f(n) = Θ(g(n)) ⇒ f(n) = g(n)

  43. Big-O Notation • We say f(n) = 30n + 8 is order n, or O(n): it is, at most, roughly proportional to n. • We say g(n) = n² + 1 is order n², or O(n²): it is, at most, roughly proportional to n².

  44. Big-O Notation [plot of f(n) and g(n) against n for 0 ≤ n ≤ 100]

  45. Big-O: Examples • 2n² = O(n³): 2n² ≤ cn³ ⇒ 2 ≤ cn, which holds for c = 1 and n₀ = 2 • n² = O(n²): n² ≤ cn² ⇒ 1 ≤ c, which holds for c = 1 and n₀ = 1 • 1000n² + 1000n = O(n²): 1000n² + 1000n ≤ 1001n² for all n ≥ 1000, so c = 1001 and n₀ = 1000
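The witnesses for the last example can be checked by brute force: for c = 1001 and n₀ = 1000, the defining inequality 1000n² + 1000n ≤ cn² must hold for every n ≥ n₀.

```python
# Check the Big-O witnesses c = 1001, n0 = 1000 for 1000n^2 + 1000n = O(n^2).
def holds(n, c=1001):
    return 1000 * n * n + 1000 * n <= c * n * n

print(all(holds(n) for n in range(1000, 5000)))  # True
print(holds(999))                                # False: below n0 it fails
```

The failure at n = 999 illustrates why the definition only demands the inequality from n₀ onward.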

  48. Big-O: Formal Example Show that 30n + 8 is O(n). For this we have to prove that • ∃ c, n₀ : 30n + 8 ≤ cn, ∀ n > n₀ Proof: Let c = 31, n₀ = 8. Assume n > n₀; then 30n + 8 ≤ 30n + n = 31n = cn. • There is no unique set of values for n₀ and c in proving asymptotic bounds. • We must find some constants c and n₀ that satisfy the asymptotic notation relation.
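The chosen constants can likewise be verified numerically over a range of n.

```python
# Check c = 31, n0 = 8 from the proof: for n > 8 we have 8 < n, hence
# 30n + 8 < 30n + n = 31n.
print(all(30 * n + 8 <= 31 * n for n in range(9, 10000)))  # True
```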

  49. Big-Ω Notation Ω(g(n)) is the set of functions with a larger or the same order of growth as g(n)

  50. Example • 5n² = Ω(n): ∃ c, n₀ such that 0 ≤ cn ≤ 5n²; this holds for c = 1 and n₀ = 1. • 100n + 5 ≠ Ω(n²): suppose ∃ c, n₀ such that 0 ≤ cn² ≤ 100n + 5. Since 100n + 5 ≤ 100n + 5n = 105n for all n ≥ 1, we would need cn² ≤ 105n ⇒ n(cn − 105) ≤ 0; since n is positive, cn − 105 ≤ 0 ⇒ n ≤ 105/c. Contradiction: n cannot be bounded by a constant.
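Both claims on this slide can be probed numerically: the first by checking the witness constants, the second by watching cn² overtake 100n + 5 (here with the illustrative choice c = 0.5).

```python
# Witness check for 5n^2 = Omega(n) with c = 1, n0 = 1, plus a numeric hint
# that 100n + 5 eventually falls below c * n^2 for any fixed c > 0.
print(all(1 * n <= 5 * n * n for n in range(1, 1000)))  # True

c = 0.5   # illustrative constant, any c > 0 behaves the same eventually
n = 1000
print(100 * n + 5 >= c * n * n)  # False: n^2 has overtaken 100n + 5
```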
