
Discrete Maths


Presentation Transcript


  1. Discrete Maths 242-213, Semester 2, 2013-2014 • 10. Running Time of Programs • Objective: to describe the Big-Oh notation for estimating the running time of programs

  2. Overview • Running Time • Big-Oh and Approximate Running Time • Big-Oh for Programs • Analyzing Function Calls • Analyzing Recursive Functions • Further Information

  3. 1. Running Time • What is the running time of this program?

    void main() {
        int i, n;
        scanf("%d", &n);
        for (i = 0; i < n; i++)
            printf("%d\n", i);
    }

continued

  4. There is no single answer! • the running time depends on the value of n that is read in • Instead of a time answer in seconds, we want a time answer which is related to the size of the input. continued

  5. For example: • programTime(n) = constant * n • this means that as n gets bigger, so does the program time • running time is linearly related to the input • [Graph: running time = constant * n, plotted against the size of n]
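To see this linear behaviour empirically, here is a minimal timing sketch (an illustration, not part of the original slides) that runs an n-step loop for doubling values of n; the measured time should roughly double each row. The printf from slide 3 is replaced by a cheap addition so that output cost does not dominate:

    #include <stdio.h>
    #include <time.h>

    /* Time an n-step loop; the volatile sink stops the compiler
       from optimizing the loop away entirely. */
    static double time_loop(int n) {
        volatile long sink = 0;
        clock_t start = clock();
        for (int i = 0; i < n; i++)
            sink += i;                 /* stand-in for the printf call */
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void) {
        for (int n = 1000000; n <= 8000000; n *= 2)
            printf("n = %8d  time = %f secs\n", n, time_loop(n));
        return 0;
    }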

  6. Running Time Theory • A program/algorithm has a running time T(n) • n is some measure of the input size • T(n) is the largest amount of time the program takes on any input of size n • Time units are left unspecified. continued

  7. A typical result is: • T(n) = c*n, where c is some constant • but often we just ignore c • this means the program has linear running time • T(n) values for different programs can be used to compare their relative running times • selection sort: T(n) = n^2 • merge sort: T(n) = n log n • so, merge sort is “better” for larger n sizes
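As a quick illustration (not from the slides), this sketch tabulates the two T(n) formulas side by side; even with constants ignored, n^2 dwarfs n log n once n reaches the thousands:

    #include <stdio.h>
    #include <math.h>

    /* Compare the slide's two T(n) formulas for a few input sizes. */
    int main(void) {
        int sizes[] = {10, 100, 1000, 10000};
        printf("%8s %14s %14s\n", "n", "n^2", "n log2 n");
        for (int i = 0; i < 4; i++) {
            double n = sizes[i];
            printf("%8.0f %14.0f %14.0f\n", n, n * n, n * log2(n));
        }
        return 0;   /* n^2 overtakes n log n very quickly */
    }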

  8. 1.1. Different Kinds of Running Time • Usually T(n) is the worst-case running time • the maximum running time on any input of size n • Tavg(n) is the average running time of the program over all inputs of size n • more realistic • very hard to calculate • not considered by us

  9. 1.2. T(n) Example • Loop fragment for finding the index of the smallest value in an array A[] of size n:

    (2) small = 0;
    (3) for (j = 1; j < n; j++)
    (4)     if (A[j] < A[small])
    (5)         small = j;

• Count each assignment and test as 1 “time unit”.

  10. Calculation • The for loop executes n-1 times • each loop iteration carries out (in the worst case) 4 ops • test of j < n, if test, small assign, j increment • total loop time = 4(n-1) • plus 3 ops at start and end • small assign (line 2), init of j (line 3), final j < n test • Total time T(n) = 4(n-1) + 3 = 4n - 1 • running time is linear in the size of the array
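A small self-check (assuming the same 1-unit-per-op accounting as the slide): instrument the slide 9 fragment with a counter and compare against 4n - 1. The array is in decreasing order to force the worst case, where line (5) runs on every iteration:

    #include <stdio.h>

    int main(void) {
        int n = 10, A[10] = {9,8,7,6,5,4,3,2,1,0};
        long ops = 0;
        int small = 0;           ops++;   /* line (2): assignment      */
        int j = 1;               ops++;   /* line (3): init of j       */
        while (ops++, j < n) {            /* line (3): every j < n test */
            ops++;                        /* line (4): if test         */
            if (A[j] < A[small]) {
                small = j;       ops++;   /* line (5): assignment      */
            }
            j++;                 ops++;   /* line (3): increment       */
        }
        printf("ops = %ld, 4n - 1 = %d\n", ops, 4*n - 1);  /* both 39 */
        return 0;
    }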

  11. 1.3. Comparing Different T()’s • Ta(n) = 100n, Tb(n) = 2n^2 • If input size < 50, program B is faster. • But for large n’s, which are more common in real code, program B gets worse and worse. • [Graph: T(n) value (0 to 20000) against input size n (20 to 100); Tb(n) = 2n^2 crosses above Ta(n) = 100n at n = 50]

  12. 1.4. Common Growth Formulae & Names

Formula (n = input size)   Name
n                          linear
n^2                        quadratic
n^3                        cubic
n^m                        polynomial, e.g. n^10
m^n (m >= 2)               exponential, e.g. 5^n
n!                         factorial
1                          constant
log n                      logarithmic
n log n
log log n

  13. 1.5. Execution Times • Assume 1 instruction takes 1 microsec (10^-6 secs) to execute. How long will n instructions take?

growth         n (no. of instructions)
formula T()    3      9       50       100          1000          10^6
n              3us    9us     50us     100us        1ms           1 sec
n^2            9us    81us    2.5ms    10ms         1 sec         12 days
n^3            27us   729us   125ms    1 sec        16.7 min      31,710 yr
2^n            8us    512us   36 yr    4*10^16 yr   3*10^287 yr   3*10^301016 yr
log n          2us    3us     6us      7us          10us          20us

• if n is 50, you will wait 36 years for a 2^n answer!
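The table entries can be re-derived directly from the 1-microsecond assumption; a sketch (values rounded as in the table):

    #include <stdio.h>
    #include <math.h>

    /* Each "instruction" costs 1 microsec, so a formula value
       of f(n) costs f(n) * 1e-6 seconds. */
    int main(void) {
        const double SECS_PER_YEAR = 3600.0 * 24 * 365;
        double n = 50;
        printf("n^2 at n=50: %.4f secs\n", n * n * 1e-6);       /* 2.5 ms */
        printf("n^3 at n=50: %.3f secs\n", n * n * n * 1e-6);   /* 125 ms */
        printf("2^n at n=50: %.0f years\n",
               pow(2, n) * 1e-6 / SECS_PER_YEAR);               /* ~36 yr */
        return 0;
    }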

  14. Notes • Logarithmic running times are best. • Polynomial running times are acceptable, if the power isn’t too big • e.g. n^2 is ok, n^100 is terrible • Exponential times mean sloooooooow code. • some problem sizes may take longer to finish than the lifetime of the universe!

  15. 1.6. Why use T(n)? • T() can guide our choice of which algorithm to implement, or program to use • e.g. selection sort or merge sort? • T() helps us look for better algorithms in our own code, without expensive implementation, testing, and measurement.

  16. 2. Big-Oh and Approximate Running Time • Big-Oh mathematical notation simplifies the process of estimating the running time of programs • it uses T(n), but ignores constant factors which depend on compiler/machine behaviour continued

  17. The Big-Oh value specifies running time independent of: • machine architecture • e.g. don’t consider the running speed of individual machine operations • machine load (usage) • e.g. time delays due to other users • compiler design effects • e.g. gcc versus Borland C

  18. Example • In the code fragment example on slide 9, we assumed that assignment and testing take 1 “time unit”. This means: T(n) = 4n - 1 • The Big-Oh value, O(), uses the T(n) value but ignores constants (which will actually vary from machine to machine). This means: T(n) is O(n) • we say "T(n) is order n"

  19. More Examples

T(n) value             Big-Oh value O()
10n^2 + 50n + 100      O(n^2)
(n+1)^2                O(n^2)
n^10                   O(2^n)   (hard to understand)
5n^3 + 1               O(n^3)

• These simplifications have a mathematical reason, which is explained in section 2.2.

  20. 2.1. Is Big-Oh Useful? • O() ignores constant factors, which means it is a more reliable measure across platforms/compilers. • It can be compared with Big-Oh values for other algorithms. • i.e. linear is better than polynomial and exponential, but worse than logarithmic

  21. 2.2. Definition of Big-Oh • The connection between T() and O() is: • when T(n) is O( f(n) ), it means that f(n) is the most important thing in T() when n is large • More formally, for some integer n0 and constant c > 0: • T(n) is O( f(n) ) if, for all integers n >= n0, T(n) <= c*f(n) • n0 and c are called witnesses to the relationship: T(n) is O( f(n) )

  22. Example 1 • T(n) = 10n^2 + 50n + 100 • which allows that T(n) is O(n^2) • informally, the n^2 part is the most important thing in the T() function • Why? • Witnesses: n0 = 1, c = 160 • then T(n) <= c*f(n) for n >= 1, so 10n^2 + 50n + 100 <= 160n^2, since 10n^2 + 50n + 100 <= 10n^2 + 50n^2 + 100n^2 <= 160n^2
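The witnesses can also be spot-checked numerically. Below is a hypothetical helper, check_witnesses() (not from the slides), that tests T(n) <= c*f(n) for every integer n from n0 up to some limit — a finite check that supports, but does not replace, the algebra above:

    #include <stdio.h>

    typedef double (*formula)(double);

    /* Finite spot-check of the witness condition T(n) <= c*f(n)
       for n0 <= n <= limit -- evidence, not a proof. */
    static int check_witnesses(formula T, formula f,
                               double n0, double c, double limit) {
        for (double n = n0; n <= limit; n++)
            if (T(n) > c * f(n)) return 0;   /* counterexample found */
        return 1;
    }

    static double T1(double n) { return 10*n*n + 50*n + 100; }
    static double f1(double n) { return n*n; }

    int main(void) {
        printf("witnesses n0 = 1, c = 160 hold up to n = 100000: %s\n",
               check_witnesses(T1, f1, 1, 160, 100000) ? "yes" : "no");
        return 0;
    }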

  23. Example 2 • T(n) = (n+1)^2 • which allows that T(n) is O(n^2) • Why? • Witnesses: n0 = 1, c = 4 • then T(n) <= c*f(n) for n >= 1, so (n+1)^2 <= 4n^2, since n^2 + 2n + 1 <= n^2 + 2n^2 + n^2 <= 4n^2

  24. Example 3 • T(n) = n^10 • which allows that T(n) is O(2^n) • Why? • Witnesses: n0 = 64, c = 1 • then T(n) <= c*f(n) for n >= 64, so n^10 <= 2^n, since 10*log2 n <= n (by taking log2 of both sides), which is true when n >= 64 (10*log2 64 == 10*6 = 60, and 60 <= 64)
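The same kind of numeric spot-check works for this example too (a sketch; doubles are wide enough here, since 2^1000 is only about 10^301):

    #include <stdio.h>
    #include <math.h>

    /* Spot-check Example 3's witnesses: n^10 <= 1 * 2^n for n >= 64
       (finite numeric check over 64..1000, not a proof). */
    int main(void) {
        for (double n = 64; n <= 1000; n++)
            if (pow(n, 10) > pow(2, n)) {
                printf("fails at n = %.0f\n", n);
                return 1;
            }
        printf("n^10 <= 2^n holds for all integers 64..1000\n");
        return 0;
    }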

  25. 2.4. Some Observations about O() • When choosing an O() approximation to T(), remember that: • constant factors do not matter • e.g. T(n) = (n+1)^2 is O(n^2) • low-order terms do not matter • e.g. T(n) = 10n^2 + 50n + 100 is O(n^2) • there are many possible witnesses

  26. 3. Big-Oh for Programs • First decide on a size measure for the data in the program. This will become the n.

Data Type    Possible Size Measure
integer      its value
string       its length
array        its length

  27. 3.1. Building a Big-Oh Result • The Big-Oh value for a program is built up inductively by: • 1) Calculate the Big-Oh’s for all the simple statements in the program • e.g. assignment, arithmetic • 2) Then use those values to obtain the Big-Oh’s for the complex statements • e.g. blocks, for loops, if-statements

  28. Simple Statements (in C) • We assume that simple statements always take a constant amount of time to execute • written as O(1) • Kinds of simple statements: • assignment, break, continue, return, all library functions (e.g. putchar(), scanf()), arithmetic, boolean tests, array indexing

  29. Complex Statements • The Big-Oh value for a complex statement is a combination of the Big-Oh values of its component simple statements. • Kinds of complex statements: • blocks { ... } • conditionals: if-then-else, switch • loops: for, while, do-while continued

  30. 3.2. Structure Trees • The easiest way to see how complex statement timings are based on simple statements (and other complex statements) is by drawing a structure tree for the program.

  31. Example: binary conversion

        void main() {
            int i;
    (1)     scanf("%d", &i);
    (2)     while (i > 0) {
    (3)         putchar('0' + i%2);
    (4)         i = i/2;
            }
    (5)     putchar('\n');
        }

  32. Structure Tree for Example

    block(1-5)
     ├─ (1)
     ├─ while(2-4)
     │   └─ block(3-4)
     │       ├─ (3)
     │       └─ (4)
     └─ (5)

• the time for block(3-4) is the time for (3) + (4)

  33. 3.3. Details for Complex Statements • Blocks: Running time bound = summation of the bounds of its parts. • The summation rule means that only the largest Big-Oh value is considered. "summation" means 'add'

  34. Block Calculation Graphically • A block of statements with bounds O( f1(n) ), O( f2(n) ), ..., O( fk(n) ) has, by the summation rule, the bound O( f1(n) + f2(n) + ... + fk(n) ) • In other words: O( largest fi(n) )

  35. Block Summation Rule Example • First block's time T1(n) = O(n^2) • Second block's time T2(n) = O(n) • Total running time = O(n^2 + n) = O(n^2), the largest part
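A hypothetical fragment (names invented for illustration) with exactly this shape — an O(n^2) block followed by an O(n) block:

    /* Block 1 is O(n^2), block 2 is O(n), so by the summation rule
       the function as a whole is O(n^2 + n) = O(n^2).
       (Assumes n <= 100, the declared column size.) */
    void zero_then_mark(int n, int A[][100], int B[])
    {
        int i, j;
        for (i = 0; i < n; i++)        /* block 1: O(n^2) */
            for (j = 0; j < n; j++)
                A[i][j] = 0;

        for (i = 0; i < n; i++)        /* block 2: O(n) */
            B[i] = 1;
    }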

  36. Conditionals e.g. if statements, switches • Conditionals: Running time bound = the cost of the if-test + larger of the bounds for the if- and else- parts • When the if-test is a simple statement (a boolean test), it is O(1).

  37. Conditional Graphically • Test: O(1); IfPart: O( f1(n) ); ElsePart: O( f2(n) ) • Altogether: O( max( f1(n), f2(n) ) + 1 ), which is the same as O( max( f1(n), f2(n) ) )

  38. If Example • Code fragment:

    if (x < y)      // O(1)
        foo(x);     // O(n)
    else
        bar(y);     // O(n^2)

• Total running time = O( max(n, n^2) + 1 ) = O(n^2 + 1) = O(n^2)

  39. Loops • Loops: Running time bound is usually = the max. number of times round the loop * the time to execute the loop body once • But we must include O(1) for the increment and test each time around the loop. • Must also include the initialization and final test costs (both O(1)).

  40. While Graphically • Test: O(1), executed at most g(n) times around; Body: O( f(n) ) • Altogether this is O( g(n)*(f(n)+1) + 1 ), which can be simplified to O( g(n)*f(n) )

  41. While Loop Example • Code fragment:

    x = 0;
    while (x < n) {    // O(1) for test
        foo(x, n);     // O(n^2)
        x++;           // O(1)
    }

• Total running time of loop = O( n*(1 + n^2 + 1) + 1 ) = O(n^3 + 2n + 1) = O(n^3)

  42. For-loop Graphically • Initialize: O(1); Test: O(1), at most g(n) times around; Body: O( f(n) ); Increment: O(1) • Altogether: O( g(n)*(f(n)+1+1) + 1 ), which can be simplified to O( g(n)*f(n) )

  43. For Loop Example • Code Fragment:

    for (i = 0; i < n; i++)
        foo(i, n);    // O(n^2)

• It helps to rewrite this as a while loop:

    i = 0;             // O(1)
    while (i < n) {    // O(1) for test
        foo(i, n);     // O(n^2)
        i++;           // O(1)
    }

continued

  44. Running time for the for loop: = O( 1 + n*(1 + n^2 + 1) + 1 ) = O( 2 + n^3 + 2n ) = O(n^3)

  45. 3.4.1. Example: nested loops

    (1) for (i = 0; i < n; i++)
    (2)     for (j = 0; j < n; j++)
    (3)         A[i][j] = 0;

• line (3) is a simple op - takes O(1) • line (2) is a loop carried out n times • takes O(n * 1) = O(n) • line (1) is a loop carried out n times • takes O(n * n) = O(n^2)
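A quick empirical check of the O(n^2) claim (a sketch, with n fixed at 100): count how many times line (3) executes:

    #include <stdio.h>

    int main(void) {
        enum { n = 100 };
        static int A[n][n];
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                A[i][j] = 0;
                count++;          /* one count per execution of line (3) */
            }
        printf("line (3) ran %ld times = n*n = %d\n", count, n * n);
        return 0;
    }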

  46. 3.4.2. Example: if statement

    (1) if (A[0][0] == 0) {
    (2)     for (i = 0; i < n; i++)
    (3)         for (j = 0; j < n; j++)
    (4)             A[i][j] = 0;
        }
    (5) else {
    (6)     for (i = 0; i < n; i++)
    (7)         A[i][i] = 1;
        }

continued

  47. The if-test takes O(1); the if block takes O(n^2); the else block takes O(n). • Total running time: = O(1) + O( max(n^2, n) ) = O(1) + O(n^2) = O(n^2) // using the summation rule

  48. 3.4.3. Time for a Binary Conversion

        void main() {
            int i;
    (1)     scanf("%d", &i);
    (2)     while (i > 0) {
    (3)         putchar('0' + i%2);
    (4)         i = i/2;
            }
    (5)     putchar('\n');
        }

continued

  49. Lines 1, 2, 3, 4, 5: each O(1) • Block of 3-4 is O(1) + O(1) = O(1) • While of 2-4 loops at most (log2 i)+1 times (why? see the next slide) • total running time = O( 1 * ((log2 i)+1) ) = O(log2 i) • Block of 1-5: = O(1) + O(log2 i) + O(1) = O(log2 i)

  50. Why (log2 i)+1 ? • Assume i = 2^k • Start of 1st iteration, i = 2^k • Start of 2nd iteration, i = 2^(k-1) • Start of 3rd iteration, i = 2^(k-2) • ... • Start of kth iteration, i = 2^(k-(k-1)) = 2^1 = 2 • Start of (k+1)th iteration, i = 2^(k-k) = 2^0 = 1 • the while will terminate after this iteration • Since 2^k = i, k = log2 i • So k+1, the no. of iterations, = (log2 i)+1
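For general i (not just exact powers of two) the loop runs floor(log2 i) + 1 times; a sketch that checks this by counting iterations of the slide 31 loop (output call removed):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        int tests[] = {1, 2, 8, 100, 1024, 1000000};
        for (int t = 0; t < 6; t++) {
            int i = tests[t], iters = 0;
            while (i > 0) { i = i / 2; iters++; }   /* the slide-31 loop */
            printf("i = %7d  iterations = %2d  floor(log2 i)+1 = %2d\n",
                   tests[t], iters, (int)floor(log2(tests[t])) + 1);
        }
        return 0;
    }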
