Discrete Structures CISC 2315 Growth of Functions
Introduction
• Once an algorithm for a problem has been given and shown to be correct, it is important to determine how many resources, such as time or space, the algorithm will require.
• We focus mainly on time in this course.
• Such an analysis will often allow us to improve our algorithms.
Running Time Analysis
[Diagram: Algorithm 1 processes N data items and returns in time T1(N); Algorithm 2 processes the same N data items and returns in time T2(N).]
Analysis of Running Time (which algorithm is better?)
[Plot: running time T(N) versus number of input items N for Algorithm 1 and Algorithm 2.]
Comparing Algorithms
• We need a theoretical framework upon which we can compare algorithms.
• The idea is to establish a relative order among different algorithms, in terms of their relative rates of growth. The rates of growth are expressed as functions, which are generally in terms of the number/size of inputs N.
• We also want to ignore details, e.g., the rate of growth is N^2 rather than 3N^2 − 5N + 2.
NOTE: The text uses the variable x. We use N here.
Big-O Notation
• Definition: T(N) = O(f(N)) if there are positive constants c and n0 such that T(N) ≤ c·f(N) whenever N ≥ n0.
• This says that the function T(N) grows at a rate no faster than f(N); thus c·f(N) is an upper bound on T(N).
NOTE: We use |T(N)| ≤ c|f(N)| if the functions could be negative. In this class, we will assume they are positive unless stated otherwise.
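A minimal Python sketch of how a claimed witness pair (c, n0) can be checked empirically; is_big_o_witness is an illustrative helper name, and a finite check can only support, never prove, the bound:

```python
# Sanity-check a proposed Big-O witness pair (c, n0) on a finite range.
# A finite check cannot prove T(N) = O(f(N)); it can only expose bad witnesses.

def is_big_o_witness(T, f, c, n0, n_max=10_000):
    """Return True if T(N) <= c*f(N) for all n0 <= N <= n_max."""
    return all(T(N) <= c * f(N) for N in range(n0, n_max + 1))

# Example: 1000N = O(N^2) with c = 1, n0 = 1000.
T = lambda N: 1000 * N
f = lambda N: N * N
print(is_big_o_witness(T, f, c=1, n0=1000))  # True on the tested range
```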
Big-O Upper Bound
[Plot: for N ≥ n0, the curve c·f(N) lies above the running time T(N); axes are running time T(N) versus number of input items N.]
Big-O Example
• Prove that 1000N = O(N^2).
• Since 1000N ≤ N^2 when N ≥ 1000,
• Then 1000N = O(N^2) (c = 1, n0 = 1000).
• We could also prove that 1000N = O(N^3), but the first upper bound is tighter (lower).
• Note that N^2 ≠ O(N). Why? How do you show something is notO(something)? See the sketch below.
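A worked sketch of how to show something is not O(something), using N^2 and N as an illustrative pair:

```latex
% Showing N^2 \ne O(N), by contradiction:
% assume constants c, n_0 exist with
N^2 \le c\,N \quad \text{for all } N \ge n_0.
% Dividing both sides by N > 0 gives
N \le c \quad \text{for all } N \ge n_0,
% which fails as soon as N > \max(c, n_0); hence N^2 \ne O(N).
```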
Another Big-O Example
• Prove that N^2 + 100 = O(N^2).
• Since N^2 + 100 ≤ 2N^2 when N ≥ 10,
• Then N^2 + 100 = O(N^2) (c = 2, n0 = 10).
• We could also prove that N^2 + 100 = O(N^3), but the first bound is tighter.
• Note that N^3 ≠ O(N^2). Why?
Why Big-O?
• It gets very complicated comparing the time complexity of algorithms when you have all the details. Big-O gets rid of the details and focuses on the most important part of the comparison: the rate of growth.
• Note that Big-O gives an upper bound, and is typically used for worst-case analysis.
Same-Order Functions
• Let f(N) = N^2 + 2N + 1
• Let g(N) = N^2
• g(N) = O(f(N)) because N^2 ≤ N^2 + 2N + 1.
• f(N) = O(g(N)) because:
• N^2 + 2N + 1 ≤ N^2 + 2N^2 + N^2 = 4N^2 when N ≥ 1, which is O(N^2) with c = 4 and n0 = 1.
• In this case we say that f(N) and g(N) are of the same order.
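As an illustrative check, the ratio of two same-order functions settles toward a constant; a minimal Python sketch using the f and g above:

```python
# Illustrative check that f and g are of the same order:
# the ratio f(N)/g(N) approaches a constant (here 1) as N grows.

f = lambda N: N**2 + 2*N + 1
g = lambda N: N**2

for N in (10, 100, 1000, 10_000):
    print(N, f(N) / g(N))
# The ratio approaches 1.0, consistent with f = O(g) and g = O(f).
```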
Simplifying the Big-O
• Theorem 1 (in text):
• Let f(N) = a_n N^n + a_(n−1) N^(n−1) + … + a_1 N + a_0.
• Then f(N) is O(N^n).
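A short proof sketch of Theorem 1, following the standard triangle-inequality argument:

```latex
% Proof sketch of Theorem 1 (for N \ge 1):
|f(N)| \le |a_n|N^n + |a_{n-1}|N^{n-1} + \cdots + |a_0|
       \le \bigl(|a_n| + |a_{n-1}| + \cdots + |a_0|\bigr)\,N^n,
% so f(N) = O(N^n) with c = |a_n| + \cdots + |a_0| and n_0 = 1.
```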
Another Big-O Example
• Recall that N! = N·(N−1)·…·3·2·1 when N is a positive integer, and 0! = 1.
• Then N! = N·(N−1)·…·3·2·1 < N·N·N·…·N = N^N for N ≥ 2.
• Therefore, N! = O(N^N). But this is not all we know…
• Taking the log of both sides, log N! < log N^N = N log N. Therefore log N! is O(N log N). (c = 1, n0 = 2)
Recall that we assume logs are base 2.
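A quick numerical illustration of this bound; math.lgamma(N + 1) computes ln N!, converted to base 2 below:

```python
import math

# Compare log2(N!) with N*log2(N) to illustrate log N! = O(N log N).
# math.lgamma(N + 1) = ln(N!); dividing by ln 2 converts to base 2.
for N in (4, 16, 64, 256):
    log2_factorial = math.lgamma(N + 1) / math.log(2)
    print(N, round(log2_factorial, 1), round(N * math.log2(N), 1))
# log2(N!) stays below N*log2(N), as the bound predicts.
```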
Another Big-O Example
• In Section 3.2 it will be shown that N < 2^N.
• Taking the log of both sides, log N < N.
• Therefore log N = O(N). (c = 1 and n0 = 1)
Complexity Terminology
• O(1) Constant complexity
• O(log N) Logarithmic complexity
• O(N) Linear complexity
• O(N log N) N log N complexity
• O(N^b) Polynomial complexity
• O(b^N), b > 1 Exponential complexity
• O(N!) Factorial complexity
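To make the terminology concrete, here is an added sketch pairing several of these classes with textbook Python routines (the routines are standard illustrations, chosen here):

```python
# Illustrative routines for some of the complexity classes above.

def first_element(items):
    # O(1): constant, independent of len(items)
    return items[0]

def binary_search(sorted_items, target):
    # O(log N): the search range is halved on every iteration
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def total(items):
    # O(N): touches each item exactly once
    s = 0
    for x in items:
        s += x
    return s

def all_pairs(items):
    # O(N^2): nested iteration over all pairs
    return [(a, b) for a in items for b in items]
```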
Growth of Combinations of Functions
• Sum Rule: Suppose f1(N) is O(g1(N)) and f2(N) is O(g2(N)). Then (f1 + f2)(N) is O(max(g1(N), g2(N))).
• Product Rule: Suppose that f1(N) is O(g1(N)) and f2(N) is O(g2(N)). Then (f1·f2)(N) is O(g1(N)·g2(N)).
Growth of Combinations of Functions
[Diagram: Subprocedure 1, O(g1(N)), followed by Subprocedure 2, O(g2(N)).]
• What is the big-O time complexity of running the two subprocedures sequentially?
• Theorem 2: If f1(N) is O(g1(N)) and f2(N) is O(g2(N)), then (f1 + f2)(N) = O(max(g1(N), g2(N))).
Growth of Combinations of Functions
• What about M not equal to N?
[Diagram: Subprocedure 1, O(g1(N)), followed by Subprocedure 2, O(g2(M)).]
• What is the big-O time complexity of running the two subprocedures sequentially?
• Theorem 2: If f1(N) is O(g1(N)) and f2(M) is O(g2(M)), then f1(N) + f2(M) = O(g1(N) + g2(M)).
Growth of Combinations of Functions
[Diagram: nested loops]
for i = 1 to 2N do
  for j = 1 to N do
    a_ij := 1
• The inner loop takes N steps, and the outer loop runs 2N times, so the total is 2N · N = 2N^2 steps.
• What is the big-O time complexity for nested loops?
• Theorem 3: If f1(N) is O(g1(N)) and f2(N) is O(g2(N)), then (f1·f2)(N) = O(g1(N)·g2(N)).
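A runnable Python version of the nested-loop example, added here with an explicit step counter just to confirm the 2N^2 count:

```python
# Nested-loop example from the slide, with an explicit step counter.
def fill_matrix(N):
    steps = 0
    a = [[0] * N for _ in range(2 * N)]
    for i in range(2 * N):        # outer loop: 2N iterations
        for j in range(N):        # inner loop: N iterations each
            a[i][j] = 1
            steps += 1
    return steps

print(fill_matrix(10))  # 200 = 2 * 10^2, matching the 2N^2 step count
```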
Growth of Combinations of Functions: Example 1
• Give a big-O estimate for f(N) = 3N log(N!) + (N^2 + 3) log N, where N is a positive integer.
• Solution:
• First estimate 3N log(N!). Since we know log(N!) is O(N log N), and 3N is O(N), then from the Product Rule we conclude 3N log(N!) = O(N^2 log N).
• Next estimate (N^2 + 3) log N. Since N^2 + 3 < 2N^2 when N > 2, N^2 + 3 is O(N^2). From the Product Rule, we have (N^2 + 3) log N = O(N^2 log N).
• Using the Sum Rule to combine these estimates, f(N) = 3N log(N!) + (N^2 + 3) log N is O(N^2 log N).
Growth of Combinations of Functions: Example 2
• Give a big-O estimate for f(N) = (N + 1) log(N^2 + 1) + 3N^2.
• Solution:
• First estimate (N + 1) log(N^2 + 1). We know (N + 1) is O(N). Also, N^2 + 1 ≤ 2N^2 when N ≥ 1. Therefore:
• log(N^2 + 1) ≤ log(2N^2) = log 2 + log N^2 = log 2 + 2 log N ≤ 3 log N, if N ≥ 2. We conclude log(N^2 + 1) = O(log N).
• From the Product Rule, we have (N + 1) log(N^2 + 1) = O(N log N). Also, we know 3N^2 = O(N^2).
• Using the Sum Rule to combine these estimates, f(N) = O(max(N log N, N^2)). Since N log N ≤ N^2 for N ≥ 1, f(N) = O(N^2).
Big-Omega Notation
• Definition: T(N) = Ω(f(N)) if there are positive constants c and n0 such that T(N) ≥ c·f(N) whenever N ≥ n0.
• This says that the function T(N) grows at a rate no slower than f(N); thus c·f(N) is a lower bound on T(N).
Big-Omega Lower Bound
[Plot: for N ≥ n0, the running time T(N) lies above c·f(N); axes are running time T(N) versus number of input items N.]
Big-Omega Example
• Prove that N^3 = Ω(N^2).
• Since N^3 ≥ N^2 when N ≥ 1,
• Then N^3 = Ω(N^2) (c = 1, n0 = 1).
• We could also prove that N^3 = Ω(N), but the first lower bound is tighter (higher).
• Note that N^2 ≠ Ω(N^3).
Another Big-Omega Example
• Prove that 2N^2 + N = Ω(N^2).
• Since 2N^2 + N ≥ N^2 when N ≥ 1,
• Then 2N^2 + N = Ω(N^2) (c = 1, n0 = 1).
• We could also prove that 2N^2 + N = Ω(N), but the first bound is tighter.
• Note that N ≠ Ω(N^2).
Big-Theta Notation
• Definition: T(N) = θ(f(N)) if and only if T(N) = O(f(N)) and T(N) = Ω(f(N)).
• This says that the function T(N) grows at the same rate as f(N).
• Put another way: We say that T(N) is of order f(N).
Big-Theta Example
• Show that 3N^2 + 8N log N is θ(N^2).
• 0 ≤ 8N log N ≤ 8N^2, and so 3N^2 + 8N log N ≤ 11N^2 for N ≥ 1.
• Therefore 3N^2 + 8N log N is O(N^2).
• Clearly 3N^2 + 8N log N is Ω(N^2), since 3N^2 + 8N log N ≥ 3N^2.
• We conclude that 3N^2 + 8N log N is θ(N^2).
A Hierarchy of Growth Rates
• 1 < log N < N < N log N < N^2 < N^3 < … < 2^N < N!
• If f(N) is O(x) for x one of the above, then it is O(y) for any y > x in the above ordering. But the higher bounds are not tight. We prefer tighter bounds.
• Note that if the hierarchy states that x < y, then obviously for any expression z > 0, z·x < z·y.
Complexity of Algorithms vs. Problems
• We have been talking about polynomial, linear, logarithmic, or exponential time complexity of algorithms.
• But we can also talk about the time complexity of problems. Example decision problems:
• Let a, b, and c be positive integers. Is there a positive integer x < c such that x^2 ≡ a (mod b)?
• Does there exist a truth assignment for all variables that satisfies a given logical expression?
Complexity of Problems
• A problem that can be solved by a deterministic algorithm with polynomial (or better) worst-case time complexity is called tractable. Problems that are not tractable are intractable.
• P is the set of all problems solvable in polynomial time (the tractable problems).
• NP is the set of all problems whose solutions can be checked in polynomial time; for many of them, no deterministic polynomial-time algorithm is known. P = NP?
Complexity of Problems (cont’d)
• Unsolvable problem: A problem that cannot be solved by any algorithm. Example:
• Halting Problem: given a program and an input, will the program halt on that input?
• If we try running the program on the input and it keeps running, how do we know if it will ever stop?
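The classic argument that no halting decider can exist can be sketched in code; halts below is a hypothetical function assumed, for contradiction, to decide halting (the stub body exists only so the sketch is executable):

```python
# Sketch of Turing's diagonalization argument (illustration only).

def halts(program, data):
    # Hypothetical halting decider. Any concrete implementation must be
    # wrong on some input; this stub merely makes the sketch executable.
    return True

def paradox(program):
    if halts(program, program):   # if the decider says "halts" ...
        while True:               # ... loop forever instead
            pass
    # ... otherwise halt immediately

# paradox(paradox) halts if and only if halts() says it does not --
# a contradiction, so no correct halts() can exist.
```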