Discrete Math, CS 2800 Prof. Bart Selman selman@cs.cornell.edu Module: Algorithms and Growth Rates
The algorithm problem A specification of all legal inputs, and a specification of the desired output as a function of the input: Any legal input → The algorithm → The desired output
Examples of algorithmic problems Problem 1: Input: a list L of integers. Output: the sum of the integers on L. Problem 2: Input: two texts A and B in English. Output: the list of words common to both texts. Problem 3: Input: a road map of cities with distances attached to the roads, and two designated cities A and B. Output: a description of the shortest path between A and B.
Instance of an algorithmic problem; size of an instance • An instance of an algorithmic problem is a concrete case of that problem with a specific input. The size of an instance is given by the size of its input. • Example: an instance of Problem 1 (Input: a list L of integers; Output: the sum of the integers on L) is L = 2, 5, 26, 8, 170, 79, 1002. The size of the instance is the length of the list: |L| = 7. • We use a "natural" measure of input size. Why is this generally OK? Strictly speaking, we should count bits.
Examples of instances Problem 3: Input: a road map of cities with distances attached to the roads, and two designated cities A and B. Output: a description of the shortest path between A and B. Size of an instance: the number of cities and roads. [Figure: a particular instance, a graph with 6 nodes and 9 weighted edges.] The size of an instance is given by the size of its input.
Algorithm • Definition: an algorithm is a finite set of precise instructions for performing a computation or for solving a problem. • In general we describe algorithms using pseudocode: a language that is an intermediate step between an English-language description of an algorithm and an implementation of that algorithm in a programming language.
Properties of an Algorithm • Input: an algorithm has input values from a specified set. • Output: for each set of input values, an algorithm produces output values from a specified set. The output values are the solution of the problem. • Definiteness: the steps of an algorithm must be defined precisely. • Correctness: an algorithm should produce the correct output values for each set of input values. • Finiteness: an algorithm should produce the desired output after a finite (but perhaps large) number of steps for any input in the set. • Effectiveness: it must be possible to perform each step of an algorithm exactly and in a finite amount of time. • Generality: the procedure should be applicable to all problems of the desired form, not just a particular set of input values. Note the distinction between a "problem" and a "problem instance"; quite confusing for folks outside CS. An algorithm should work for all instances!
Algorithm: Finding the Maximum Element in a Finite Sequence • procedure max(a1, a2, …, an: integers) • max := a1 • for i := 2 to n •  if max < ai then max := ai • {max is the largest element}
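The pseudocode above translates almost line for line into a runnable sketch (Python here, with a 0-indexed list standing in for a1, …, an):

```python
def find_max(a):
    """Return the largest element of a non-empty list of integers."""
    maximum = a[0]            # max := a1
    for x in a[1:]:           # for i := 2 to n
        if maximum < x:       # if max < ai then max := ai
            maximum = x
    return maximum            # {max is the largest element}
```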
Computer Programming Algorithm → (programming, by the human programmer) → program in a high-level language (C, Java, etc.) → (compilation, by the compiler software) → equivalent program in assembly language → equivalent program in machine code → (execution, by the computer)
Searching Algorithms • The searching problem: locating an element in an ordered list. • Example: searching for a word in a dictionary.
Algorithm: The Linear Search Algorithm • procedure linear search(x: integer, a1, a2, …, an: distinct integers) • i := 1 • while (i ≤ n and x ≠ ai) •  i := i + 1 • if i ≤ n then location := i • else location := 0 • {location is the subscript of the term that equals x, or is 0 if x is not found}
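A minimal Python sketch of the same procedure (0-indexed list internally, but returning the 1-based location exactly as the pseudocode does):

```python
def linear_search(x, a):
    """Return the 1-based position of x in list a, or 0 if x is not found."""
    i = 1
    while i <= len(a) and x != a[i - 1]:   # while (i <= n and x != ai)
        i += 1
    return i if i <= len(a) else 0         # location := i, or 0 if not found
```

Note the final test must be i ≤ n, not i < n; otherwise a match in the last position would be reported as "not found".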
Binary search • To search for 19 in the list • 1 2 3 5 6 7 8 10 12 13 15 16 18 19 20 22 • First split: • 1 2 3 5 6 7 8 10 12 13 15 16 18 19 20 22 • Second split: • 12 13 15 16 18 19 20 22 • Third split: • 18 19 20 22 • 19 is located as the 14th item.
Algorithm: The Binary Search Algorithm • procedure binary search(x: integer, a1, a2, …, an: increasing integers) • i := 1 {i is left endpoint of search interval} • j := n {j is right endpoint of search interval} • while i < j • begin •  m := ⌊(i + j)/2⌋ •  if x > am then i := m + 1 •  else j := m • end • if x = ai then location := i • else location := 0 • {location is the subscript of the term that equals x, or is 0 if x is not found}
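A Python sketch of the pseudocode, which reproduces the worked example from the previous slide (finding 19 at position 14):

```python
def binary_search(x, a):
    """Return the 1-based position of x in the sorted list a, or 0 if absent."""
    i, j = 1, len(a)              # left and right endpoints of the search interval
    while i < j:
        m = (i + j) // 2          # m := floor((i + j)/2)
        if x > a[m - 1]:
            i = m + 1             # x lies in the right half
        else:
            j = m                 # x lies in the left half (or at m)
    return i if a and x == a[i - 1] else 0
```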
• Just because we know how to solve a given problem (we have an algorithm) does not mean that the problem can be solved in practice. • The procedure (algorithm) may be so inefficient that it would not be possible to solve the problem within a useful period of time. • So we would like to have an idea of the "complexity" of our algorithm.
Complexity of Algorithms • The complexity of an algorithm is the number of steps it takes to transform the input data into the desired output. • Each simple operation (+, -, *, /, =, if, etc.) and each memory access corresponds to a step. (*) • The complexity of an algorithm is a function of the size of the input (or size of the instance). We'll denote the complexity of algorithm A by CA(n), where n is the size of the input. What does this mean for the complexity of, say, chess? Complexity: CA(n) = O(1). Two issues: (1) fixed input size, and (2) each memory access counts as just one step. So the model/definition is not always "useful"! (*) This model is a simplification but is still valid to give us a good idea of the complexity of algorithms.
Example: Insertion Sort (from Introduction to Algorithms, Cormen et al.)
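The slide points to the figure from Cormen et al.; as a hedged reference sketch, insertion sort in Python:

```python
def insertion_sort(a):
    """Sort the list a in place and return it (insertion sort)."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # shift elements larger than key right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                 # insert key into its sorted position
    return a
```

The inner while loop does the most work on reverse-sorted input and no work at all on sorted input, which makes this a natural example for the worst-case/best-case distinction discussed next.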
Different notions of complexity Worst-case complexity of an algorithm A: the maximum number of computational steps required for the execution of algorithm A over all inputs of the same size, s. It provides an upper bound for the algorithm: the worst that can happen given the most difficult instance. The pessimistic view. Best-case complexity of an algorithm A: the minimum number of computational steps required for the execution of algorithm A over all inputs of the same size, s. The most optimistic view of an algorithm: it tells us the least work the algorithm could possibly get away with for some one input of a fixed size; we have the chance to pick the easiest input of a given size. Linear search: worst cost? Best cost?
• Average-case complexity of an algorithm A: the average amount of resources the algorithm consumes, assuming some plausible frequency of occurrence of each input. • Figuring out the average cost is much more difficult than figuring out either the worst cost or the best cost; e.g., we have to assume a given probability distribution over the types of inputs we get. Practical difficulty: what is the distribution of "real-world" problem instances?
Different notions of complexity In general, worst-case complexity is the notion we use to characterize the complexity of algorithms: we perform upper-bound analysis on algorithms.
Algorithm "Good Morning" • For I = 1 to n •  For J = I+1 to n •   ShakeHands(student(I), student(J)) Running time of "Good Morning": Time = (# of handshakes) × (time per handshake) + some overhead. We want an expression for T(n), the running time of "Good Morning" on input of size n.
Growth Rates • Algorithm "Good Morning" • For I = 1 to n •  For J = I+1 to n •   ShakeHands(student(I), student(J)) How many handshakes? (n² − n)/2
Growth Rates • Algorithm "Good Morning" • For I = 1 to n •  For J = I+1 to n •   ShakeHands(student(I), student(J)) T(n) = s(n² − n)/2 + t, where s is the time for one handshake and t is the time for getting organized. But do we always characterize the complexity of algorithms in such detail? What is the most important aspect that we care about?
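The handshake count can be checked directly by running the two nested loops and comparing against the closed form (n² − n)/2:

```python
def handshakes(n):
    """Count the ShakeHands calls made by "Good Morning" for n students."""
    count = 0
    for i in range(1, n + 1):            # For I = 1 to n
        for j in range(i + 1, n + 1):    # For J = I+1 to n
            count += 1                   # one ShakeHands(student(i), student(j))
    return count

# the loop count agrees with the closed form (n^2 - n)/2 for every n
assert all(handshakes(n) == (n * n - n) // 2 for n in range(50))
```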
Comparing algorithms with respect to complexity • Let us consider two algorithms A1 and A2, with complexities: • CA1(n) = 0.5 n² • CA2(n) = 5 n • Which one has larger complexity?
CA1(n) = 0.5 n² > CA2(n) = 5 n for n > 10. When we look at the complexity of algorithms we think asymptotically, i.e., we compare two algorithms as the problem sizes tend to infinity!
Growth Rates • In general we only worry about growth rates because: • Our main objective is to analyze the cost performance of algorithms asymptotically. (Reasonable in part because computers get faster every year.) • Another obstacle to finding the exact cost of an algorithm is that some algorithms are quite complicated to analyze. • When analyzing an algorithm we are not that interested in the exact time it takes to run; often we only want to compare two algorithms for the same problem. What makes one algorithm more desirable than another is its growth rate relative to the other algorithm's growth rate.
Growth Rates • Algorithm analysis is concerned with: • The type of function that describes the run time (we ignore constant factors since different machines have different speeds per cycle). • Large values of n.
Growth of functions • Important definition: for functions f and g from the set of integers to the set of real numbers, we say f(x) is O(g(x)) if there exist constants C and k such that for all n > k, |f(n)| ≤ C|g(n)|. We say "f(n) is big O of g(n)". Recipe for proving f(n) = O(g(n)): find constants C and k (called witnesses to the fact that f(x) is O(g(x))) so that the inequality holds. Note: once one pair of witnesses C and k is found, there are infinitely many pairs of witnesses. Sometimes it is also written f(x) = O(g(x)), even though this is not a real equality. This will be applied to running times, so you'll usually consider T(n) (≥ 0); since in general we perform this analysis for run time, the functions are positive.
[Figure: f(x) is O(g(x)); beyond x = k, |f(x)| stays below C|g(x)|.]
x² + 2x + 1 is O(x²): the witnesses C = 4, k = 1 work, since for x > 1, x² + 2x + 1 ≤ x² + 2x² + x² = 4x². The pair C = 3, k = 2 also works.
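Both witness pairs can be sanity-checked numerically over a range of x (a spot check, not a proof):

```python
def f(x):
    return x * x + 2 * x + 1

# witnesses C = 4, k = 1: f(x) <= 4*x^2 for all x > 1
assert all(f(x) <= 4 * x * x for x in range(2, 1000))
# witnesses C = 3, k = 2 also work
assert all(f(x) <= 3 * x * x for x in range(3, 1000))
```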
Note: • When f(x) is O(g(x)), and h(x) is a function that has larger absolute values than g(x) does for sufficiently large values of x, it follows that f(x) is O(h(x)). In other words, the function g(x) in the relationship can be replaced by a function with larger absolute values. This can be seen as follows: • |f(x)| ≤ C|g(x)| if x > k, • and if |h(x)| > |g(x)| for all x > k, then • |f(x)| ≤ C|h(x)| if x > k. • Therefore f(x) is O(h(x)).
Growth of functions (examples) • f(x) = O(g(x)) • iff • there exist C, k so that for all x > k, |f(x)| ≤ C|g(x)| 3n = O(15n), since for n > 0, 3n ≤ 1 · 15n. (There's the k: 0; there's the C: 1.)
The complexity of A2 is of lower order than that of A1: while A1 grows quadratically, O(n²), A2 grows only linearly, O(n).
Growth of functions (examples) • f(x) = O(g(x)) • iff • there exist C, k so that for all x > k, |f(x)| ≤ C|g(x)| x² = O(x³)? a) Yes, and I can prove it. b) Yes, but I can't prove it. c) No, x = 1/2 implies x² > x³. d) No, but I can't prove it. Answer: yes, since for x > 1, x² ≤ x³. Witnesses: C = 1, k = 1.
Growth of functions (examples) • f(x) = O(g(x)) • iff • there exist C, k so that for all x > k, |f(x)| ≤ C|g(x)| 1000x² is O(x²), since for x > 0, 1000x² ≤ 1000 · x². Witnesses: C = 1000, k = 0.
Growth of functions (examples) • f(x) = O(g(x)) • iff • there exist C, k so that for all x > k, |f(x)| ≤ C|g(x)| Prove that x² + 100x + 100 = O((1/100)x²). For x > 1, 100x ≤ 100x² and 100 ≤ 100x², so x² + 100x + 100 ≤ x² + 100x² + 100x² = 201x² = 20100 · (1/100)x². Witnesses: k = 1, C = 20100.
Growth of functions (examples) Prove that 5x + 100 = O(x/2). We need constants c and k with 5x + 100 ≤ c · x/2 for all x > k. Try c = 10: 5x + 100 ≤ 10 · x/2 = 5x would require 100 ≤ 0, so nothing works for k. Similar problem, different technique: try c = 11. Then 5x + 100 ≤ 11 · x/2 = 5x + x/2 holds iff 100 ≤ x/2, i.e., x ≥ 200. Witnesses: k = 200, c = 11.
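The two attempts can be checked numerically (again a spot check, not a proof):

```python
def f(x):
    return 5 * x + 100

# c = 10 never works: 5x + 100 <= 10*(x/2) = 5x would force 100 <= 0
assert not any(f(x) <= 10 * (x / 2) for x in range(1, 10000))
# c = 11, k = 200 works: 5x + 100 <= 5x + x/2 once x/2 >= 100
assert all(f(x) <= 11 * (x / 2) for x in range(201, 10000))
```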
Theorem 1: Let f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀, where a₀, a₁, …, aₙ are real numbers. Then f(x) is O(xⁿ).
Proof: • Use the triangle inequality, which states: |x + y| ≤ |x| + |y|. (Where did we use this?)
Estimating Functions • Example 1: estimate the sum of the first n positive integers.
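One standard way to do this estimate: each of the n terms is at most n, so

```latex
1 + 2 + \cdots + n \;\le\; \underbrace{n + n + \cdots + n}_{n \text{ terms}} \;=\; n^2,
\qquad\text{hence}\qquad 1 + 2 + \cdots + n = O(n^2).
```

(The exact value n(n + 1)/2 is also O(n²), with witnesses C = 1, k = 1.)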
Estimating Functions • Example 2: estimate f(n) = n! and log n!.
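The same term-by-term bound works here: each of the n factors of n! is at most n, so

```latex
n! = 1 \cdot 2 \cdots n \;\le\; \underbrace{n \cdot n \cdots n}_{n \text{ factors}} = n^n
\qquad\Rightarrow\qquad n! = O(n^n),
```

and taking logarithms, log n! ≤ log nⁿ = n log n, so log n! = O(n log n).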
Growth of functions • Guidelines: • In general, only the largest term in a sum matters: a₀xⁿ + a₁xⁿ⁻¹ + … + aₙ₋₁x + aₙ = O(xⁿ). • n dominates lg n: n⁵ lg n = O(n⁶). • List of common functions in increasing O() order: 1 (constant time), n (linear time), n lg n, n² (quadratic time), n³, …, 2ⁿ (exponential time), n!
Combination of growth of functions • Theorem: if f1(x) = O(g1(x)) and f2(x) = O(g2(x)), then f1(x) + f2(x) is O(max{|g1(x)|, |g2(x)|}). Proof: Let h(x) = max{|g1(x)|, |g2(x)|}. We need to find constants c and k so that for all x > k, |f1(x) + f2(x)| ≤ c|h(x)|. We know |f1(x)| ≤ c1|g1(x)| for x > k1 and |f2(x)| ≤ c2|g2(x)| for x > k2; using the triangle inequality, |f1(x) + f2(x)| ≤ |f1(x)| + |f2(x)|. And |f1(x)| + |f2(x)| ≤ c1|g1(x)| + c2|g2(x)| ≤ c1|h(x)| + c2|h(x)| = (c1 + c2)|h(x)|. So the witnesses c = c1 + c2 and k = max{k1, k2} work.
Growth of functions: two more theorems • Theorem: if f1(x) = O(g1(x)) and f2(x) = O(g2(x)), then f1(x) · f2(x) = O(g1(x) · g2(x)). • Theorem: if f1(x) = O(g(x)) and f2(x) = O(g(x)), then (f1 + f2)(x) = O(g(x)).
Growth of functions: two definitions • If f(x) = O(g(x)) then we write g(x) = Ω(f(x)) ("g is big-Omega of f"; a lower bound). What does this mean? If there exist c, k so that for all x > k, f(x) ≤ c · g(x), then there exist k and c' = 1/c so that for all x > k, g(x) ≥ c' · f(x). • If f(x) = O(g(x)) and f(x) = Ω(g(x)), then f(x) = Θ(g(x)) ("f is big-Theta of g"). When we write f = O(g), it is like f ≤ g. When we write f = Ω(g), it is like f ≥ g. When we write f = Θ(g), it is like f = g.
Growth of functions: other estimates • For functions f and g, f = o(g) ("f is little-o of g") if for every c > 0 there exists a k so that for all n > k, f(n) ≤ c · g(n). What does this mean? No matter how tiny c is, cg eventually dominates f. Example: show that n² = o(n² log n). Proof foreshadowing: find a k (possibly in terms of c) that makes the inequality hold.
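A numeric sanity check (not a proof) of n² = o(n² log n): given any c > 0, the inequality n² ≤ c · n² log₂ n holds as soon as log₂ n ≥ 1/c, so k = ⌈2^(1/c)⌉ is a witness in terms of c. A Python sketch:

```python
import math

def witness_k(c):
    """A k such that n > k implies n*n <= c * n*n * log2(n): need log2(n) >= 1/c."""
    return math.ceil(2 ** (1 / c))

# for each tiny c, the inequality holds for every n past the computed k
for c in (1.0, 0.5, 0.1):
    k = witness_k(c)
    assert all(n * n <= c * (n * n) * math.log2(n) for n in range(k + 1, k + 200))
```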