
Discrete Structure


Presentation Transcript


  1. Discrete Structure Li Tak Sing(李德成)

  2. Section 4.3.3 Well-founded orders • Well-founded • A poset is said to be well-founded if every descending chain of elements is finite. In this case, the partial order is called a well-founded order.

  3. Examples of well-founded posets • The < relation on N. • 100 > 90 > 78 > ...: any such descending chain must end after finitely many terms. • ⟨{a,b}*, longer⟩, the strings over {a,b} ordered by length. • abca > abc > aa

  4. Examples of posets that are not well-founded • The < relation on Z. • 30 > 28 > 21 > 2 > −4 > ... • The < relation on R+. • 30.2 > 29.2 > 20.4 > 15.8 > ... > 0.004 > 0.00058 > ...

  5. The minimal element property • If A is a well-founded set, then every nonempty subset of A has a minimal element. Conversely, if every nonempty subset of A has a minimal element, then A is well-founded.

  6. Lexicographic ordering of tuples

  7. Example

  8. Example

  9. Section 4.4 Inductive Proof • A basis for mathematical induction • Let m ∈ Z and W = {m, m+1, m+2, ...}. Let S be a nonempty subset of W such that the following two conditions hold: • m ∈ S. • Whenever k ∈ S, then k+1 ∈ S. • Then S = W.

  10. Proof • Assume S ≠ W. Then W−S is nonempty, so it has a least element x, because every nonempty subset of W has a least element. • x ∈ W−S. Now m ∈ S, so x ≠ m. Since m is the least element of W, x > m, i.e., x−1 ≥ m, so x−1 ∈ W. Since x is the least element of W−S, x−1 cannot be in W−S, so it must be in S. • So x−1 ∈ S. By the second condition, x ∈ S. This is a contradiction, since x cannot be in both S and W−S. • So S = W.

  11. The principle of mathematical induction • Let m ∈ Z. P(n) is true for all n ≥ m if • P(m) is true, and • P(k) is true ⇒ P(k+1) is true for all k ≥ m.

  12. Examples • Prove the following: • 2 + 4 + ... + 2n = n(n+1). • a1 + a2 + ... + an = n(a1 + an)/2, where a1, a2, ... is an arithmetic progression.
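Before proving these by induction, both closed forms can be checked numerically. A minimal sketch (my own check, not part of the slides; the progression a1 = 5, d = 3 is an arbitrary choice for illustration):

```python
def even_sum(n):
    """2 + 4 + ... + 2n, computed term by term."""
    return sum(2 * i for i in range(1, n + 1))

def arithmetic_sum(a1, d, n):
    """a1 + a2 + ... + an for the progression a_i = a1 + (i-1)*d."""
    return sum(a1 + (i - 1) * d for i in range(1, n + 1))

# Closed forms claimed on the slide:
#   2 + 4 + ... + 2n  = n(n+1)
#   a1 + ... + an     = n(a1 + an)/2
for n in range(1, 30):
    assert even_sum(n) == n * (n + 1)
    an = 5 + (n - 1) * 3              # last term of the sample progression
    assert arithmetic_sum(5, 3, n) == n * (5 + an) // 2
```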

  13. Chapter 5 Analysis Techniques • We need to determine whether an algorithm is efficient or not. • We want the least time and the least space. Which one is more important? Generally, time is the more important factor, because you can buy space but not time.

  14. Worst-case running time • The running time is measured by counting the number of operations, instead of timing the actual execution of a program. • The number of operations performed by an algorithm usually depends on the size of the input. • Some algorithms also depend on the structure of the input. For example, some sorting algorithms perform poorly when the original list is already sorted.

  15. Worst-case running time • An input of size n is a worst-case input if, when compared to all other inputs of size n, it causes an algorithm A to execute the largest number of operations. • For any input I we'll denote its size by size(I), and we'll let time(I) denote the number of operations executed by A on I. Then the worst-case function for A is defined as follows: W_A(n) = max{time(I) | I is an input and size(I) = n}
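The definition can be made concrete by brute force on a small example. A sketch, assuming linear search as the algorithm A (my choice for illustration, not an algorithm from the slides), with time(I) counting element comparisons:

```python
from itertools import permutations

def linear_search_time(xs, key):
    """time(I): comparisons linear search makes on input I = (xs, key)."""
    t = 0
    for x in xs:
        t += 1
        if x == key:
            break
    return t

def worst_case(n):
    """W_A(n) = max{ time(I) : size(I) = n }, computed by enumerating
    all permutations of 0..n-1 and all keys (key == n is never present)."""
    return max(
        linear_search_time(list(perm), key)
        for perm in permutations(range(n))
        for key in range(n + 1)
    )

# For linear search, a worst-case input of size n forces n comparisons.
assert worst_case(4) == 4
```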

  16. Optimal in the worst case • An algorithm A is optimal in the worst case for problem P if, for any algorithm B that exists, or ever will exist, the following relationship holds: W_A(n) ≤ W_B(n) for all n > 0.

  17. Steps to find an algorithm that is optimal • (Find an algorithm) Find or design an algorithm A to solve P. Then do an analysis of A to find the worst-case function W_A. • (Find a lower bound) Find a function F such that F(n) ≤ W_B(n) for all n > 0 and for all algorithms B that solve P. • Compare F and W_A. If F = W_A, then A is optimal in the worst case.

  18. Steps to find an algorithm that is optimal • Suppose we know that F ≠ W_A. This means that F(n) < W_A(n) for some n. In this case there are two possible courses of action to consider: • Try to build a new algorithm C such that W_C(n) ≤ W_A(n) for all n > 0. • Try to find a new function G such that F(n) ≤ G(n) ≤ W_B(n) for all n > 0 and for all algorithms B that solve P.

  19. Matrix Multiplication • If two n×n matrices are multiplied in the straightforward way, n^3 multiplications and n^2(n−1) additions are needed. However, there are special algorithms that can reduce the number of multiplications.

  20. • In the last method, only 7 multiplications are needed. • Multiplication of larger matrices is broken down into multiplying many 2×2 matrices, so the number of multiplication operations becomes less than n^3. • The known lower bound for the number of multiplications needed to multiply two n×n matrices is n^2. • Strassen showed how to multiply two matrices with n^2.81 multiplications. The exponent 2.81 comes from log_2 7.
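The 2×2 base case with 7 multiplications can be written out explicitly. A sketch using Strassen's standard seven products (the identities are Strassen's; the function name and test matrices are mine):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using only 7 scalar multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

A, B = ((1, 2), (3, 4)), ((5, 6), (7, 8))
assert strassen_2x2(A, B) == ((19, 22), (43, 50))   # matches ordinary A*B
```

Applying this recursively to n×n matrices split into 2×2 blocks gives the n^2.81 = n^(log_2 7) multiplication count mentioned above.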

  21. Finding the minimum • Problem: to find the minimum number in an unsorted list of n numbers. • We'll count the number of comparison operations that an algorithm makes between elements of the list. • To find the minimum number in a list of n numbers, the minimum number must be compared with the other n−1 numbers. Therefore, n−1 is a lower bound on the number of comparisons needed to find the minimum number in a list of n numbers.

  22. Finding the minimum • The following algorithm is optimal because the comparison operation ≤ is executed exactly n−1 times:
m := a[1];
for i := 2 to n do
  m := if m ≤ a[i] then m else a[i]
od
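The same algorithm, instrumented to count comparisons, confirms it meets the n−1 lower bound exactly. A sketch in Python (the comparison counter is my addition for illustration):

```python
def find_min(a):
    """Return the minimum of a nonempty list and the comparisons used."""
    m = a[0]
    comparisons = 0
    for x in a[1:]:
        comparisons += 1            # one comparison per remaining element
        if x < m:
            m = x
    return m, comparisons

# n = 5 elements: the minimum is found with exactly n-1 = 4 comparisons.
assert find_min([7, 3, 9, 1, 4]) == (1, 4)
```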

  23. Decision trees • A decision tree for an algorithm is a tree whose nodes represent decision points in the algorithm and whose leaves represent possible outcomes. • Decision trees can be useful when trying to construct an algorithm or trying to find properties of an algorithm.

  24. Decision trees • If an algorithm makes decisions based on the comparison of two objects, then it can be represented by a binary decision tree. Each nonleaf node in the tree represents a pair of objects to be compared by the algorithm, and each branch from that node represents a path taken by the algorithm based on the comparison. Each leaf can represent an outcome of the algorithm.

  25. Lower bounds for decision tree algorithms • Suppose that a problem has n possible outcomes and it can be solved by a binary decision tree algorithm. What is the best binary decision tree algorithm? We may not know the answer, but we can find a lower bound for the depth of any binary decision tree that solves the problem. Since the problem has n possible outcomes, any binary decision tree algorithm to solve it must have at least n leaves, one for each of the n possible outcomes. Recall that the number of leaves in a binary tree of depth d is at most 2^d.

  26. Lower bounds for decision tree algorithms • So if d is the depth of a binary decision tree that solves a problem with n possible outcomes, then we must have n ≤ 2^d. We can solve this inequality for d by taking log_2 of both sides to obtain log_2 n ≤ d. Since d is a natural number, it follows that ⌈log_2 n⌉ ≤ d.

  27. Ternary decision trees • We can do the same analysis for ternary decision trees. The number of leaves in a ternary tree of depth d is at most 3^d. If d is the depth of a ternary decision tree that solves a problem with n possible outcomes, then we must have n ≤ 3^d, or: ⌈log_3 n⌉ ≤ d.
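Both bounds reduce to finding the smallest d with branching^d ≥ n. A small sketch (my own helper, not from the slides; integer exponentiation avoids floating-point log rounding):

```python
def min_depth(n, branching=2):
    """Smallest d with branching**d >= n: a lower bound on the depth of
    any decision tree with that branching factor and n possible outcomes."""
    d = 0
    while branching ** d < n:
        d += 1
    return d

assert min_depth(15) == 4                  # ceil(log_2 15) = 4
assert min_depth(31, branching=3) == 4     # ceil(log_3 31) = 4
```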

  28. Binary Search • Suppose we search a sorted list in a binary fashion. That is, we check the middle element of the list to see whether it's the key we are looking for. If not, then we perform the same operation on either the left half or the right half of the list, depending on the value of the key. This algorithm has a nice representation as a decision tree. For example, suppose we have the following sorted list of 15 numbers: x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15

  29. Decision tree for binary search

  30. Binary search • Suppose we're given a number key K, and we must find whether it is in the list. The decision tree for a binary search of the list has the number x8 at its root. This represents the comparison of K with x8. If K=x8, then we are successful in one comparison. If K<x8, then we go to the left child of x8; otherwise we go to the right child of x8. The result is a ternary decision tree in which the leaves are labeled with either S, for successful search, or U, for unsuccessful search.
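The ternary-comparison search described above can be sketched directly, with a counter to check the worst case against the depth of the tree (the concrete list of 15 numbers is my stand-in for x1..x15):

```python
def binary_search(xs, key):
    """Three-way binary search on a sorted list.
    Returns (index or -1, number of three-way comparisons made)."""
    lo, hi = 0, len(xs) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1            # one three-way comparison per level
        if key == xs[mid]:
            return mid, comparisons
        elif key < xs[mid]:
            hi = mid - 1
        else:
            lo = mid + 1
    return -1, comparisons

xs = list(range(1, 30, 2))          # 15 sorted numbers, like x1..x15
worst = max(binary_search(xs, k)[1] for k in range(0, 31))
assert worst == 4                   # matches the depth-4 decision tree
```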

  31. Binary search • Since the depth of the tree is 4, there will be four comparisons in the worst case to find whether K is in the list. Is this an optimal algorithm? To see that the answer is yes, observe that there are 31 possible outcomes for the given problem: 15 leaves labeled with S to represent successful searches, and 16 leaves labeled with U to represent the gaps where K < x1, xi < K < xi+1 for 1 ≤ i < 15, and x15 < K. Therefore, a worst-case lower bound for the number of comparisons is ⌈log_3 31⌉ = 4, so the binary search algorithm is optimal.

  32. Examples • A solution to a problem has x possible outcomes. An algorithm to solve the problem has a binary decision tree of depth 5. • Draw a picture of the decision tree for an optimal algorithm to find the maximum number in the list x1, x2, x3 and x4.
