
  1. Searching Chapter 2

  2. Outline: Linear List Searches: Sequential Search (the sentinel search, the probability search, the ordered search) and Binary Search; Hashed List Searches; Collision Resolution

  3. Linear List Searches • We study searches that work with arrays. Figure 2-1

  4. Linear List Searches • There are two basic searches for arrays: the sequential search, which can be used to locate an item in any array, and the binary search, which requires an ordered list.

  5. Linear List Searches: Sequential Search • The list is not ordered. We use this technique only for small arrays. We start searching at the beginning of the list and continue until we find the target: either we find it, or we reach the end of the list.

  6. Locating data in an unordered list. Figure 2-2

  7. Linear List Searches: Sequential Search Algorithm • RETURN: The algorithm must tell the calling algorithm two things: • Did it find the data? • If it did, what is the index (address)?

  8. Linear List Searches: Sequential Search Algorithm • The search algorithm requires five parameters: the list, an index to the last element in the list, the target, the address where the found element's index is to be stored, and the address where the found / not found boolean is to be stored.

  9. Sequential Search Algorithm
     algorithm SeqSearch (val list <array>, val last <index>, val target <keyType>, ref locn <index>)
     Locate the target in an unordered list of size elements.
     PRE    list must contain at least one element.
            last is index to last element in the list.
            target contains the data to be located.
            locn is address of index in calling algorithm.
     POST   if found – matching index stored in locn & found TRUE
            if not found – last stored in locn & found FALSE
     RETURN found <boolean>

  10. Sequential Search Algorithm, Big-O(n)
      looker = 1
      loop (looker < last AND target not equal list[looker])
          looker = looker + 1
      locn = looker
      if (target equal list[looker])
          found = true
      else
          found = false
      return found
      end SeqSearch
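A direct C rendering of this pseudocode might look like the sketch below (0-based indexing is used, so last is the index of the final element; the function and parameter names are illustrative, not taken from the original):

    #include <stdbool.h>

    /* Sequential search: scan an unordered array until the target is found
       or the last element has been examined. Returns true on success and
       stores the index of the match (or of the last element) in *locn. */
    bool seq_search(const int list[], int last, int target, int *locn)
    {
        int looker = 0;
        while (looker < last && list[looker] != target)
            looker++;                      /* keep walking until a match or the end */
        *locn = looker;
        return list[looker] == target;     /* Big-O(n) in the worst case */
    }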

  11. Variations on Sequential Search • There are three variations of the sequential search algorithm: the sentinel search, the probability search, and the ordered search.

  12. Sequential Search Algorithm: The Sentinel Search
      If we know the target will be found in the list, we can eliminate the test for the end of the list. We guarantee this by placing the target in a sentinel element just past the end of the data.
      algorithm SentinelSearch (val list <array>, val last <index>, val target <keyType>, ref locn <index>)
      Locate the target in an unordered list of size elements.
      PRE    list must contain an extra element at the end for the sentinel.
             last is index to last element in the list.
             target contains the data to be located.
             locn is address of index in calling algorithm.
      POST   if found – matching index stored in locn & found TRUE
             if not found – last stored in locn & found FALSE
      RETURN found <boolean>

  13. Sequential Search Algorithm: The Sentinel Search, Big-O(n)
      list[last + 1] = target
      looker = 1
      loop (target not equal list[looker])
          looker = looker + 1
      if (looker <= last)
          found = true
          locn = looker
      else
          found = false
          locn = last
      return found
      end SentinelSearch
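A possible C version of the sentinel search is sketched below; it assumes the array has one spare slot after the data for the sentinel (0-based indexing, illustrative names):

    #include <stdbool.h>

    /* Sentinel search: plant the target just past the end of the data so the
       loop needs no end-of-list test. list must have room for last + 2
       elements (indices 0..last hold data, index last + 1 is the sentinel). */
    bool sentinel_search(int list[], int last, int target, int *locn)
    {
        int looker = 0;

        list[last + 1] = target;           /* plant the sentinel */
        while (list[looker] != target)
            looker++;

        if (looker <= last) {              /* found within the real data */
            *locn = looker;
            return true;
        }
        *locn = last;                      /* only the sentinel matched */
        return false;
    }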

  14. Sequential Search Algorithm: The Probability Search
      algorithm ProbabilitySearch (val list <array>, val last <index>, val target <keyType>, ref locn <index>)
      Locate the target in a list ordered by the probability of each element being the target – most probable first, least probable last.
      PRE    list must contain at least one element.
             last is index to last element in the list.
             target contains the data to be located.
             locn is address of index in calling algorithm.
      POST   if found – matching index stored in locn, found TRUE, and element moved up in priority.
             if not found – last stored in locn & found FALSE
      RETURN found <boolean>

  15. Sequential Search Algorithm: The Probability Search, Big-O(n)
      looker = 1
      loop (looker < last AND target not equal list[looker])
          looker = looker + 1
      if (target equal list[looker])
          found = true
          if (looker > 1)
              temp = list[looker - 1]
              list[looker - 1] = list[looker]
              list[looker] = temp
              looker = looker - 1
      else
          found = false
      locn = looker
      return found
      end ProbabilitySearch
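In C the probability search might be sketched as follows (0-based indexing, illustrative names); note that the list is modified on every successful search:

    #include <stdbool.h>

    /* Probability search: like the sequential search, but every successful
       find swaps the element one position toward the front, so frequently
       requested keys migrate toward the start of the list over time. */
    bool probability_search(int list[], int last, int target, int *locn)
    {
        int looker = 0;

        while (looker < last && list[looker] != target)
            looker++;

        if (list[looker] == target) {
            if (looker > 0) {              /* promote the element one slot */
                int temp         = list[looker - 1];
                list[looker - 1] = list[looker];
                list[looker]     = temp;
                looker--;
            }
            *locn = looker;
            return true;
        }
        *locn = looker;
        return false;
    }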

  16. Sequential Search Algorithm: The Ordered List Search
      If the list is small, a sequential search of an ordered list can be more efficient than a binary search. We can stop the search loop as soon as the target becomes less than or equal to the element currently being tested.
      algorithm OrderedListSearch (val list <array>, val last <index>, val target <keyType>, ref locn <index>)
      Locate the target in a list ordered on target.
      PRE    list must contain at least one element.
             last is index to last element in the list.
             target contains the data to be located.
             locn is address of index in calling algorithm.
      POST   if found – matching index stored in locn & found TRUE
             if not found – last stored in locn & found FALSE
      RETURN found <boolean>

  17. Sequential Search Algorithm: The Ordered List Search, Big-O(n)
      if (target <= list[last])
          looker = 1
          loop (target > list[looker])
              looker = looker + 1
      else
          looker = last
      if (target equal list[looker])
          found = true
      else
          found = false
      locn = looker
      return found
      end OrderedListSearch
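A C sketch of the ordered list search (0-based indexing, illustrative names):

    #include <stdbool.h>

    /* Ordered list search: stop as soon as the current element is greater
       than or equal to the target, since in a sorted list the target cannot
       appear after that point. */
    bool ordered_list_search(const int list[], int last, int target, int *locn)
    {
        int looker;

        if (target <= list[last]) {
            looker = 0;
            while (target > list[looker])
                looker++;
        } else {
            looker = last;                 /* target is beyond the largest key */
        }

        *locn = looker;
        return list[looker] == target;
    }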

  18. Sequential Search • The sequential search algorithm is very slow for big lists: Big-O(n). If the list is ordered, we can use a more efficient algorithm called the binary search.

  19. Binary Search • Test the element at the middle of the array to determine whether the target lies in the first or the second half. Then test the element at the middle of that half, and repeat, halving the search area each time.

  20. mid = (first + last) / 2. If target > list[mid], then first = mid + 1; if target < list[mid], then last = mid - 1. Figure 2-4

  21. When the target is not in the list, first becomes larger than last. Figure 2-5

  22. Binary Search Algorithm
      algorithm BinarySearch (val list <array>, val last <index>, val target <keyType>, ref locn <index>)
      Search an ordered list using binary search.
      PRE    list is ordered; it must contain at least one element.
             last is index to the largest element in the list.
             target is the value of element being sought.
             locn is address of index in calling algorithm.
      POST   Found: locn assigned index to target element, found set true.
             Not found: locn = element below or above target, found set false.
      RETURN found <boolean>

  23. Binary Search Algorithm, Big-O(log2 n)
      first = 1
      loop (first <= last)
          mid = (first + last) / 2
          if (target > list[mid])
              first = mid + 1          (look in upper half)
          else if (target < list[mid])
              last = mid - 1           (look in lower half)
          else
              first = last + 1         (found equal: force exit)
      locn = mid
      if (target equal list[mid])
          found = true
      else
          found = false
      return found
      end BinarySearch
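A C sketch of the binary search (0-based indexing, illustrative names; mid is computed so that first + last cannot overflow):

    #include <stdbool.h>

    /* Binary search: repeatedly halve the interval [first, last] of a
       sorted array. Runs in O(log2 n) comparisons. */
    bool binary_search(const int list[], int last, int target, int *locn)
    {
        int first = 0;
        int mid   = 0;

        while (first <= last) {
            mid = first + (last - first) / 2;
            if (target > list[mid])
                first = mid + 1;           /* look in the upper half */
            else if (target < list[mid])
                last = mid - 1;            /* look in the lower half */
            else
                break;                     /* found: exit the loop */
        }
        *locn = mid;
        return list[mid] == target;
    }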

  24. Comparison of binary and sequential searches
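For example, in an ordered list of 1,000,000 elements a sequential search examines about 500,000 elements on average, while a binary search needs at most 20 comparisons (since 2^20 > 1,000,000).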

  25. Hashed List Searches • In an ideal search, we would know exactly where the data are and could go there directly. We use a hashing algorithm to transform the key into the index of the array element that contains the data we need to locate.

  26. Hashing is a key-to-address transformation. Figure 2-6

  27. We call the set of keys that hash to the same location in our list synonyms. • A collision is the event that occurs when a hashing algorithm produces an address for an insertion key and that address is already occupied. • Each calculation of an address and test for success is known as a probe. Figure 2-7

  28. Hashing Methods Figure 2-8

  29. Direct Hashing Method • The key is the address, without any algorithmic manipulation. The data structure must contain an element for every possible key. This guarantees that there are no synonyms, but it also means direct hashing has very limited use.

  30. Direct Hashing Method Direct hashing of employee numbers. Figure 2-9

  31. Subtraction Hashing Method • The keys are consecutive but do not start from one. Example: a company has 100 employees, with employee numbers running from 1001 to 1100. Using hash(x) = x - 1000: x = 1001 → 1 (Ali Esin), x = 1002 → 2 (Sema Metin), ..., x = 1100 → 100 (Filiz Yılmaz).

  32. Modulo-Division Hashing Method • The modulo-division method divides the key by the list size and uses the remainder plus one for the address: address = key mod listSize + 1. If the list size is a prime number, it produces fewer collisions than other list sizes.

  33. Modulo-Division Hashing Method • We have 300 employees, and the first prime greater than 300 is 307. 121267 / 307 = 395 with remainder 2, so hash(121267) = 2 + 1 = 3. Figure 2-10
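A minimal C sketch of this hash function (the name is illustrative; a 0-based table would simply drop the + 1):

    /* Modulo-division hashing: maps a key to an address in 1 .. list_size.
       With list_size = 307 this reproduces the example above:
       hash_mod(121267, 307) == 3. */
    int hash_mod(long key, int list_size)
    {
        return (int)(key % list_size) + 1;
    }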

  34. Digit Extraction Method • Selected digits are extracted from the key and used as the address. Example (using the first, third, and fourth digits): 379452 → 394, 121267 → 112, 378845 → 388, 526842 → 568.

  35. Midsquare Hashing Method • The key is squared and the address is selected from the middle of the squared number. The most obvious limitation of this method is the size of the key. Example: 9452 * 9452 = 89340304 → 3403 is the address. A variation squares only part of the key: 379452 → 379 * 379 = 143641 → 364.

  36. Folding Hashing Method Figure 2-11

  37. Pseudorandom Hashing Method • The key is used as the seed in a pseudorandom number generator, and the resulting random number is then scaled into a possible address range using modulo division. Use a function such as y = ((ax + b) mod m) + 1, where x is the key value, a is a coefficient, b is a constant, m is the number of elements in the list, and y is the address.

  38. Pseudorandom Hashing Method
      y = ((ax + b) mod m) + 1, with a = 17, b = 7, m = 307, and key x = 121267:
      y = ((17 * 121267 + 7) mod 307) + 1
        = ((2061539 + 7) mod 307) + 1
        = (2061546 mod 307) + 1
        = 41 + 1
        = 42
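The same calculation as a small C function (illustrative name; constants taken from the example above):

    /* Pseudorandom hashing: run the key through a linear congruential step
       and scale the result into 1 .. m with modulo division.
       With a = 17, b = 7, m = 307: hash_pseudorandom(121267) == 42. */
    int hash_pseudorandom(long key)
    {
        const long a = 17, b = 7, m = 307;
        return (int)((a * key + b) % m) + 1;
    }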

  39. Rotation Hashing Method • Rotation is often used in combination with folding and pseudorandom hashing. Figure 2-12

  40. Collision Resolution Methods • All of the collision resolution methods described below are independent of the hashing algorithm. Figure 2-13

  41. Collision Resolution Concepts: Load Factor • We define a full list as a list in which all elements except one contain data. Rule: a hashed list should not be allowed to become more than 75% full. Load factor = (number of filled elements in the list / total number of elements in the list) x 100, that is, α = (k / n) x 100, where k is the number of filled elements and n is the total number of elements.
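For example, a 307-element hashed list holding 230 records has α = (230 / 307) x 100 ≈ 75%, which is right at the recommended limit.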

  42. Collision Resolution Concepts: Clustering • Some hashing algorithms tend to cause data to group within the list. This is known as clustering, and it is created by collisions. If the list contains a high degree of clustering, the number of probes needed to locate an element grows and the processing efficiency of the list is reduced.

  43. Collision Resolution Concepts: Clustering • There are two clustering types. Primary clustering: data cluster around a home address in the list. Secondary clustering: the data are widely distributed across the whole list, so the list appears well distributed, but the time to locate a requested element can still become large.

  44. Collision Resolution Methods: Open Addressing • When a collision occurs, the home area addresses are searched for an open or unoccupied element where the new data can be placed. We have four different methods: linear probe, quadratic probe, double hashing, and key offset.

  45. Open Addressing: Linear Probe • When data cannot be stored in the home address, we resolve the collision by adding one to the current address. Advantages: simple implementation; data tend to remain near their home address. Disadvantages: it tends to produce primary clustering, and the search algorithm may become more complex, especially after data have been deleted.

  46. Open Addressing: Linear Probe • 15352 / 307 = 50 with remainder 2, so hash(15352) = 2 + 1 = 3. Address 3 is already occupied (by key 121267), so the new address = 3 + 1 = 4.
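A C sketch of a linear-probe insertion (0-based addresses, -1 marking an empty slot; the table layout and names are assumptions, and the caller must keep the load factor below the 75% rule so a free slot always exists):

    #define EMPTY (-1L)

    /* Linear probe insertion: start at the home address produced by
       modulo-division hashing and step forward one slot at a time,
       wrapping around the end of the table, until an empty element
       is found. */
    int insert_linear_probe(long table[], int table_size, long key)
    {
        int addr = (int)(key % table_size);     /* home address */

        while (table[addr] != EMPTY)            /* probe the next slot */
            addr = (addr + 1) % table_size;

        table[addr] = key;
        return addr;                            /* address actually used */
    }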

  47. Open Addressing: Linear Probe. Figure 2-14

  48. Open Addressing: Quadratic Probe • Primary clustering can be eliminated by adding a value other than one to the current address. The increment is the collision probe number squared: 1² for the first probe, 2² for the second probe, 3² for the third probe, and so on, until we either find an empty element or exhaust the possible elements. We use the modulo of the quadratic sum for the new address.

  49. Open Addressing: Quadratic Probe • The offsets from the home address are 1, 4, 9, 16, ..., so the increment added at each successive probe (1, 3, 5, 7, ...) increases by two for each probe.
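A C sketch of the quadratic probe (0-based addresses, -1 marking an empty slot, illustrative names); with a prime table size and a load factor below 50%, an empty slot is guaranteed to be reached:

    #define EMPTY (-1L)

    /* Quadratic probe insertion: on the i-th collision, move i*i slots past
       the home address (offsets 1, 4, 9, 16, ...), wrapping with modulo
       division, until an empty element is found. */
    int insert_quadratic_probe(long table[], int table_size, long key)
    {
        int home  = (int)(key % table_size);    /* home address */
        int probe = 0;
        int addr  = home;

        while (table[addr] != EMPTY) {
            probe++;
            addr = (home + probe * probe) % table_size;
        }
        table[addr] = key;
        return addr;
    }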

  50. Open Addressing, Double Hashing: Pseudorandom Collision Resolution • In this method, rather than using an arithmetic probe function, the address is rehashed: y = ((ax + c) mod listSize) + 1. With a = 3 and c = -1, rehashing address x = 2 gives y = ((3 * 2 + (-1)) mod 307) + 1 = 5 + 1 = 6. Figure 2-15
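A C sketch of pseudorandom collision resolution (0-based addresses, -1 marking an empty slot, illustrative names; a = 3 and c = -1 come from the example above). Because the rehash sequence can revisit addresses, the sketch gives up after table_size probes:

    #define EMPTY (-1L)

    /* Pseudorandom collision resolution (double hashing): when the home
       address is occupied, rehash the colliding address itself with the
       pseudorandom function y = (a*x + c) mod table_size instead of
       stepping through neighbouring slots. */
    int insert_rehash(long table[], int table_size, long key)
    {
        const long a = 3, c = -1;
        int  addr   = (int)(key % table_size);  /* home address */
        int  probes = 0;

        while (table[addr] != EMPTY && probes++ < table_size) {
            long y = (a * addr + c) % table_size;
            if (y < 0)                          /* keep the address non-negative */
                y += table_size;
            addr = (int)y;
        }
        if (table[addr] != EMPTY)
            return -1;                          /* no free slot reached */
        table[addr] = key;
        return addr;
    }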
