Overview (1)
• O(1) access to files
• A variation of the relative file
• The record number for a record is not arbitrary; rather, it is obtained by applying a hashing function H to the primary key: H(key)
• The record numbers generated should be uniformly and randomly distributed, such that 0 <= H(key) < N
Data Structures
Overview (2)
• A hash function is like a black box that produces an address every time you drop in a key
• All parts of the key should be used by the hashing function H, so that many records with similar keys do not all hash to the same location
• Given two random keys X and Y and N slots, the probability that H(X) = H(Y) is 1/N; in this case X and Y are called synonyms, and a collision occurs
11.1 Introduction
• Hash function h(k): transforms a key k into an address
• Hashing vs. other access methods
  • Sequential search: O(N)
  • Binary search: O(log2 N)
  • B (B+) tree index: O(logk N), where an index node holds k records
  • Hashing: O(1)
11.1 Introduction
A Simple Hashing Scheme (1)

Name     ASCII codes for first two letters   Product          Home address
BALL     66 65                               66 x 65 = 4,290  4,290
LOWELL   76 79                               76 x 79 = 6,004  6,004
TREE     84 82                               84 x 82 = 6,888  6,888
11.1 Introduction
A Simple Hashing Scheme (2)
(Figure: the key K = LOWELL is fed to the hash routine h(K), which produces address 4 -- LOWELL's home address in the file)
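The toy scheme above can be sketched in a few lines of Python (the function name is ours, not from the text):

```python
# Toy hash from the slide: multiply the ASCII codes of the
# first two letters of the key to form the home address.
def two_letter_hash(key: str) -> int:
    return ord(key[0]) * ord(key[1])

print(two_letter_hash("BALL"))    # 66 * 65 = 4290
print(two_letter_hash("LOWELL"))  # 76 * 79 = 6004
print(two_letter_hash("TREE"))    # 84 * 82 = 6888
```

Because only two characters contribute, keys like TREE and TRAP collide easily; later slides refine this.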
11.1 Introduction
Idea behind Hash-based Files
• All records with hash key h are stored in node h
• Primary blocks of data-level nodes are stored sequentially
• The contents of the root node can be expressed by a simple function: address of the data-level node for a record with primary key k = address of node 0 + H(k)
• In the literature on hash-based files, the primary blocks of data-level nodes are called buckets
11.1 Introduction
e.g. Hash-based File
(Figure: records such as 55, 20, 45, 70, 61, 40, 10, ... distributed among data-level nodes 0-6 beneath a root node)
11.1 Introduction
Hashing (1)
• Hashing functions: consider a primary key consisting of a string of 12 letters and a file with 100,000 slots. Since 26^12 >> 10^5, synonyms (collisions) are inevitable!
11.1 Introduction
Hashing (2)
• Possible means of hashing
• First, because 12 characters = 12 bytes = three 32-bit words, partition the key into 3 words and fold them together: either
  • (a) add modulo 2^32, or
  • (b) combine using exclusive OR, or
  • (c) invent your own method
• Next, let R = V mod N, where R is the record number, V is the value obtained above, and N is the number of slots, so that 0 <= R < N
• If N has many small factors, a poor distribution leading to many collisions can occur. Normally, N is chosen to be a prime number
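A minimal sketch of this fold-and-mod scheme, using option (b), exclusive OR. The function name and the slot count 99,991 (a prime near 100,000) are our choices for illustration:

```python
def fold_hash(key: str, n_slots: int) -> int:
    """Partition a 12-character key into three 32-bit words,
    combine them with exclusive OR, and take the result
    modulo the (prime) number of slots."""
    data = key.ljust(12)[:12].encode("ascii")   # blank-pad/truncate to 12 bytes
    v = 0
    for i in (0, 4, 8):                         # three 4-byte words
        v ^= int.from_bytes(data[i:i + 4], "big")
    return v % n_slots                          # 0 <= R < N

r = fold_hash("LOWELL", 99991)
print(0 <= r < 99991)  # True
```

Addition modulo 2^32 (option a) would work the same way, with `v = (v + w) % 2**32` in the loop.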
11.1 Introduction
Hashing (3)
• If M = number of records, N = number of available slots, and P(k) = probability of k records hashing to the same slot, then P(k) = f^k / (e^f k!), where f is the loading factor M/N
• As f -> 1, p(0) -> 1/e and p(1) -> 1/e. The other (1 - 1/e) of the records must hash into (1 - 2/e) of the slots, for an average of about 2.4 records per slot. So many synonyms!
11.1 Introduction
Collision
• Collision: a situation in which a record is hashed to an address that does not have sufficient room to store the record
• Perfect hashing: impossible!
• Different keys, same hash value (different records, same address)
• Solutions
  • Spread out the records
  • Use extra memory
  • Put more than one record at a single address
11.2 A Simple Hashing Algorithm
Step 1. Represent the key in numerical form
• If the key is a string: take the ASCII codes
• If the key is a number: nothing to be done
• e.g. LOWELL = 76 79 87 69 76 76 32 32 32 32 32 32
  (L O W E L L followed by 6 blanks)
11.2 A Simple Hashing Algorithm
Step 2. Fold and add
• Fold: 76 79 | 87 69 | 76 76 | 32 32 | 32 32 | 32 32
• Add the parts into one integer (with 15-bit integers, 32,767 is the limit):
  7679 + 8769 + 7676 + 3232 + 3232 + 3232 = 33,820 > 32,767 (overflow!)
• Largest addend: 9090 ('ZZ'), so the largest safe running sum is 32,767 - 9,090 = 23,677
• Ensure no intermediate sum exceeds this by taking each sum mod 19,937 (a prime below 23,677):
  (7679 + 8769) mod 19937 = 16448
  (16448 + 7676) mod 19937 = 4187
  (4187 + 3232) mod 19937 = 7419
  (7419 + 3232) mod 19937 = 10651
  (10651 + 3232) mod 19937 = 13883
11.2 A Simple Hashing Algorithm
Step 3. Divide by the size of the address space
• a = s mod n
  • a: home address
  • s: the sum produced in step 2
  • n: the number of addresses in the file
• e.g. a = 13883 mod 100 = 83 (s = 13883, n = 100)
• A prime number is usually used for the divisor because primes tend to distribute remainders much more uniformly than nonprimes
• So we choose a prime as close as possible to the desired size of the address space (e.g., for a file of 75 records, a good choice for n is 101; the file will then be 74.3 percent full)
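The three steps can be sketched end to end in Python. This assumes keys of blanks, digits, and uppercase letters (so each ASCII code is two digits, matching the slide's pairing); the function name is ours:

```python
LIMIT = 19937  # prime divisor from step 2: keeps every running sum within 15 bits

def simple_hash(key: str, n: int) -> int:
    padded = key.ljust(12)[:12]                  # step 1: blank-pad to 12 characters
    s = 0
    for i in range(0, 12, 2):                    # step 2: fold and add, two chars at a time
        pair = int(str(ord(padded[i])) + str(ord(padded[i + 1])))
        s = (s + pair) % LIMIT                   # prevent 15-bit overflow
    return s % n                                 # step 3: mod the address-space size

print(simple_hash("LOWELL", 100))  # sum is 13883, so 13883 mod 100 = 83
```

Running it on LOWELL reproduces the worked example: the folded sum 13,883 and home address 83.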
11.3 Hashing Functions and Record Distributions
Distributing records among addresses
(Figure: best, acceptable, and worst distributions of records A-G among addresses 1-10 -- best: no synonyms; acceptable: a few synonyms; worst: all records hash to the same address)
11.3 Hashing Functions and Record Distributions
Some other hashing methods
• Better-than-random methods
  • Examine keys for a pattern
  • Fold parts of the key
  • Divide the key by a number
• When the better-than-random methods do not work: randomize!
  • Square the key and take the middle
  • Radix transformation
11.4 How Much Extra Memory Should Be Used?
• Packing density = (# of records) / (# of spaces) = r / N
• The more densely the records are packed, the more likely a collision will occur
11.4 How Much Extra Memory Should Be Used?
Poisson Distribution
• p(x) = ((r/N)^x e^(-r/N)) / x!
• N = the number of available addresses
• r = the number of records to be stored
• x = the number of records assigned to a given address
• p(x): the probability that a given address will have x records assigned to it after the hashing function has been applied to all r records (i.e., the probability that x records collide at that address)
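A direct Python transcription of this formula (the function name is ours):

```python
from math import exp, factorial

def p(x: int, r: int, n: int) -> float:
    """Poisson probability that a given address receives exactly x of r records."""
    lam = r / n
    return lam ** x * exp(-lam) / factorial(x)

# At packing density 1.0 (r = N), about 36.8% of addresses receive no record:
print(round(p(0, 1000, 1000), 3))  # 0.368
```

This is the p(x) used in the collision predictions on the following slides.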
11.4 How Much Extra Memory Should Be Used?
Predicting Collisions for Different Packing Densities
• # of addresses with no record assigned: N x p(0)
• # of addresses with one record assigned: N x p(1)
• # of addresses with two or more records assigned: N x [p(2) + p(3) + p(4) + ...]
• # of overflow records: 1 x N p(2) + 2 x N p(3) + 3 x N p(4) + ...
• Percentage of overflow records: (1 x N p(2) + 2 x N p(3) + 3 x N p(4) + ...) / r x 100
11.4 How Much Extra Memory Should Be Used?
The larger the address space (N addresses, r records, packing density r/N), the fewer the overflows:

Packing density (%):       10   20   30    40    50    60    70    80    90    100
Synonyms as % of records:  4.8  9.4  13.6  17.6  21.4  24.8  28.1  31.2  34.1  36.8
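Assuming one record per address and Poisson-distributed hash values, the table can be reproduced in closed form: an address holding x records overflows x - 1 of them, and summing (x - 1) p(x) over x >= 2 gives lam - 1 + e^(-lam) expected overflows per address, where lam is the packing density. The function name is ours:

```python
from math import exp

def synonym_percentage(density: float) -> float:
    """Overflow (synonym) records as a percentage of all records,
    for one record per address and Poisson-distributed hashes."""
    lam = density  # r / N
    return 100 * (lam - 1 + exp(-lam)) / lam

for d in (0.2, 0.6, 1.0):
    print(f"{d:.0%}: {synonym_percentage(d):.1f}%")  # 9.4%, 24.8%, 36.8%
```

These match the table's 20%, 60%, and 100% rows.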
11.5 Collision Resolution by Progressive Overflow
Progressive Overflow (= Linear Probing)
• Inserting a new record:
  1. Store it at its home address if that slot is empty
  2. Otherwise, search the following addresses in sequence until an empty one is found
  3. If the end of the file is reached, wrap around to the beginning
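The three steps above can be sketched as follows (names and the 5-slot file are ours; `None` marks an empty slot):

```python
def insert(table: list, key: str, home: int) -> int:
    """Progressive overflow: probe forward from the home address,
    wrapping around at the end of the file, until an empty slot is found."""
    n = len(table)
    for i in range(n):
        slot = (home + i) % n
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("file is full")

table = [None] * 5
insert(table, "A", 3)  # home slot 3 is empty -> stored at 3
insert(table, "B", 3)  # 3 is busy -> stored at 4
insert(table, "C", 3)  # 3 and 4 are busy -> wraps around to slot 0
```

The third insertion demonstrates the wrap-around of step 3.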
11.5 Collision Resolution by Progressive Overflow
Progressive Overflow (cont'd)
(Figure: the key York hashes to home address 6, which is busy; the 2nd and 3rd tries, addresses 7 and 8, are also busy; the 4th try finds address 9 open, which becomes York's actual address)
11.5 Collision Resolution by Progressive Overflow
Progressive Overflow (cont'd)
(Figure: the key Blue hashes to address 99, the last address in the file, which is occupied by Jello; the search wraps around to address 0)
11.5 Collision Resolution by Progressive Overflow
Progressive Overflow (cont'd)
• Searching for a record with hash value k: starting from home address k, look at successive slots until the record is found or an open address is encountered
• Worst case: the record does not exist and the file is full, so the search examines every address
11.5 Collision Resolution by Progressive Overflow
Progressive Overflow (cont'd)
• Search length: the number of accesses required to retrieve a record (from secondary memory)

Record   Home address   Actual address   Search length
Adams    20             20               1
Bates    21             21               1
Cole     21             22               2
Dean     22             23               2
Evans    20             24               5

• Average search length = (total search length) / (total # of records) = (1 + 1 + 2 + 2 + 5) / 5 = 2.2
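The table's search lengths follow directly from the home and actual addresses (function name and the 100-address file size are ours):

```python
def search_length(home: int, actual: int, n: int) -> int:
    """Accesses needed to reach a record placed by progressive
    overflow in a file of n addresses (wrap-around included)."""
    return (actual - home) % n + 1

records = [("Adams", 20, 20), ("Bates", 21, 21), ("Cole", 21, 22),
           ("Dean", 22, 23), ("Evans", 20, 24)]
lengths = [search_length(h, a, 100) for _, h, a in records]
print(lengths, sum(lengths) / len(lengths))  # [1, 1, 2, 2, 5] 2.2
```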
11.5 Collision Resolution by Progressive Overflow
Progressive Overflow (cont'd)
(Figure: average search length rises steeply as packing density grows from 20% toward 100%)
• With a perfect hashing function, the average search length is 1
• An average search length of no greater than 2.0 is generally considered acceptable
11.6 Storing More Than One Record per Address: Buckets
• Bucket: a block of records sharing the same address

Key (home address): Green (30), Hall (30), Jenks (32), King (33), Land (33), Marks (33), Nutt (33)

Bucket address   Bucket contents
30               Green... Hall...
31               (empty)
32               Jenks...
33               King... Land... Marks...   (Nutt... is an overflow record)
11.6 Storing More Than One Record per Address: Buckets
Effects of Buckets on Performance
• N: # of addresses
• b: # of records that fit in a bucket
• bN: # of available locations for records
• Packing density = r / bN
• # of overflow records = N x [1 x p(b+1) + 2 x p(b+2) + 3 x p(b+3) + ...]
• For a given packing density, performance continues to improve as the bucket size gets larger
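The overflow formula can be evaluated numerically; here p(x) is computed incrementally to avoid huge factorials, and the example figures (60% packing density, bucket sizes 1 and 5) are our choice of illustration:

```python
from math import exp

def expected_overflow(r: int, n: int, b: int, terms: int = 60) -> float:
    """Expected overflow records for r records hashed into n bucket
    addresses of capacity b: N * [1*p(b+1) + 2*p(b+2) + ...]."""
    lam = r / n                # average records per bucket address
    px = exp(-lam)             # p(0)
    total = 0.0
    for x in range(1, b + terms):
        px *= lam / x          # p(x) from p(x-1)
        if x > b:
            total += (x - b) * px
    return n * total

# Same packing density (0.6), larger buckets -> far fewer overflow records:
print(100 * expected_overflow(60, 100, 1) / 60)    # b = 1: about 24.8% of records
print(100 * expected_overflow(300, 100, 5) / 300)  # b = 5: about 4.5% of records
```

Note that with buckets, lam = r/N (records per address), not the packing density r/bN.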
11.6 Storing More Than One Record per Address: Buckets
Bucket Implementation
• Each bucket begins with a record counter (counter <= bucket size):
  0   / / / / /                                    an empty bucket
  2   JONES  ARNSWORTH  / / /                      two entries
  5   JONES  ARNSWORTH  STOCKTON  BRICE  TROOP     a full bucket
11.6 Storing More Than One Record per Address: Buckets
Bucket Implementation (cont'd)
• Initializing and loading
  • Create empty space
  • Use the hash value to find the bucket in which to store each record
  • If the home bucket is full, continue to look at successive buckets
• Problems arise when
  • No empty space exists
  • Duplicate keys occur
11.7 Making Deletions
• The slot freed by a deletion can disrupt later searches
• Use tombstones to mark freed slots, and reuse those slots on later insertions

Record (home address): Adams (5), Jones (6), Morris (6), Smith (5)

Before deletion:     After deleting Morris:
5  Adams...          5  Adams...
6  Jones...          6  Jones...
7  Morris...         7  ######   (a tombstone for Morris)
8  Smith...          8  Smith...
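A sketch of why the tombstone matters, using the example above (names and the 10-slot file are ours; `None` means "never used"):

```python
TOMBSTONE = "######"   # marks "deleted", as opposed to "never used" (None)

def search(table: list, key: str, home: int):
    """Progressive-overflow search: a tombstone keeps the probe moving,
    while a truly empty slot (None) ends the search."""
    n = len(table)
    for i in range(n):
        slot = (home + i) % n
        if table[slot] is None:
            return None            # open slot: the key cannot be further on
        if table[slot] == key:
            return slot
    return None

table = [None] * 10
table[5:9] = ["Adams", "Jones", "Morris", "Smith"]  # home addresses 5, 6, 6, 5
table[7] = TOMBSTONE               # delete Morris, leaving a tombstone
print(search(table, "Smith", 5))   # 8: the probe passes the tombstone
```

Had slot 7 simply been set back to `None`, the search for Smith would stop at slot 7 and wrongly report the record missing.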
11.8 Other Collision Resolution Techniques
• Double hashing: avoid clustering by using a second hash function to place overflow records
• Chained progressive overflow: each home address contains a pointer to the next record with the same home address
• Chaining with a separate overflow area: move all overflow records to a separate overflow area
• Scatter tables: the hash file contains only pointers to records (like indexing)
11.8 Other Collision Resolution Techniques
Overflow File
• When building the file, if a collision occurs, place the new synonym into a separate area of the file called the overflow section
• This method is not recommended:
  • With a high load factor, either the overflow section itself overflows, or it is organized sequentially and performance suffers
  • With a low load factor, much space is wasted
11.8 Other Collision Resolution Techniques
Linear Probing (1)
• When a synonym is identified, search forward from the address given by the hash function (the natural address) until an empty slot is located, and store the record there
• This is an example of open addressing: examining a predictable sequence of slots for an empty one
11.8 Other Collision Resolution Techniques
Linear Probing (2)
(Figure: the insertion sequence A S E A R C H I N G E X A M P L E, with hash values 1 0 5 1 18 3 8 9 14 7 5 5 1 13 16 12 5, inserted into a 19-slot memory space (slots 0-18) by linear probing; each synonym goes into the next empty slot after its natural address)
11.8 Other Collision Resolution Techniques
Rehashing (1)
• Another form of open addressing
• In linear probing, when a synonym occurs, the address is incremented by 1 to examine the next location
• In rehashing, a second hash function gives the displacement: D = (FOLD(key) mod P) + 1, where P < N is another prime number
• This method has the advantage of avoiding congestion, because synonyms under the first hash function are likely to use different displacements D and thus examine different sequences of slots
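A sketch of the resulting probe sequence, taking the already-folded key value as an integer (the name `probe_sequence` and the example values n = 19, P = 3 are ours; with n prime, every slot is eventually visited):

```python
def probe_sequence(key_value: int, n: int, p: int):
    """Rehashing (double hashing): probe with a key-dependent
    displacement D = (key mod P) + 1, P a prime smaller than n."""
    home = key_value % n
    d = key_value % p + 1          # the displacement is never zero
    for i in range(n):
        yield (home + i * d) % n

slots = list(probe_sequence(100, 19, 3))
print(slots[:4])  # [5, 7, 9, 11]: home 100 % 19 = 5, displacement 100 % 3 + 1 = 2
```

Two synonyms with the same home address but different displacements follow different slot sequences, which is what breaks up the clusters linear probing creates.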
11.8 Other Collision Resolution Techniques
Rehashing, where P = 3 (2)
(Figure: the same insertion sequence A S E A R C H I N G E X A M P L E, with the same hash values, inserted into the 19-slot memory space by rehashing with displacement D = (key mod 3) + 1)
11.8 Other Collision Resolution Techniques
Chaining without Replacement (1)
• Uses pointers to build linked lists within the file; each linked list contains one set of synonyms, and the head of the list is the record stored at the natural address of those synonyms
• When a record is added whose natural address is occupied, it is appended to the list whose head is at that natural address
• Linear probing is usually used to find a slot for the new synonym, although rehashing could be used as well
11.8 Other Collision Resolution Techniques
Chaining without Replacement (2)
• A problem is that linked lists can coalesce. Suppose H(R1) = H(R2) = H(R4) = i and H(R3) = H(R5) = i+1, and records are added in the order R1, R2, R3, R4, R5. Then the lists for natural addresses i and i+1 coalesce. Periodic reorganization shortens such lists.
• Let FWD and BWD be forward and backward pointers along these chains (doubly linked)
11.8 Other Collision Resolution Techniques
Chaining without Replacement (3)
(Figure: address i heads the chain R1 -> R2 -> R4, which has coalesced with the chain R3 -> R5 headed at address i+1)
11.8 Other Collision Resolution Techniques
Chaining with Replacement (4)
• Eliminates the deletion problem that made abandonment markers necessary
• Further reduces search lengths
• When inserting a new record, if the slot at its natural address is occupied by a record for which that is not the natural address, the occupying record is relocated so the new record can be placed at its natural address
• Synonym chains can never coalesce, so a record can be deleted even if it is the head of a chain, simply by moving the second record on the chain to its natural address (ABANDON markers are thus no longer necessary)
11.8 Other Collision Resolution Techniques
Chaining with Replacement (5)
(Figure: after R1 and R2 are added, address i heads the chain R1 -> R2; when R3 hashes to i+1, the record occupying that slot is relocated so R3 can take its natural address; finally there are two separate chains, R1 - R2 - R4 and R3 - R5)
Patterns of Record Access
• Pareto Principle (80/20 rule of thumb): 80% of the accesses are performed on 20% of the records!
• The concept of "the vital few and the trivial many"
  • 20% of the fishermen catch 80% of the fish
  • 20% of the burglars steal 80% of the loot
• If we know the patterns of record access ahead of time, we can do many intelligent and effective things
  • Sometimes we can know or guess the access patterns
  • Very useful hints for file systems or DBMSs
  • Intelligent placement of records: fast accesses, fewer collisions