
Overview of Storage and Indexing



Presentation Transcript


  1. Overview of Storage and Indexing Instructor: Marina Gavrilova If you don’t find it in the index, look very carefully through the entire catalogue. -- Sears, Roebuck, and Co., Consumer’s Guide, 1897

  2. Outline of Presentation • Data on external storage – review • Data structures for file organization • Heap and sorted files • Indexes • B and B+ tree indexes • Hash-based indexes • Index classification • Comparing file organizations • Assumptions • Cost of operations • Summary • Review Questions

  3. Goal In this lecture we will study different types of data structures for file organization and discuss the differences between them based on operations and a cost-model analysis.

  4. World's Largest Databases • 10. World Data Centre for Climate • This database is controlled and maintained by the German Climate Computing Centre as well as the Max Planck Institute for Meteorology. It can be examined to find the patterns that led to severe changes in climatic conditions. • 9. National Energy Research Scientific Computing Center • The National Energy Research Scientific Computing Center is the second largest database in the world. It is controlled by the Lawrence Berkeley National Laboratory in the United States of America. It holds research information related to atomic energy, high energy physics, theories related to various topics, etc.

  5. 8. AT&T • AT&T is similar to Sprint and is the oldest company that deals with telecommunications. It holds over 310 terabytes of information and almost 2 trillion rows, making the call records extremely extensive. In addition, one can also find old records. Therefore, if your grandfather ever made a call using AT&T, the company will probably still have the records. • 7. Google • Google has never made the true size of its database public. However, the type and amount of information found on the website is overwhelming. According to statistics, over 90 million searches are carried out every day. Google has been called the king of internet databases. • 6. Sprint • The telecommunication company has over 50 million subscribers. In the past, it offered long distance packages as well. The records are highly detailed: at least 3 trillion database rows, over 350 call records on a daily basis, and 70,000 insertions every second. Sprint is quite notorious and expands quite rapidly.

  6. 5. ChoicePoint • ChoicePoint is basically a phone book which contains information about the population residing in the United States of America. It holds criminal histories as well as driving records. It has been said that, printed out, the database would reach the moon and back at least 75 times. ChoicePoint has helped a number of authorities solve difficult and complicated cases in the past. • 4. YouTube • YouTube has been in operation for two years and holds a massive library of videos. A loyal user base follows the website and, according to records, over 100 million clips are watched on a daily basis. The size of this database seems to double every 5 months. Therefore, it goes without saying that the overall statistics are staggering. • 3. Amazon • This website holds over 250,000 textbooks, and users can comment and interact with other users, which makes Amazon the largest community on the web. Amazon has 55 million customers and above 40 terabytes of data.

  7. 2. CIA • The CIA database collects information on everything ranging from places to things to people. Even though the accurate size of this database is unknown, it holds both private and public information. 100 articles are added to the library every month, including population statistics, maps as well as military capabilities. • 1. Library of Congress • Even after the onset of the digital age, the Library of Congress is still among the largest databases of the world. It holds over 125 million items, which consist of colonial newspapers, cook books and government proceedings. The library expands every day. http://www.worldsbiggests.com/2010/02/top-10-largest-databases-in-world.html

  8. Data Structures for File Organizations • Many data representations exist, each ideal for some situations and not so good in others. • 2.1 Indexes: Data structures to organize records via trees or hashing. • Like sorted files, they speed up searches for a subset of records, based on values in certain (“search key”) fields. • Updates are much faster than in sorted files. (Figure: k-d trees)

  9. Data Structures for File Organizations • 2.2 Sorted Files: Best if records must be retrieved in some order, or only a `range’ of records is needed. • 2.3 Heap (random order) files: Suitable when typical access is a file scan retrieving all records.

  10. 2.1 Indexes • An index on a file speeds up selections on the search key fields for the index. • Any subset of the fields of a relation can be the search key for an index on the relation. • Search key is not the same as unique key (minimal set of fields that uniquely identify a record in a relation). • An index contains a collection of data entries, and supports efficient retrieval of all data entries k* with a given key value k. • Given data entry k*, we can find the record with key k in at most one disk I/O.

  11. B-Trees • A B-Tree is a tree in which each node may have multiple children and multiple keys. • It is specially designed to allow efficient searching for keys. • As in a binary search tree, each key has the property that all keys to the left are lower and all keys to the right are greater.

  12. B-Trees • From node 10 in the tree, all keys to the left are less than 10 and all keys to the right are greater than 10 and less than 20. • The key in a given node represents an upper or lower bound on the sets of keys below it in the tree. (Figure: example B-Tree)

  13. B-Trees • A tree may also have nodes with several ordered keys. For example, if each node can have three keys, then it will also have four references (pointers to children). • In this node (:20:40:60:) the reference to the left of 20 refers to nodes with keys less than 20, the reference between 20 & 40 refers to nodes with keys from 21 to 39, the reference between keys 40 & 60 to nodes with keys between 41 and 59, and finally the reference to the right of 60 refers to nodes with keys greater than 60. (Figure: node of a B-Tree)
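To make the node layout concrete, here is a minimal Python sketch of a multi-key node and the key-directed descent just described. The names (BTreeNode, keys, children) are illustrative, not from the slides.

```python
class BTreeNode:
    """A node holding several sorted keys; a node with k keys has k+1 children."""
    def __init__(self, keys=None, children=None):
        self.keys = keys or []            # e.g. [20, 40, 60]
        self.children = children or []    # children[i] covers keys below keys[i],
                                          # children[i+1] covers keys between keys[i] and keys[i+1]

    def is_leaf(self):
        return not self.children


def btree_search(node, key):
    """Descend the tree, at each node choosing the child whose key range covers `key`."""
    if node is None:
        return None
    i = 0
    while i < len(node.keys) and key > node.keys[i]:
        i += 1                            # skip keys smaller than the search key
    if i < len(node.keys) and node.keys[i] == key:
        return node                       # key found in this node
    if node.is_leaf():
        return None                       # nowhere left to look
    return btree_search(node.children[i], key)
```

For the node (:20:40:60:), a call such as `btree_search(root, 25)` follows the reference between 20 and 40, exactly as in the description above.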

  14. B-Trees • Organizational basis of the B-Tree • For m references there must be (m-1) keys in a given node. • Typically a B-tree is specified in terms of the maximum number of successors (equivalently, the number of references) that a single node may have; this is called the order of the tree. • However, sometimes order is defined as the number of keys (but not in this course).

  15. B-Trees • Constraints • For an order-m B-tree no node has more than m subtrees. • Every node except the root and the leaves must have at least ⌈m/2⌉ subtrees. • A leaf node must have at least ⌈m/2⌉ - 1 keys. • The root has 0 or >= 2 subtrees. • Terminal or leaf nodes are all at the same depth. • Within a node, the keys are in ascending order.
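As a hedged sketch (continuing the BTreeNode class above), these constraints could be checked as follows; the helper names are my own, and ceil(m / 2) corresponds to the ⌈m/2⌉ bounds in the list.

```python
import math

def is_valid_btree(root, m):
    """Check the order-m constraints listed above; returns True if they all hold."""
    leaf_depths = set()

    def check(node, is_root, depth):
        if node.keys != sorted(node.keys):
            return False                               # keys ascending within a node
        if node.is_leaf():
            leaf_depths.add(depth)                     # record leaf depth
            return is_root or len(node.keys) >= math.ceil(m / 2) - 1
        if len(node.children) > m:
            return False                               # no node has more than m subtrees
        if len(node.children) != len(node.keys) + 1:
            return False                               # m references imply m-1 keys
        if is_root and len(node.children) < 2:
            return False                               # root has 0 or >= 2 subtrees
        if not is_root and len(node.children) < math.ceil(m / 2):
            return False                               # internal nodes: at least ceil(m/2) subtrees
        return all(check(c, False, depth + 1) for c in node.children)

    return check(root, True, 0) and len(leaf_depths) <= 1   # all leaves at the same depth
```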

  16. B-Trees • Construction of a B-Tree • The B-tree is built differently than a binary search tree. • The binary search tree is constructed starting at the root and working toward the leaves. • A B-tree is constructed from the leaves and as it grows the tree is pushed upward.

  17. B-Trees • Construction of a B-Tree • Suppose the tree is of order 4, so each node can hold a maximum of 3 keys. • The keys are always kept in ascending order within a node. • Because the tree is of order 4, every node except the root and leaves must have at least 2 subtrees (or one key, which has a pointer to a node containing keys less than the key in the parent node and a pointer to a node containing key(s) greater than the key in the parent node). • This essentially defines a minimum number of keys which must exist within any given node.

  18. B-Trees • Construction of a B-Tree (continued) • If random data are used for the insertions into the B-tree, it generally will be within a level of minimum height. However, as the data become ordered the B-tree degenerates. • The worst case is sorted data, in which case an order-4 B-tree becomes an order-2 tree, i.e. a binary search tree. This obviously results in much wasted space and a substantial loss of search efficiency.

  19. B-Tree insertion example

  20. B-Tree insertion • Steps for Insertion • If, after inserting the key into the appropriate sorted position, no node is over its key capacity, the process is finished. • If some node has more than the maximum number of keys, it is split into two nodes, each with (at least) the minimum number of keys, and the middle key is promoted to the parent. This process continues recursively in the parent node.
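A minimal sketch of the leaf-level step just described, reusing the BTreeNode class from earlier. Only one split is shown; absorbing the promoted key into the parent (and splitting the parent in turn) is omitted. For an order-m tree, `max_keys` would be m-1.

```python
import bisect

def insert_and_split(leaf, key, max_keys):
    """Insert `key` into a leaf in sorted order; if the leaf overflows,
    split it and return (promoted_key, new_right_node), else (None, None)."""
    bisect.insort(leaf.keys, key)              # keep keys in ascending order
    if len(leaf.keys) <= max_keys:
        return None, None                      # no overflow: insertion is finished
    mid = len(leaf.keys) // 2
    promoted = leaf.keys[mid]                  # middle key moves up to the parent
    right = BTreeNode(keys=leaf.keys[mid + 1:])
    leaf.keys = leaf.keys[:mid]                # left half stays in the original node
    return promoted, right                     # the parent absorbs `promoted`; if it
                                               # overflows too, the split repeats upward
```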

  21. B-Trees • Deletions from B-Trees • Deletions also must be done from the leaves. • Simple deletion: remove the key from the leaf; if at least ⌈m/2⌉ - 1 keys remain in the leaf, nothing more needs to be done. • The removal of keys from the leaves can occur under two circumstances: - when the key actually exists in a leaf of the tree, and - when the key exists in an internal node and must be moved to a leaf by determining which leaf position contains the key closest to the one to be removed.

  22. B-Tree deletion • If the key is in an internal node, locate its in-order successor and replace the key with it, so that the actual removal happens at a leaf. • If the leaf node is in a legal state (minimum capacity not violated), then we are finished. • If some node is in an illegal state, then: • Redistribute: a sibling node (a child of the same parent node) can transfer one of its keys, through the parent, to the current node. • Concatenate: the sibling does not have an extra key to share. In that case both nodes are merged into a single node (together with a key from the parent) and pointers are updated accordingly. The process continues until the parent node is in a legal state or until the root node is reached.
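A hedged sketch of the redistribute/concatenate choice for a leaf that has fallen below its minimum after a deletion. Only the left-sibling case is shown, child pointers of internal nodes are ignored, and the names are illustrative.

```python
import math

def fix_underflow(parent, idx, m):
    """Repair parent.children[idx] after a deletion left it below ceil(m/2)-1 keys."""
    node = parent.children[idx]
    min_keys = math.ceil(m / 2) - 1
    if len(node.keys) >= min_keys or idx == 0:
        return                                           # legal, or no left sibling in this sketch
    left = parent.children[idx - 1]
    if len(left.keys) > min_keys:
        # Redistribute: rotate a key from the richer sibling through the parent.
        node.keys.insert(0, parent.keys[idx - 1])
        parent.keys[idx - 1] = left.keys.pop()
    else:
        # Concatenate: merge the sibling, the separating parent key, and the node.
        left.keys += [parent.keys.pop(idx - 1)] + node.keys
        parent.children.pop(idx)
        # The parent may now underflow itself; the same repair repeats one level up,
        # until a legal node or the root is reached.
```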

  23. B-Trees • Efficiency of B-Trees • Height: as in a binary tree, the cost of a search is determined by the height of the tree. • In a binary tree, the height is related to the number of nodes n through log2 n. • Here, the height of a B-Tree is related through logm, where m is the order of the tree: height ≈ logm n + 1
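To make the formula concrete, a small worked example with assumed figures (order m = 100, n = 1,000,000 keys), compared against a binary tree over the same keys:

```latex
% Assumed figures for illustration only.
h_{\text{B-Tree}} \approx \log_{m} n + 1 = \log_{100} 10^{6} + 1 = 3 + 1 = 4
\qquad\text{vs.}\qquad
h_{\text{binary}} \approx \log_{2} 10^{6} \approx 20
```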

  24. Summary • A B-Tree is a tree in which each node may have multiple children and multiple keys. • It is specially designed to allow efficient searching for keys and is much more compact than a binary search tree. • B-Tree insertion may involve splitting a node. • Insertion is easier than the deletion operation.

  25. B and B+ Tree Indexes • Leaf pages contain data entries, and are chained (prev & next). • Non-leaf pages have index entries; only used to direct searches. (Figure: non-leaf pages above leaf pages sorted by search key; an index entry has the form P0, K1, P1, K2, P2, …, Km, Pm)

  26. Example B+ Tree • Note how data entries in the leaf level are sorted. • Find 28*? 29*? All > 15* and < 30*? • Insert/delete: Find data entry in leaf, then change it. Need to adjust parent sometimes. • And change sometimes propagates to the root! (Figure: example B+ tree with root key 17; entries <= 17 go to the left subtree, with keys 5 and 13, entries > 17 to the right subtree, with keys 27 and 30; the leaf level holds the sorted data entries 2*, 3*, 5*, 7*, 8*, 14*, 16*, 22*, 24*, 27*, 29*, 33*, 34*, 38*, 39*)
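A hedged sketch of how a range query such as "all > 15* and < 30*" is answered: descend from the root to the leaf covering the low end, then follow the prev/next chain across leaf pages. The node fields (is_leaf, keys, children, entries, next) are assumptions for illustration, and the bounds are treated as inclusive for simplicity.

```python
def bplus_range_search(root, low, high):
    """Return the data entries with low <= key <= high from a B+ tree."""
    node = root
    while not node.is_leaf:                     # non-leaf pages only direct the search
        i = 0
        while i < len(node.keys) and low > node.keys[i]:
            i += 1
        node = node.children[i]                 # child covering the low end of the range
    results = []
    while node is not None:                     # leaf pages are sorted and chained
        for key, rid in node.entries:           # data entries k* on this leaf page
            if key > high:
                return results                  # past the range: stop
            if key >= low:
                results.append((key, rid))
        node = node.next                        # follow the chain to the next leaf page
    return results
```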

  27. Hash-Based Indexes • Approaches to Search: • 1. Sequential and list methods (lists, tables, arrays). • 2. Direct access by key value (hashing). • 3. Tree indexing methods.

  28. Definition Hashing is the process of mapping a key value to a position in a table. A hash function maps key values to positions. A hash table is an array that holds the records. Searching in a hash table can be done in O(1) regardless of the hash table size.

  29. Applications of Hashing • Compilers use hash tables to keep track of declared variables • A hash table can be used for on-line spelling checkers — if misspelling detection (rather than correction) is important, an entire dictionary can be hashed and words checked in constant time • Game playing programs use hash tables to store seen positions, thereby saving computation time if the position is encountered again • Hash functions can be used to quickly check for inequality — if two elements hash to different values they must be different • Storing sparse data

  30. Hash-Based Indexes • Index is a collection of buckets. • Bucket = primary page plus zero or more overflow pages. • Buckets contain data entries. • Hashing function h: h(r) = bucket in which (data entry for) record r belongs. h looks at the search key fields of r. • No need for “index entries” in this scheme.
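A minimal sketch of this scheme: a fixed number of primary buckets, each a small page of data entries plus a chain of overflow pages. All names and the tiny page capacity are illustrative.

```python
PAGE_CAPACITY = 4                         # entries per page, kept tiny for illustration

class Page:
    def __init__(self):
        self.entries = []                 # data entries <key, rid> on this page
        self.overflow = None              # next overflow page in the chain, if any

class HashIndex:
    def __init__(self, n_buckets=8):
        self.buckets = [Page() for _ in range(n_buckets)]

    def _h(self, key):
        return hash(key) % len(self.buckets)      # h(r): search key -> bucket number

    def insert(self, key, rid):
        page = self.buckets[self._h(key)]
        while len(page.entries) >= PAGE_CAPACITY: # walk to a page with room,
            if page.overflow is None:             # adding an overflow page if needed
                page.overflow = Page()
            page = page.overflow
        page.entries.append((key, rid))

    def search(self, key):
        page, rids = self.buckets[self._h(key)], []
        while page is not None:                   # primary page plus its overflow chain
            rids += [rid for k, rid in page.entries if k == key]
            page = page.overflow
        return rids
```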

  31. Alternatives for Data Entry k* in Index • In a data entry k* we can store: • Data record with key value k, or • <k, rid of data record with search key value k>, or • <k, list of rids of data records with search key k> • Choice of alternative for data entries depends on the indexing technique used to locate data entries with a given key value k. • Examples of indexing techniques: B+ trees, hash-based structures • Typically, index contains auxiliary information that directs searches to the desired data entries

  32. Alternatives for Data Entries (Contd.) • Alternative 1: Data record with key value k • If this is used, index structure is a file organization for data records. • At most one index on a given collection of data records can use Alternative 1. (Otherwise, large data records are duplicated, leading to redundant storage and potential inconsistency.) • If data records are very large, # of pages containing data entries is high. Implies size of auxiliary information in the index is also large, typically.

  33. Alternatives for Data Entries • Alternative 2 <k, rid of data record with search key value k>: • Data entries typically much smaller than data records. So, better than Alternative 1 with large data records, especially if search keys are small. • Alternative 3 <k, list of rids of data records with search key k> • Alternative 3 more compact than Alternative 2, but leads to variable sized data entries even if search keys are of fixed length.
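For concreteness, a sketch of what the three alternatives might look like as record types; the field names and the rid layout are illustrative assumptions, not from the slides.

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

Rid = Tuple[int, int]        # (page id, slot id) locating a record on disk

@dataclass
class Alt1Entry:             # Alternative 1: the data record itself is the entry
    key: Any
    record: dict             # the full data record, stored inside the index

@dataclass
class Alt2Entry:             # Alternative 2: <k, rid of the matching record>
    key: Any
    rid: Rid

@dataclass
class Alt3Entry:             # Alternative 3: <k, list of rids>; more compact, but
    key: Any                 # entry size varies with the number of matching records
    rids: List[Rid]
```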

  34. Index Classification • Primary vs. secondary: If search key contains primary key, then called primary index. • Unique index: Search key contains a candidate key. • Clustered vs. un-clustered: If order of data records is the same as, or `close to’, order of data entries, then called clustered index. • A file can be clustered on at most one search key. • Cost of retrieving data records through index varies greatly based on whether index is clustered or not!

  35. Clustered vs. Unclustered Index • Suppose that Alternative (2) is used for data entries, and that the data records are stored in a Heap file. • To build a clustered index, first sort the Heap file (with some free space on each page for future inserts). • Overflow pages may be needed for inserts. (Thus, order of data recs is `close to’, but not identical to, the sort order.) (Figure: unclustered vs. clustered index; index entries direct the search to data entries in the index file, which point to data records in the data file)

  36. Clustering Definition • Clustering is the unsupervised classification of patterns (observations, data items or feature vectors) into groups (clusters). – A. K. Jain, M. N. Murty, P. J. Flynn, Data Clustering: A Review (Figure: clustering a collection of points)

  37. Clustering Properties • Linear increase in processing time with increase in size of dataset (scalability). • Ability to detect clusters of different shapes and densities. • Minimal input parameters. • Robust with regard to noise. • Insensitive to data input order. • Extensible to higher dimensions. Osmar R. Zaïane, Andrew Foss, Chi-Hoon Lee, Weinan Wang, “On Data Clustering Analysis: Scalability, Constraints and Validation”, Advances in Knowledge Discovery and Data Mining, Springer-Verlag, 2002.

  38. Clusters t7.10k dataset (9 visible clusters, n = 10,000)

  39. Clustering results on the t7.10k dataset (Figure: detected clusters labelled A–H) Osmar R. Zaïane, Andrew Foss, Chi-Hoon Lee, Weinan Wang, “On Data Clustering Analysis: Scalability, Constraints and Validation”

  40. Example: Indexing & Searching One of the World's Largest Compound Databases • http://accelrys.com/resource-center/case-studies/pharmacopeia-database.html • The Data Challenge • Pharmacopeia's corporate compound collection contains over seven million molecules. These are typically small drug-like molecules: organic compounds, of which 99% fall within a molecular weight range of 250 to 750. A typical biotechnology company has a database of only hundreds of thousands of compounds, while major pharmaceutical companies may have collections approaching the same order of magnitude as Pharmacopeia's. • A Single, Integrated, Open System • Pharmacopeia has tested the DS Accord Chemistry Cartridge for use in data mining and subsequent analysis to assess novel libraries of compounds that are planned for synthesis. Creating such a library can take over a man-year. To enable such library construction and analysis, Accord users will need to routinely conduct sophisticated and varied searches on the multi-million compound database. Pharmacopeia's developers took just a couple of hours to build a prototype of a customized client user interface using Oracle Forms and PL/SQL. This enabled substructure searching within their internal compound collection.

  41. Cost Model for Our Analysis We ignore CPU costs, for simplicity: • B: The number of data pages • R: Number of records per page • D: (Average) time to read or write disk page • Measuring number of page I/O’s ignores gains of pre-fetching a sequence of pages; thus, even I/O cost is only approximated. • Average-case analysis; based on several simplistic assumptions. • Good enough to show the overall trends!

  42. Comparing File Organizations • Heap files (random order; insert at eof) • Sorted files, sorted on <age, sal> • Clustered B+ tree file, Alternative (1), search key <age, sal> • Heap file with unclustered B+ tree index on search key <age, sal> • Heap file with unclustered hash index on search key <age, sal>

  43. Operations to Compare • Scan: Fetch all records from disk • Equality search • Range selection • Insert a record • Delete a record

  44. Assumptions in Our Analysis • Heap Files: • Equality selection on key; exactly one match. • Sorted Files: • Files compacted after deletions. • Indexes: • Alt (2), (3): data entry size = 10% size of record • Hash: No overflow buckets. • 80% page occupancy => File size = 1.25 data size • Tree: 67% occupancy (this is typical). • Implies file size = 1.5 data size

  45. Assumptions (contd.) • Scans: • Leaf levels of a tree-index are chained. • Index data-entries plus actual file scanned for unclustered indexes. • Range searches: • We use tree indexes to restrict the set of data records fetched, but ignore hash indexes.

  46. Cost of Operations • B: The number of data pages • R: Number of records per page • D: (Average) time to read or write disk page • Several assumptions underlie these (rough) estimates!

  47. Cost of Operations • Several assumptions underlie these (rough) estimates!
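The cost table itself is part of the slide images and does not appear in this transcript. As an assumption on my part, the sketch below reproduces the standard textbook formulas that slides like these usually summarize (the Ramakrishnan–Gehrke cost table), using the parameters above: F is the assumed fan-out of a tree index, and the 1.5, 0.15 and 0.125 factors follow from the 67%/80% occupancy and 10% data-entry-size assumptions stated earlier. Treat it as a rough guide, not the instructor's exact figures.

```python
from math import log

def io_costs(B, R, D, F=100, match_pages=1, match_recs=1):
    """Rough per-operation I/O costs for the five file organizations compared here.
    These formulas are an assumption, following the standard textbook cost table."""
    log2B = log(B, 2)
    logF = lambda x: log(x, F)            # height of a tree index spanning x pages
    return {
        # organization: (scan, equality, range, insert, delete)
        "heap":          (B * D, 0.5 * B * D, B * D, 2 * D, 0.5 * B * D + D),
        "sorted":        (B * D, D * log2B, D * (log2B + match_pages),
                          D * log2B + B * D, D * log2B + B * D),
        "clustered":     (1.5 * B * D, D * logF(1.5 * B),
                          D * (logF(1.5 * B) + match_pages),
                          D * (logF(1.5 * B) + 1), D * (logF(1.5 * B) + 1)),
        "unclust_tree":  (B * D * (R + 0.15), D * (1 + logF(0.15 * B)),
                          D * (logF(0.15 * B) + match_recs),   # one I/O per qualifying record
                          D * (3 + logF(0.15 * B)), D * (3 + logF(0.15 * B))),
        "unclust_hash":  (B * D * (R + 0.125), 2 * D, B * D, 4 * D, 4 * D),
    }

# Example: io_costs(B=1000, R=100, D=0.015) for a 1000-page file with 100 records/page.
```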

  48. Choice of Indexes • What indexes should we create? • Which relations should have indexes? What field(s) should be the search key? Should we build several indexes? • For each index, what kind of an index should it be? • Clustered? Hash/tree?

  49. Choice of Indexes (Contd.) • One approach: Consider the most important queries in turn. Consider the best plan using the current indexes, and see if a better plan is possible with an additional index. If so, create it. • Obviously, this implies that we must understand how a DBMS evaluates queries and creates query evaluation plans! • For now, we discuss simple 1-table queries. • Before creating an index, must also consider the impact on updates in the workload! • Trade-off: Indexes can make queries go faster, updates slower. Require disk space, too.
