
Lecture 17: Data Storage

Learn about external sorting methods for handling large data sets, including 2-way merge-sort and its cost analysis. Understand how data elements are represented, how records are stored in blocks and modified, and how index structures speed up lookups.


Presentation Transcript


  1. Lecture 17: Data Storage Friday, November 7, 2003

  2. Outline • External Sorting • Representing data elements (chapter 12) • Index structures (13.1, 13.2)

  3. Sorting • Problem: sort 1 GB of data with 1MB of RAM. • Where we need this: • Data requested in sorted order (ORDER BY) • Needed for grouping operations • First step in sort-merge join algorithm • Duplicate removal • Bulk loading of B+-tree indexes.

  4. 2-Way Merge-Sort: Requires 3 Buffers in RAM • Pass 1: Read a page, sort it, write it. • Pass 2, 3, …, etc.: merge two runs, write them [Figure: two runs of length L are read from disk into the INPUT 1 and INPUT 2 buffers, merged through the OUTPUT buffer, and written back to disk as runs of length 2L]

  5. Two-Way External Merge Sort • Assume block size is B = 4KB • Step 1 → runs of length L = 1MB • Step 2 → runs of length L = 2MB • Step 3 → runs of length L = 4MB • … • Step 10 → runs of length L = 1GB (why ?) Need 10 iterations over the disk data to sort 1GB
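
A quick back-of-the-envelope check of the "why ?" above: each two-way merge pass doubles the run length, so starting from 1MB runs it takes ceil(log2(1GB / 1MB)) = 10 steps. A minimal sketch in Python, with the sizes assumed from the slide:

import math

file_size = 1 * 1024**3       # 1 GB file
initial_run = 1 * 1024**2     # 1 MB initial runs (one memory-load each)

# Each two-way merge step doubles the run length.
steps = math.ceil(math.log2(file_size / initial_run))
print(steps)                  # 10: 1 MB * 2**10 = 1 GB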

  6. Can We Do Better ? • Hint: We have 1MB of main memory, but only used 12KB (3 buffers × 4KB)

  7. Cost Model for Our Analysis • B: Block size ( = 4KB) • M: Size of main memory ( = 1MB) • N: Number of records in the file • R: Size of one record

  8. External Merge-Sort • Phase one: load M bytes in memory, sort • Result: runs of length M bytes (= 1MB), i.e. M/R records each [Figure: the file is read from disk M bytes at a time into main memory, sorted, and written back to disk as a run]

  9. Phase Two • Merge M/B – 1 runs into a new run (≈ 250 runs) • Result: runs of length M × (M/B – 1) bytes (≈ 250MB) [Figure: M/B – 1 input buffers, one per run, and one output buffer in main memory; the merged output is written back to disk]

  10. Phase Three • Merge M/B – 1 runs into a new run • Result: runs of length M × (M/B – 1)² bytes (250 × 250 MB ≈ 62.5GB – larger than the file) [Figure: same buffer layout as in phase two] Need 3 iterations over the disk data to sort 1GB
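
The merge phases above keep one input buffer per run plus one output buffer. Below is a minimal sketch of such a multi-way merge, assuming each run is a text file of newline-terminated records and letting Python's heapq.merge stand in for the buffer machinery (the file names are hypothetical):

import heapq

def merge_runs(run_paths, out_path):
    # Merge several sorted runs (one record per line) into one sorted run.
    # heapq.merge keeps only the current record of each run in memory,
    # mirroring the M/B - 1 input buffers of the merge phase.
    inputs = [open(p) for p in run_paths]
    try:
        with open(out_path, "w") as out:
            for record in heapq.merge(*inputs):
                out.write(record)
    finally:
        for f in inputs:
            f.close()

# e.g. merge_runs(["run0.txt", "run1.txt", "run2.txt"], "merged.txt")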

  11. Cost of External Merge Sort • Number of passes: 1 + ⌈log_{M/B–1}(NR/M)⌉ • How much data can we sort with 10MB RAM? • 1 pass → 10MB data • 2 passes → 25GB data (M/B = 2500) • Can sort everything in 2 or 3 passes !
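
A small sketch of that cost formula, under the slide's assumptions (one pass to build runs of length M, then a fan-in of M/B – 1 per merge pass); the concrete numbers are only illustrative:

import math

def passes(file_bytes, mem_bytes, block_bytes):
    # 1 pass to build sorted runs of length M, then each merge pass
    # multiplies the run length by the fan-in M/B - 1.
    fan_in = mem_bytes // block_bytes - 1
    runs = math.ceil(file_bytes / mem_bytes)
    if runs <= 1:
        return 1
    return 1 + math.ceil(math.log(runs, fan_in))

M, B = 10 * 10**6, 4000            # 10 MB of RAM, 4 KB blocks
print(M * (M // B - 1))            # ~25 GB: the most we can sort in 2 passes
print(passes(1 * 10**9, M, B))     # 2: 1 GB needs two passes
print(passes(100 * 10**9, M, B))   # 3: even 100 GB needs only three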

  12. External Merge Sort • The xsort tool in the XML toolkit sorts using this algorithm • Can sort 1GB of XML data in about 8 minutes

  13. Representing Data Elements • Relational database elements: • A tuple is represented as a record • CREATE TABLE Product ( pid INT PRIMARY KEY, name CHAR(20), description VARCHAR(200), maker CHAR(10) REFERENCES Company(name) )

  14. Issues • Represent attributes inside the records • Represent the records inside the blocks

  15. Record Formats: Fixed Length • Information about field types is the same for all records in a file; stored in system catalogs. • Finding the i’th field does not require a scan of the record: its offset is computed from the field lengths. • Note the importance of schema information! [Figure: record with fields F1-F4 of lengths L1-L4; the address of F3 is B + L1 + L2, where B is the base address]
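
A minimal sketch of fixed-length field access, assuming a four-field record whose byte lengths come from the schema (the format string and values are made up for illustration); the point is that a field's offset is just the sum of the preceding lengths, as in Address = B + L1 + L2:

import struct

L = [4, 20, 8, 10]                      # L1..L4, taken from the (assumed) schema
fmt = "<i20sq10s"                       # e.g. pid, name, timestamp, maker
record = struct.pack(fmt, 42, b"SingleTouch", 1068163200, b"GizmoWorks")

# Field 3 starts at B + L1 + L2; no scan of the record is needed.
offset_F3 = sum(L[:2])                  # = 24, with base address B = 0 here
(f3,) = struct.unpack_from("<q", record, offset_F3)
print(offset_F3, f3)                    # 24 1068163200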

  16. Record Header [Figure: a header precedes fields F1-F4 (lengths L1-L4); it stores a pointer to the schema, the record length, and a timestamp] • Need the header because: • The schema may change • for a while new + old may coexist • Records from different relations may coexist

  17. Variable Length Records [Figure: a header holding the record length and other header information, followed by fields F1-F4] • Place the fixed fields first: F1 • Then the variable length fields: F2, F3, F4 • Null values take 2 bytes only • Sometimes they take 0 bytes (when at the end)
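
A toy sketch of such a variable-length record, assuming the layout above: the fixed field first, then the variable-length fields, with the record length and one end offset per variable-length field kept in the header (the field names and sizes are invented):

import struct

def pack_record(pid, name, description, maker):
    # Body: fixed-length pid first, then the variable-length fields.
    var_fields = [name.encode(), description.encode(), maker.encode()]
    body = struct.pack("<i", pid) + b"".join(var_fields)
    # Header: total record length, then the end offset of each
    # variable-length field (relative to the start of the body).
    offsets, pos = [], 4
    for f in var_fields:
        pos += len(f)
        offsets.append(pos)
    header = struct.pack("<4H", 8 + len(body), *offsets)
    return header + body

rec = pack_record(42, "SingleTouch", "a camera phone", "GizmoWorks")
print(struct.unpack_from("<4H", rec, 0))   # (47, 15, 29, 39)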

  18. Records With Repeating Fields [Figure: a header holding the record length and other header information, followed by fields F1-F3] • Needed e.g. in Object-Relational systems, or fancy representations of many-to-many relationships

  19. Storing Records in Blocks • Blocks have fixed size (typically 4KB – 8KB) [Figure: a block containing records R1-R4]

  20. Spanning Records Across Blocks • When records are very large • Or even medium size: saves space in blocks [Figure: two blocks, each with a block header; record R2 is split across them, stored partly in the first block after R1 and partly in the second block before R3]

  21. BLOB • Binary large objects • Supported by modern database systems • E.g. images, sounds, etc. • Storage: attempt to cluster blocks together • CLOB = character large object • Supports only restricted operations

  22. Modifications: Insertion • File is unsorted: add it to the end (easy) • File is sorted: • Is there space in the right block ? • Yes: we are lucky, store it there • Is there space in a neighboring block ? • Look 1-2 blocks to the left/right, shift records • If anything else fails, create overflow block
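
A toy, in-memory version of this insertion procedure, assuming each block is a small sorted list with a fixed capacity and overflow records are kept in a side table (a real system works with byte offsets on disk and may also try the left neighbor):

import bisect

def insert_sorted(blocks, overflow, capacity, value):
    # Locate the block whose key range should hold the new value.
    i = max(bisect.bisect([b[0] for b in blocks], value) - 1, 0)
    if len(blocks[i]) < capacity:                       # lucky: room in the block
        bisect.insort(blocks[i], value)
    elif i + 1 < len(blocks) and len(blocks[i + 1]) < capacity:
        # Insert, then shift this block's largest record into the right neighbor.
        bisect.insort(blocks[i], value)
        blocks[i + 1].insert(0, blocks[i].pop())
    else:                                               # last resort: overflow block
        overflow.setdefault(i, []).append(value)

blocks = [[5, 9, 12], [20, 25], [40, 41, 47]]
overflow = {}
insert_sorted(blocks, overflow, 3, 22)
print(blocks, overflow)   # 22 fits in the second block, no overflow needed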

  23. Overflow Blocks • After a while the file starts being dominated by overflow blocks: time to reorganize [Figure: blocks n-1, n, n+1 in sequence, with an overflow block chained to block n]

  24. Modifications: Deletions • Free space in block, shift records • May be able to eliminate an overflow block • Can never really eliminate the record, because others may point to it • Place a tombstone instead (a NULL record)

  25. Modifications: Updates • If the new record is shorter than the previous one: easy • If it is longer: need to shift records, create overflow blocks

  26. Physical Addresses • Each block and each record have a physical address that consists of: • The host • The disk • The cylinder number • The track number • The block within the track • For records: an offset in the block • sometimes this is in the block’s header

  27. Logical Addresses • Logical address: a string of bytes (10-16) • More flexible: can move blocks/records around • But need a translation table: logical address → physical address

  28. Main Memory Address • When the block is read in main memory, it receives a main memory address • Need another translation table

  29. Optimization: Pointer Swizzling • = the process of replacing a physical/logical pointer with a main memory pointer • Still need translation table, but subsequent references are faster

  30. Pointer Swizzling [Figure: Block 1 and Block 2 on disk; Block 1 is read into memory, and its pointer to Block 2 is shown both swizzled (pointing into memory) and unswizzled (still a disk address)]

  31. Pointer Swizzling • Automatic: when block is read in main memory, swizzle all pointers in the block • On demand: swizzle only when user requests • No swizzling: always use translation table

  32. Pointer Swizzling • When blocks return to disk: pointers need to be unswizzled • Danger: someone else may point to this block • Pinned blocks: don’t allow them to return to disk • Keep a list of references to this block
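
A minimal sketch of on-demand swizzling with pinning, assuming logical block ids and a caller-supplied read function (the class and method names are invented for illustration):

class BufferPool:
    def __init__(self, read_block):
        self.read_block = read_block   # function: block id -> block object
        self.table = {}                # translation table: block id -> in-memory block
        self.pins = {}                 # block id -> count of swizzled references

    def swizzle(self, block_id):
        # Replace a logical pointer with a direct in-memory reference,
        # pinning the block so it cannot return to disk meanwhile.
        if block_id not in self.table:
            self.table[block_id] = self.read_block(block_id)
        self.pins[block_id] = self.pins.get(block_id, 0) + 1
        return self.table[block_id]

    def unswizzle(self, block_id):
        # Once no swizzled references remain, the block may be written back.
        self.pins[block_id] -= 1
        if self.pins[block_id] == 0:
            del self.pins[block_id]

pool = BufferPool(read_block=lambda bid: {"id": bid, "data": "..."})
b2 = pool.swizzle(2)      # direct memory reference to block 2 (now pinned)
pool.unswizzle(2)         # block 2 may now be evicted / written back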

  33. Indexes • An index on a file speeds up selections on the search key fields for the index. • Any subset of the fields of a relation can be the search key for an index on the relation. • Search key is not the same as key (minimal set of fields that uniquely identify a record in a relation). • An index contains a collection of data entries, and supports efficient retrieval of all data entries with a given key value k.

  34. Index Classification • Primary/secondary • Clustered/unclustered • Dense/sparse • B+ tree / Hash table / …

  35. Primary Index • File is sorted on the index attribute • Dense index: sequence of (key,pointer) pairs

  36. Primary Index • Sparse index

  37. Primary Index with Duplicate Keys • Dense index:

  38. Primary Index with Duplicate Keys • Sparse index: pointer to lowest search key in each block [Figure: a search for 20 lands on the block whose lowest key is 20, but 20 may also appear at the end of the previous block, which must be searched too]

  39. Primary Index with Duplicate Keys • Better: pointer to lowest new search key in each block [Figure: a search for 20 goes straight to the block where 20 first appears; searches for 15 or 35 can likewise safely start from the block the index points to]
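
A small sketch of a lookup through such a sparse index, assuming it keeps the lowest new search key of each block (the keys and block numbers are made up):

import bisect

index_keys = [10, 20, 40, 60]     # lowest new search key in each block
index_blocks = [0, 1, 2, 3]       # the block each entry points to

def find_block(k):
    # Return the block where the search for key k should start.
    i = bisect.bisect_right(index_keys, k) - 1
    return index_blocks[max(i, 0)]

print(find_block(20))   # 1: 20 first appears in block 1
print(find_block(15))   # 0: a search for 15 starts in block 0
print(find_block(35))   # 1: a search for 35 starts in block 1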

  40. Secondary Indexes • To index other attributes than primary key • Always dense (why ?)

  41. Clustered/Unclustered • Primary indexes = usually clustered • Secondary indexes = usually unclustered

  42. Clustered vs. Unclustered Index [Figure: in a clustered index, the data entries in the index file point to data records stored in the same order as the entries; in an unclustered index, the data entries point to data records scattered throughout the data file]

  43. Secondary Indexes • Applications: • index other attributes than primary key • index unsorted files (heap files) • index clustered data

  44. Applications of Secondary Indexes • Clustered data: Company(name, city), Product(pid, maker) • Select pid From Company, Product Where name=maker and city=“Seattle” • Select city From Company, Product Where name=maker and pid=“p045” [Figure: products stored clustered by company: the products of company 1, then company 2, then company 3, with each company pointing to its group of products]

  45. Composite Search Keys Examples of composite key indexes using lexicographic order. • Composite search keys: search on a combination of fields. • Equality query: every field value is equal to a constant value. E.g., w.r.t. a <sal, age> index: age=20 and sal=75. • Range query: some field value is not a constant. E.g.: age=20; or age=20 and sal > 10. [Figure: data records sorted by name: (bob, 12, 10), (cal, 11, 80), (joe, 12, 20), (sue, 13, 75), with fields (name, age, sal). Data entries of the <age, sal> index, sorted lexicographically: <11,80>, <12,10>, <12,20>, <13,75>. Entries of the <age> index: 11, 12, 12, 13. Of the <sal, age> index: <10,12>, <20,12>, <75,13>, <80,11>. Of the <sal> index: 10, 20, 75, 80]
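
A small sketch of the figure's data entries and the two query types, assuming the records and the <age, sal> index described above (the range query uses age = 12 so that it matches this toy data):

# Data records from the figure (name, age, sal), stored sorted by name.
records = [("bob", 12, 10), ("cal", 11, 80), ("joe", 12, 20), ("sue", 13, 75)]

# Data entries of the <age, sal> index: lexicographic order on (age, sal).
age_sal = sorted((age, sal) for _, age, sal in records)
print(age_sal)                        # [(11, 80), (12, 10), (12, 20), (13, 75)]

# Equality query: every indexed field fixed, e.g. age = 12 and sal = 20.
print([e for e in age_sal if e == (12, 20)])

# Range query of the shape "age = 12 and sal > 10": the matches form one
# contiguous run of <age, sal> entries, so this index serves it well;
# a <sal, age> index would scatter the matches.
print([(a, s) for a, s in age_sal if a == 12 and s > 10])   # [(12, 20)]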
