Learn about physical database design strategies, index structures, and query optimization techniques to enhance simple query performance in relational databases. Understand how to choose storage structures, create indexes, and refine schemas based on workload analysis.
Physical Database Design I
Focus of this Lecture: Designing Databases for Simple Queries
• Simple queries := no joins, no complex aggregate functions
• Covers about 25% of Chapter 20
Part 6
• File Organizations
• Index Structures centering on B+-trees and Hashing
• External Sorting
• Physical Database Design I
• RAID and Buffer Management
• More on Disks
• Introduction to Query Optimization
• Implementing Joins
• Physical Database Design II
Physical Database Design for Relational Databases
• Select storage structures (determine how a particular relation is physically stored)
• Select index structures (to speed up certain queries)
• Select …
The goal is to minimize the runtime for a certain workload (e.g. a given set of queries).
Overview: Physical Database Design
• After ER design, schema refinement, and the definition of views, we have the conceptual and external schemas for our database.
• The next step is to choose storage structures and indexes, make clustering decisions, and refine the conceptual and external schemas (if necessary) to meet performance goals.
• We must begin by understanding the workload:
  • The most important queries and how often they arise.
  • The most important updates and how often they arise.
  • The desired performance for these queries and updates.
Decisions to Make
• How should relations be stored?
• What indexes should we create?
  • Which relations should have indexes? What field(s) should be the search key? Should we build several indexes?
• For each index, what kind of index should it be?
  • Clustered? Hash/tree? Dynamic/static? Dense/sparse?
• Should we make changes to the relational database schema?
  • Consider alternative normalized schemas? (Remember, there are many choices in decomposing into BCNF, etc.)
  • Should we "undo" some decomposition steps and settle for a lower normal form? (Denormalization.)
  • Horizontal partitioning, replication, views, ...
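To make these decisions concrete, here is a minimal SQL sketch of the kinds of statements involved. The Emp schema is borrowed from a later slide; the column types, index name, and clustering command are assumptions (clustering syntax differs per DBMS; the comment shows the PostgreSQL variant).

-- Emp schema as used on later slides; column types are assumed for illustration.
CREATE TABLE Emp (
  ssn  CHAR(9) PRIMARY KEY,
  dno  INTEGER,
  age  INTEGER,
  sal  DECIMAL(10,2)
);

-- An unclustered secondary index chosen to speed up lookups on dno.
CREATE INDEX emp_dno_idx ON Emp (dno);

-- Clustering is DBMS-specific; e.g. PostgreSQL physically reorders the heap
-- on demand with: CLUSTER Emp USING emp_dno_idx;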
Choice of Indexes
• One approach: consider the most important queries in turn. Consider the best plan using the current indexes, and see if a better plan is possible with an additional index. If so, create it.
• Before creating an index, we must also consider the impact on updates in the workload!
• Trade-off: indexes can make queries go faster, but updates slower. They require disk space, too.
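As a rough illustration of this query-by-query workflow, here is a sketch using PostgreSQL-style EXPLAIN; the query, the index name, and the literal value are hypothetical.

-- Step 1: inspect the current plan for an important query (likely a full scan).
EXPLAIN SELECT E.ssn FROM Emp E WHERE E.age = 25;

-- Step 2: create a candidate index for that query.
CREATE INDEX emp_age_idx ON Emp (age);

-- Step 3: re-examine the plan; keep the index only if the plan actually improves
-- and the extra maintenance cost on INSERT/UPDATE/DELETE is acceptable.
EXPLAIN SELECT E.ssn FROM Emp E WHERE E.age = 25;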
Issues in Index Selection
• Attributes mentioned in a WHERE clause are candidates for index search keys.
  • An exact-match condition suggests a hash index.
  • A range query suggests a tree index.
  • Clustering is especially useful for range queries, although it can also help equality queries in the presence of duplicates.
• Try to choose indexes that benefit as many queries as possible. Since only one index per relation can be clustered, choose it based on the important queries that would benefit the most from clustering.
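A brief sketch of the two rules of thumb above, using PostgreSQL-style index-method syntax; the index names are made up, and other systems express the same choice differently.

-- Exact-match predicate such as WHERE age = 25: a hash index is a natural candidate.
CREATE INDEX emp_age_hash ON Emp USING hash (age);

-- Range predicate such as WHERE sal BETWEEN 3000 AND 5000: a B+-tree index fits better.
CREATE INDEX emp_sal_btree ON Emp USING btree (sal);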
Multi-Attribute Index Keys
• To retrieve Emp records with age=30 AND sal=4000, an index on <age,sal> would be better than an index on age or an index on sal alone.
• Such indexes are also called composite or concatenated indexes.
• The choice of index key is orthogonal to clustering, etc.
• Composite indexes are larger and are updated more often.
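For concreteness, a minimal sketch of such a composite index; the index name and literal values are illustrative.

-- Composite (concatenated) index on <age, sal>.
CREATE INDEX emp_age_sal_idx ON Emp (age, sal);

-- Both equality conditions can be matched by a single lookup in this index.
SELECT * FROM Emp E WHERE E.age = 30 AND E.sal = 4000;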
Index-Only Plans
• A number of queries can be answered without retrieving any tuples from the relations themselves!
Relation: Emp(ssn, dno, age, sal, …)

Query 1: Give the number of employees for each department.
  Index: <E.dno>
  SELECT E.dno, COUNT(*)
  FROM Emp E
  GROUP BY E.dno

Query 2: Give the average salary of 25-year-olds in the 3000-5000 salary range.
  Index: <E.age, E.sal>
  SELECT AVG(E.sal)
  FROM Emp E
  WHERE E.age = 25 AND E.sal BETWEEN 3000 AND 5000
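The corresponding index definitions might look as follows. This is only a sketch: the index names are made up, and whether the optimizer actually picks an index-only scan depends on the DBMS.

-- Covers Query 1: dno alone is enough to compute the per-department counts.
CREATE INDEX emp_dno_only_idx ON Emp (dno);

-- Covers Query 2: a B+-tree on (age, sal) contains every attribute the query touches.
CREATE INDEX emp_age_sal_cov_idx ON Emp (age, sal);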
Using Index Structures I
Assume a relation student(ssn, name, age, gpa, …) that contains 100,000 tuples stored in 1,000 blocks (100 tuples fit into one block) using a heap file organization. Additionally, an index on the age attribute (an integer field) has been created that takes 80 blocks of storage, and an index on gpa (a real number) has been created that takes 150 blocks of storage. Both index structures are implemented using static hashing, and you can assume that there are no overflow pages. How many block accesses does the best implementation of each of the following queries take (use an index only if it helps)? Give reasons for your answers!
Using Index Structures I (answers)
Remark: "Index on X" means that the attributes in X are used as the hash key.

Q1) Give the age of all students named "Liu" (assume there are 23 Lius in the database). [2]
    1,000 block reads (the student relation is scanned sequentially; there is no index on name).
Q2) Find all students of age 46 (assume there are 37 students of that age). [2]
    1 (index) + 37 (tuple blocks) = 38 block reads.
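Written as SQL, Q1 and Q2 look as follows; the cost comments restate the block-access counts from the answers above (table and attribute names come from the exercise).

-- Q1: no index on name exists, so the best plan scans the heap file: 1000 block reads.
SELECT age FROM student WHERE name = 'Liu';

-- Q2: the static hash index on age finds the bucket in 1 block read; in the worst case the
-- 37 matching tuples sit in 37 different heap blocks: 1 + 37 = 38 block reads.
SELECT * FROM student WHERE age = 46;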
Using Index Structures II
Q3) Find the student with the highest GPA (assume there is a single "best" student in the database). [3]
    150 (index) + 1 (tuple block) = 151 block reads.
Q4) Give the ssn of all students whose gpa is between 3.4 and 3.6 (assume that 500 students match this condition). [2]
    150 (index) + 500 (tuple blocks) = 650 block reads.
Q5) Delete all students whose age is equal to 53 (there are 5 students of that age). [6]
    Finding the tuples to be deleted: 1 (index) + 5 (tuple blocks) = 6 block reads
    Updating the tuples: 5 block writes
    Updating the age index: 1 index-block write
    Updating the gpa index: 5 index-block writes
    Total: 6 block reads and 11 block writes
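The same three operations as SQL, with the block-access reasoning from the answers repeated as comments; the MAX subquery for Q3 is just one way to phrase "highest GPA".

-- Q3: a hash index cannot locate the maximum directly, so the whole gpa index (150 blocks)
-- is scanned and the single best student is fetched with 1 more read: 150 + 1 = 151.
SELECT * FROM student WHERE gpa = (SELECT MAX(gpa) FROM student);

-- Q4: a hash index cannot answer a range predicate either; again the full index is scanned
-- (150 blocks) and the 500 matching tuples are fetched: 150 + 500 = 650.
SELECT ssn FROM student WHERE gpa BETWEEN 3.4 AND 3.6;

-- Q5: locate the victims via the age index (1 + 5 reads), rewrite the 5 heap blocks, and
-- maintain both indexes (1 age-index write, 5 gpa-index writes).
DELETE FROM student WHERE age = 53;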
Selecting a Composite Index Structure
Another design problem: Assume we have a relation R(A,B,C,D) with 1,000,000 tuples distributed over 1,000 blocks. Static hashing is used to implement the index structures (assume no overflow pages and that blocks are 100% filled), and index pointers and the attributes A, B, C, D all require the same amount of storage. Each A value occurs 100 times and each B value occurs 2,000 times in the database. Assume the following query is given:
    SELECT D FROM R WHERE A=value AND B=value   (returns 20 tuples)
Solutions:
• Index on B: does not help.
• Index on A: cost 1 + 100.
• Index on A,B: cost 1 + 20.
• Index on A,B,D: index size = 1,000 blocks; an index-only scan does not help (hashed on A,B).
• Index on A and index on B: compute the block pointers from each index; there are 2,000 pointers in the B index and 100 pointers in the A index. Cost: 1 (finding pointers in index A) + 1 (finding pointers in index B) + 1? (cost of computing the intersection of the index pointers) + 20 (cost of accessing the tuples of the relation) = 23.
  Remark: the cost would be higher if the number of index pointers to be merged were larger.
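A sketch of the winning alternative, the index on A,B, in SQL. The literal values are placeholders, the index name is made up, and a plain CREATE INDEX is shown only as a stand-in: the exercise assumes a static hash index on (A,B), whose exact syntax is DBMS-specific. The cost comment restates the 1 + 20 figure from above.

-- Composite index on (A, B): one block read locates the bucket with the 20 matching
-- record pointers, and at most 20 further reads fetch the tuples: 1 + 20 = 21 block accesses.
CREATE INDEX r_ab_idx ON R (A, B);

SELECT D FROM R WHERE A = 10 AND B = 20;   -- placeholder literals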
Summary
• Indexes must be chosen to speed up important queries (and perhaps some updates!).
• Index maintenance adds overhead on updates to key fields.
• Choose indexes that can help many queries, if possible.
• Build indexes to support index-only strategies.
• Clustering is an important decision; only one index on a given relation can be clustered!
• The order of fields in a composite index key can be important.
• Static indexes may have to be rebuilt periodically.
• Statistics have to be updated periodically.