
Indexing Techniques for Multimedia Databases




  1. Indexing Techniques for Multimedia Databases Multimedia Similarity Search Structure Image Indexing Video Indexing

  2. Traditional DBMS • Designed to manage one-dimensional datasets consisting of simple data types, such as strings and numbers • Limited kinds of queries: exact match, partial match, and range queries • Well-understood indexing methods: B-trees, hashing

  3. Characteristics of Multimedia Queries • We normally retrieve a few records from a traditional DBMS through the specification of exact queries based on the notion of “equality”. • The types of queries expected in an image/video DBMS are relatively vague or fuzzy, and are based on the notion of “similarity”. • The indexing structure should be able to satisfy similarity-based queries for a wide range of similarity measures.

  4. Content-Based Retrieval • It is necessary to extract the features which are characteristic of the image and index the image on these features. Examples: shape descriptions, texture properties. • Typically there are a few different quantitative measures which describe the various aspects of each feature. Example: The texture attribute of an image can be modeled as a 3-dimensional vector with measures of directionality, contrast, and coarseness.

  5. Introduction • Multimedia applications require support for multidimensional datasets • E.g., a 256-dimensional feature vector. • That implies • Specialized kinds of queries • New indexing approaches. Two choices: • Map n-dimensional data to a single dimension and use traditional indexing structures (B-trees) • Develop specialized indexing structures

  6. Low-Dimensional Indexing Applications • Spatial Databases (GIS, CAD/CAM) • Number of dimensions: 2-4 • Spatial queries. For example: • Which objects intersect a given 2D or 3D rectangle • Which objects intersect a given object • Specialized indexing structures • quad-tree, BSP-tree, K-D-B-tree, R-tree, R+-tree, R*-tree, X-tree, …

  7. High-Dimensional (HD) Indexing Applications • Multimedia databases (images, sounds, movies) • Map each multimedia object to an n-dimensional point called a feature vector • Number of dimensions: typically 256 - 1000 • Indexing: • Actually index only the feature vectors • Data structures used: • same as for spatial databases (R-trees, X-trees) • or, structures tailored specifically to index feature vectors (TV-tree)

  8. HD Considerations (1) • Main problem: • In general there is no total ordering of d-dimensional objects that preserves spatial proximity • Data comes in two forms • N-dimensional points • N-dimensional objects extended in space • Objects can have rather complex shapes (extents) • Typically we abstract from the actual form and index simpler shapes, such as Minimum Bounding Boxes (MBB) or n-dimensional hyperspheres

  9. HD Considerations (2) • “Dimensionality curse” • As the number of dimensions increases • performance tends to degrade (often exponentially) • Indexing structures become inefficient for certain kinds of queries • Performance is often CPU-bound, not just I/O-bound as in traditional DBMS

  10. HD Queries Overview • No standard algebra or query language • The set of operators strongly depends on application domain • Queries are usually expressed by an extension of SQL (e.g. abstract data types) • Although there are no standards, some queries are common

  11. Multiattribute and Spatial Indexing of Multimedia Objects • Spatial Databases: Queries involve regions that are represented as multidimensional objects. Example: A rectangle in a 2-dimensional space involves four values: two points, with two coordinate values for each point. Access methods that index on multidimensional keys yield better performance for spatial queries. • Multimedia Databases: Multimedia objects typically have several attributes that characterize them. Example: Attributes of an image include coarseness, shape, color, etc. • Multimedia databases are also good candidates for multikey search structures.

  12. Measure of Similarity A suitable measure of similarity between an image feature vector F and query vector Q is the weighted metric D(F, Q) = (F − Q)ᵀ A (F − Q), where A is an n×n matrix which can be used to specify suitable weighting measures.

  13. Similarity Based on Euclidean Distance Example: F1 = (3, 4, 6)ᵀ, F2 = (2, 4, 7)ᵀ, F3 = (3, 4, 7)ᵀ, Q = (2, 4, 6)ᵀ, with A = I (the identity matrix):
D(F1, Q) = [1 0 0] · I · [1 0 0]ᵀ = 1
D(F2, Q) = [0 0 1] · I · [0 0 1]ᵀ = 1
D(F3, Q) = [1 0 1] · I · [1 0 1]ᵀ = 2
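The distances on this slide can be checked with a small sketch in plain Python (the helper name `weighted_dist` is chosen here, not from the slides; note the metric is a squared distance):

```python
def weighted_dist(f, q, a):
    """D(F, Q) = (F - Q)^T A (F - Q), for an n x n weight matrix A."""
    d = [fi - qi for fi, qi in zip(f, q)]
    # compute A (F - Q), then the dot product with (F - Q)
    ad = [sum(a[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return sum(di * adi for di, adi in zip(d, ad))

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity: plain (squared) Euclidean distance
Q = [2, 4, 6]
d1 = weighted_dist([3, 4, 6], Q, I3)  # 1
d2 = weighted_dist([2, 4, 7], Q, I3)  # 1
d3 = weighted_dist([3, 4, 7], Q, I3)  # 2
```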

  14. Similarity Based on Euclidean Distance (cont.) [Figure: F1, F2, and F3 plotted around Q in the (Feature 1, Feature 2) plane.] Points which lie at the same distance from the query point are all equally similar, e.g., F1 and F2.

  15. Similarity Based on Weighted Euclidean Distance D(F, Q) = (F − Q)ᵀ A (F − Q), where A is a diagonal matrix. Example: F1 = (4, 5, 7)ᵀ, F2 = (3, 5, 8)ᵀ, Q = (3, 5, 7)ᵀ, A = diag(1, 1, 2):
D(F1, Q) = [1 0 0] · A · [1 0 0]ᵀ = 1
D(F2, Q) = [0 0 1] · A · [0 0 1]ᵀ = 2
D(F1, Q) < D(F2, Q), so F1 is more similar to Q.

  16. How to determine the weights? The variance of the individual feature measures can be used as their weights: A = diag(S1², S2², ..., Sn²), where Si² is the variance of the i-th feature measure. Rationale: A feature with a larger variance is more discriminating.
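As a sketch of this idea (the helper name `variance_weights` is invented here), the diagonal weight matrix can be built from the per-feature variances of a set of feature vectors:

```python
from statistics import pvariance

def variance_weights(vectors):
    """Diagonal matrix A with A[i][i] = variance of the i-th feature measure."""
    n = len(vectors[0])
    var = [pvariance([v[i] for v in vectors]) for i in range(n)]
    return [[var[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

# Feature 0 varies a lot, feature 1 not at all, so feature 0 gets a larger weight.
A = variance_weights([(1, 5), (9, 5), (5, 5)])
```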

  17. Query Types Querying in image DBMS is envisioned to be iterative in nature: • Vague queries: Queries at the earlier stage can be very “loose”. Example: Retrieve images containing textures similar to this sample. • K-nearest-neighbor queries: The user specifies the number of close matches to the given query point. Example: Retrieve 10 images containing textures directionally similar to this sample. • Range queries: An interval is given for each dimension of the feature space and all the records which fall inside this hypercube are retrieved. [Figure: three panels around a query point Q — a hypercube (range query), a large radius r (vague query), and a small radius r (3-nearest-neighbor query).]
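The two precise query types can be sketched brute-force in a few lines of Python (function names are mine, not the deck's; an index structure would avoid the full scan):

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def range_query(points, lo, hi):
    """All points inside the hypercube [lo, hi] (one interval per dimension)."""
    return [p for p in points
            if all(l <= x <= h for x, l, h in zip(p, lo, hi))]

def knn_query(points, q, k):
    """The k closest matches to the query point q."""
    return sorted(points, key=lambda p: dist(p, q))[:k]

pts = [(1, 1), (2, 2), (5, 5), (6, 1)]
box = range_query(pts, (0, 0), (3, 3))
top3 = knn_query(pts, (0, 0), 3)
```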

  18. Indexing Multimedia Objects • Can’t we index multiple features using a B+-tree? • A B+-tree defines a linear order • Similar objects (e.g., O1 and O2) can be far apart in the indexing order • Why multidimensional indexing? • A multidimensional index defines a “spatial order” • Conceptually similar objects are spatially near each other in the indexing order (e.g., O1 and O2) [Figure: objects O1 and O2 plotted close together in the (Feature X, Feature Y) plane.]

  19. Some Multidimensional Search Structures • Space Filling Curves • k-d Trees • Multidimensional Tries • Grid Files • Point-Quad Trees • R-trees, R*-trees, TV-trees, SS-trees • D-trees • VA-files

  20. Space Filling Curves • Assume that each dimension is represented by a fixed bit width number • Partition the universe with a grid • Label each grid cell with a unique number called the curve value • For points, store that number in a traditional one-dimensional index • Objects can be handled through decomposition into multiple cells [Figure: Z-ordering curve with 2 bits per dimension.]
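A minimal sketch of computing the curve value for the Z-ordering with 2 bits per dimension (bit interleaving; the helper name `z_value` and the bit convention are mine):

```python
def z_value(x, y, bits=2):
    """Interleave the bits of x and y into a single curve value
    (y supplies the higher bit of each pair, an arbitrary convention)."""
    v = 0
    for i in range(bits):
        v |= ((x >> i) & 1) << (2 * i)       # bit i of x -> position 2i
        v |= ((y >> i) & 1) << (2 * i + 1)   # bit i of y -> position 2i + 1
    return v

# with 2 bits per dimension, the 4x4 grid maps bijectively onto 0..15
cells = {(x, y): z_value(x, y) for x in range(4) for y in range(4)}
```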

  21. k-d Trees • A k-d tree is a multidimensional binary search tree. • Each node consists of a “record” and two pointers. The pointers are either null or point to another node. • Nodes have levels, and each level of the tree discriminates on one attribute. • The partitioning of the space with respect to the various attributes alternates among the attributes of the n-dimensional search space. Example: a 2-d tree built from the input sequence A(65, 50), B(60, 70), C(70, 60), D(75, 25), E(50, 90), F(90, 65), G(10, 30), H(80, 85), I(95, 75), with levels discriminating on X, Y, X, Y, ... [Figure: the resulting 2-d tree rooted at A.]

  22. k-d Tree: Search Algorithm • Notation: Disc(L): the discriminator at L’s level; KA(L): the A-attribute value of L; Low(L): the left child of L; High(L): the right child of L. • Algorithm: Search for P(K1, ..., Kn)
Q := Root; /* Q will be used to navigate the tree */
While NOT DONE do the following:
if Ki(P) = Ki(Q) for i = 1, ..., n, then we have located the node and we are DONE;
otherwise, let A = Disc(Q): if KA(P) < KA(Q) then Q := Low(Q) else Q := High(Q).
• Performance: O(log N) on average, where N is the number of records.
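The insertion and search loop can be sketched as follows, using the example points from slide 21 (a minimal sketch; class and function names are mine, and ties are sent to the High side):

```python
class Node:
    def __init__(self, point):
        self.point, self.low, self.high = point, None, None

def insert(root, p, depth=0):
    """Insert point p; the discriminator cycles through the attributes by level."""
    if root is None:
        return Node(p)
    a = depth % len(p)
    if p[a] < root.point[a]:
        root.low = insert(root.low, p, depth + 1)
    else:
        root.high = insert(root.high, p, depth + 1)
    return root

def search(root, p):
    """Follow the slide's loop: compare only on the discriminator of each level."""
    q, depth = root, 0
    while q is not None:
        if q.point == p:
            return True
        a = depth % len(p)
        q = q.low if p[a] < q.point[a] else q.high
        depth += 1
    return False

pts = [(65, 50), (60, 70), (70, 60), (75, 25), (50, 90),
       (90, 65), (10, 30), (80, 85), (95, 75)]
root = None
for pt in pts:
    root = insert(root, pt)
```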

  23. Multidimensional Tries • Multidimensional tries, or k-d tries, are similar to k-d trees except that they divide the embedding space. • Each split evenly divides a region. Example: construction of a 2-d trie by inserting A(65, 50), B(60, 70), C(70, 60), and D(75, 25); each insertion that causes a conflict splits the overflowing region in half (X ≤ 50 / X > 50, Y ≤ 50 / Y > 50, X ≤ 75 / X > 75, ...) until the points fall into separate regions. [Figure: the partitioning of the space and the corresponding trie after each insertion.]

  24. Multidimensional Tries: Using Buckets Disadvantage: The maximum level of decomposition depends on the minimum separation between two points. A solution: Split a region only if it contains more than p points.

  25. Grid Files • A grid directory partitions the universe with linear scales on each dimension; each directory entry points to a data bucket. Split strategy: the partitioning is done with only one hyperplane, but the split extends to all the regions in the splitting direction. 1. The directory is quite sparse. 2. Many adjacent directory entries may point to the same data bucket. 3. For partial-match and range queries, many directory entries, but only a few data blocks, may have to be scanned. [Figure: a 4 × 4 grid directory over [0, 100] × [0, 100] with data buckets A–M; adjacent entries J, J and K, K share buckets.]

  26. Point-Quad Trees • Each node of a k-dimensional point-quad tree partitions the object space into 2^k quadrants. • The partitioning is performed along all search dimensions and is data dependent, like k-d trees. Example: points A(50, 50), B(75, 80), C(90, 65), D(35, 85), E(25, 25). [Figure: the partitioning of the space and the quad tree rooted at A, with D as A's NW child, B as A's NE child, E as A's SW child, and C as B's SE child.] • To insert P(55, 75): • Since XA < XP and YA < YP, go to NE (i.e., B). • Since XB > XP and YB > YP, go to SW, which in this case is null, and place P there.
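The insertion rule can be sketched with dictionary-based nodes (a toy sketch; the node layout and the tie-breaking toward North/East are my assumptions, not from the slides):

```python
def quadrant(center, p):
    """Quadrant of p relative to center; ties go North/East (my convention)."""
    return ("N" if p[1] >= center[1] else "S") + ("E" if p[0] >= center[0] else "W")

def insert(tree, p):
    """tree is None or {"pt": (x, y), "NE": ..., "NW": ..., "SE": ..., "SW": ...}."""
    if tree is None:
        return {"pt": p, "NE": None, "NW": None, "SE": None, "SW": None}
    q = quadrant(tree["pt"], p)
    tree[q] = insert(tree[q], p)   # descend into the matching quadrant
    return tree

# the slide's example, ending with the insertion of P(55, 75)
root = None
for pt in [(50, 50), (75, 80), (90, 65), (35, 85), (25, 25), (55, 75)]:
    root = insert(root, pt)
```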

  27. Spatial Index Trees • We will talk about data normalized in the range [0, 1] for all the dimensions. • Minimum Bounding Region (MBR) refers to the smallest region (rectangle, circle) that encloses the entire shape of the objects or all the data points.

  28. R-tree • R-trees are multidimensional generalizations of B-trees. • The nodes correspond to disk pages. • All leaf nodes appear at the same level. • Root and intermediate nodes correspond to the smallest rectangle that encloses their child nodes, i.e., they contain [r, <page pointer>] pairs. • Leaf nodes contain pointers to the actual objects, i.e., they contain [r, <RID>] pairs. • A rectangle may be spatially contained in several nodes, yet it can be associated with only one node.

  29. R-Trees • Hierarchy of nested d-dimensional intervals (boxes). • Each node v corresponds to a disk page and a d-dimensional interval. • Store the MBB or MBR of each n-dimensional object. • Permits overlap of index entries. • Index used as a filter mechanism for queries. • Every node contains between m and M entries unless it is the root. • The root node has at least 2 entries unless it is a leaf. • Height-balanced. Which of the above properties are similar to B-trees?

  30. R-tree: Insertion • A new object is added to the appropriate leaf node. • If insertion causes the leaf node to overflow, the node must be split, and the records distributed between the two leaf nodes. Split heuristics: • Minimizing the total area of the covering rectangles • Minimizing the area common to the covering rectangles • Splits are propagated up the tree (similar to B-trees).
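Finding the "appropriate leaf" is usually driven by area enlargement; a minimal sketch with 2-D MBRs stored as ((x1, y1), (x2, y2)) (function names are mine, not Guttman's original pseudocode):

```python
def area(mbr):
    (x1, y1), (x2, y2) = mbr
    return (x2 - x1) * (y2 - y1)

def enlarge(mbr, p):
    """Smallest MBR covering both the original MBR and point p."""
    (x1, y1), (x2, y2) = mbr
    return ((min(x1, p[0]), min(y1, p[1])), (max(x2, p[0]), max(y2, p[1])))

def choose_subtree(mbrs, p):
    """Index of the child whose MBR needs the least area enlargement."""
    return min(range(len(mbrs)),
               key=lambda i: area(enlarge(mbrs[i], p)) - area(mbrs[i]))

children = [((0, 0), (2, 2)), ((5, 5), (8, 8))]
```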

  31. R-tree: Delete • If a deletion causes a node to underflow, its entries are reinserted (instead of being merged with adjacent nodes as in a B-tree). • There is no concept of adjacency in an R-tree.

  32. D-tree: Domain Decomposition If the number of objects inside a domain exceeds a certain threshold, the domain is split into two subdomains. • Example 1: Horizontal split of a domain containing objects A–G; the split line may cut through border objects (e.g., A). • Example 2: Vertical split along the longest dimension. [Figure: the original domain and the resulting subdomains for each example.]

  33. D-tree: Split Examples Initial tree: a single domain node D with a null data pointer. After 3 insertions: D points to a data node holding the objects. After the 1st split: D is divided into subdomains D1 and D2. After the 2nd split: D1 is divided into D11 and D12. [Figure: the embedding space and the corresponding D-tree after each step.]

  34. D-tree: Split Example (continued) After the 3rd split: D12 is divided into D121 and D122. After the 4th split: D2 is divided into D21 and D22, and the tree grows a level: internal nodes for D1 and D2, with external nodes (e.g., D22.P) below them. [Figure: the embedding space and the D-tree after the 3rd and 4th splits.]

  35. D-tree: Range Queries Note: A range query can be represented as a hypercube embedded in the search space. Search strategy: • Retrieve the set, say S, of all subdomains which overlap with the query cube. • For each subdomain in S which is not fully contained in the query cube, discard the objects falling outside the query cube. Algorithm: Search(D_tree_root, search_cube)
Current_node = D_tree_root
For each entry in Current_node, say (D, P), if D overlaps with search_cube, do the following:
• If Current_node is an external node, retrieve the objects in D.P which fall within the overlap region.
• If Current_node is an internal node, call Search(D.P, search_cube).
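A runnable sketch of this search strategy, assuming a toy node layout (a dict with an "external" flag and (domain, payload) entries; this is my stand-in, not the actual D-tree page format):

```python
def overlaps(a, b):
    """Axis-aligned 2-D rectangles as ((x1, y1), (x2, y2))."""
    return (a[0][0] <= b[1][0] and b[0][0] <= a[1][0] and
            a[0][1] <= b[1][1] and b[0][1] <= a[1][1])

def inside(p, cube):
    return all(cube[0][i] <= p[i] <= cube[1][i] for i in range(len(p)))

def search(node, cube, out):
    for domain, payload in node["entries"]:
        if overlaps(domain, cube):
            if node["external"]:
                # external node: filter the bucket against the query cube
                out += [p for p in payload if inside(p, cube)]
            else:
                search(payload, cube, out)  # internal node: recurse
    return out

leaf1 = {"external": True, "entries": [(((0, 0), (5, 5)), [(1, 1), (4, 4)])]}
leaf2 = {"external": True, "entries": [(((5, 0), (10, 5)), [(6, 1)])]}
root = {"external": False,
        "entries": [(((0, 0), (5, 5)), leaf1), (((5, 0), (10, 5)), leaf2)]}
hits = search(root, ((0, 0), (4, 4)), [])
```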

  36. D-tree: Desirable Properties • D-trees are balanced. • The search path for an object is unique ⇒ no redundant searches. • More splits occur in the denser regions of the search space ⇒ objects are evenly distributed among the data nodes. • Similar objects are physically clustered in the same, or neighboring, data nodes. • Good performance is ensured regardless of the insertion order of the data.

  37. Content-Based Image Indexing • Keyword Approach • Problem: there is no commonly agreed-upon vocabulary for describing image properties. • Computer Vision Techniques • Problem: General image understanding and object recognition is beyond the capability of current computer vision technology. • Image Analysis Techniques • It is relatively easy to capture the primitive image properties such as • prominent regions, • their colors and shapes, • and related layout and location information within images. • These features can be used to index image data.

  38. Possible Features • Edge • Region • Color • Shape • Location • Size • Texture

  39. EDGE • Types of Edges – Step, Ramp, Spike and Roof.

  40. 3 stages in edge detection • Filtering: the image is passed through a filter in order to remove noise (e.g., mean filter, Gaussian filter). • Differentiation: highlights the locations where intensity changes are significant. • Detection: localizes the edge points, e.g., where the gradient magnitude exceeds a threshold.

  41. Classes of edge detection schemes • Prewitt, Roberts, Sobel, and Laplacian – 3x3 and 5x5 gradient operators • Hueckel, Hartley, and Haralick – surface fitting • Canny – the derivatives of Gaussian
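As an illustration of the 3x3 gradient-operator class (Sobel here), a dependency-free sketch that computes the gradient magnitude at interior pixels:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges

def conv3(img, k, y, x):
    """Apply a 3x3 kernel k centered at pixel (y, x)."""
    return sum(img[y + i - 1][x + j - 1] * k[i][j]
               for i in range(3) for j in range(3))

def sobel_magnitude(img):
    """Gradient magnitude at interior pixels (borders skipped for simplicity)."""
    h, w = len(img), len(img[0])
    return [[(conv3(img, SOBEL_X, y, x) ** 2 +
              conv3(img, SOBEL_Y, y, x) ** 2) ** 0.5
             for x in range(1, w - 1)] for y in range(1, h - 1)]

step = [[0, 0, 10, 10] for _ in range(3)]   # a vertical step edge
```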

  42. Canny Edge Detector • The results of choosing the standard deviation sigma of the edge detector as 3. [Figure: lena.gif processed into horizontal edges, vertical edges, the norm of the gradient, the result after thresholding, and the result after thinning.]

  43. Features Acquisition: Region Segmentation • Group adjacent pixels with similar color properties into one region, and • segment the pixels with distinct color properties into different regions.

  44. Definition of Segmentation • All pixels must have the same .. • All pixels must not differ by more than .. • All pixels must not differ by more than T from the mean .. • The standard deviation must be small ..

  45. Simple Segmentation • B(x, y) = 1 if T1 < f(x, y) < T2, and 0 otherwise • Thresholds and Histogram • Connected Component Algorithms • Recursive Algorithm • Sequential Algorithm
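The thresholding rule plus a connected-component pass can be sketched as follows (function names are mine; 4-connectivity and an iterative flood fill are assumptions, standing in for the recursive/sequential variants named above):

```python
def threshold(img, t1, t2):
    """B(x, y) = 1 if t1 < f(x, y) < t2, else 0."""
    return [[1 if t1 < v < t2 else 0 for v in row] for row in img]

def label_components(b):
    """Label 4-connected components of a binary image via flood fill."""
    h, w = len(b), len(b[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if b[y][x] == 1 and labels[y][x] == 0:
                count += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w and
                            b[cy][cx] == 1 and labels[cy][cx] == 0):
                        labels[cy][cx] = count
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count, labels

img = [[10, 10, 90], [90, 10, 90]]
binary = threshold(img, 50, 100)
n, labels = label_components(binary)
```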

  46. Seed Segmentation • Compute the histogram • Smooth the histogram by averaging to remove small peaks • Identify candidate peaks and valleys • Detect good peaks by a peakiness test • Segment the image using thresholds • Apply a connected component algorithm

  47. Region Growing • Split and Merge Algorithm • Phagocyte Algorithm • Likelihood Ratio Test

  48. Region Segmentation • EDISON • JSEG

  49. Color • We can divide the color space into a small number of zones, each of which is clearly distinct from the others to human eyes. • Each of the zones is assigned a sequence number beginning from zero. Note: human eyes are not very sensitive to colors; in fact, users only have a vague idea about the colors they want to specify.

  50. Shape • Shape can be measured by properties such as circularity, major axis orientation, and moments. • Circularity: a compactness measure; the more circular the shape, the closer the circularity is to one. • Major axis orientation. • Moments: the first- and second-order moments. [Figure: example shapes — a circle of radius r and rectangles with sides a and 2a.]
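The exact circularity formula on this slide is not recoverable from the transcript; a common compactness measure with the stated property (closer to 1 for more circular shapes) is 4πA/P², sketched here as an assumption:

```python
from math import pi

def circularity(area, perimeter):
    # assumed compactness measure: equals 1 for a perfect circle, < 1 otherwise
    return 4 * pi * area / perimeter ** 2

r, a = 1.0, 1.0
circle = circularity(pi * r ** 2, 2 * pi * r)   # exactly 1 for a circle
square = circularity(a * a, 4 * a)              # pi / 4, less circular
```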
