Video Search: What’s New. Gloria Rohmann, NYU Libraries. October 14, 2005
The problem: I know it’s in there somewhere… • “Gist” (what it’s about) • Genre • Style • Scenes • People • Objects • Dialogue • Soundtrack
Video Search: How does it work? • “Conventional” methods: catalogs, databases and analog previewing • Why digitize? • Discovering video structure • Automatic and manual indexing • Data models & user interfaces • Prospects for the future: mobile and web services
Conventional Methods: Browse and Search • Structured databases: • AV cataloging (AACR2, MARC 21) • Shot lists • Asset management systems • “Pathfinders” (librarians, archivists) • Embedded markers: hints, chapters, scenes (DVD) • Video logging systems • Hardware browse/skim: FF, slow-mo, etc.
Sample screen from BobCat: video record • 505: Contents • 520: Summary • 650: Subject headings
Enhanced metadata (shot lists, transcripts): Open University video collection
Footage - Opening credits. Chocolate factory workers. Alan Coxon and Kathy Sykes preparing food. Man biting into chocolate bar (0'00"-0'50"). Alan opening fridge and walking over to Kathy at table. Kathy grating orange. Alan showing ingredients for cheesecake. Cooking chocolate. Alan and Kathy breaking chocolate and smelling it. Breaking chocolate. Kathy tasting chocolate (0'51"-2'00").
Browse and skim: DVD’s Digital Advantages • Pause, FF, rewind • Navigate frame-by-frame or by menus, chapters, or tracks • Insert markers, repeat play • Change audio and subtitle languages, show closed captioning • Shuttle/scrub onscreen
Media Player Example: DVD player clones; can be enhanced with SDKs • File markers, added by the end user • Play speed settings: 0.5x to 3x • Start, stop, pause, rewind to beginning, FF to end, advance by frame
What Is Video? Authored video has: • A series of still images @ 25-30 fps • Structure: frames >> shots >> scenes • Modalities: audio tracks; text (captioning, subtitles, etc.); graphics (logos, running tickers, etc.) • Production metadata: timestamp, datestamp, flash on/off
Advantages of Digital Video • Store and deliver over networks • Allow analysis by computers • Allow automatic & manual indexing, using: • Image processing • Signal processing • Information visualization
Why Compress Video? • 1 frame (@ TV brightness) = 0.9 megabytes (MB) of storage • At 29 fps, each second = 26.1 MB of storage • 30-minute film ≈ 47 gigabytes (GB) of storage • Objective: make the file smaller while retaining as much information as possible
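The arithmetic behind these figures is worth making explicit; a quick sketch in Python, using the slide’s own per-frame and frame-rate numbers (variable names are mine):

# Back-of-the-envelope storage math for uncompressed video.
MB_PER_FRAME = 0.9                      # one frame at TV brightness
FPS = 29                                # frames per second (NTSC is ~29.97)

mb_per_second = MB_PER_FRAME * FPS      # 26.1 MB per second
gb_per_30_min = mb_per_second * 30 * 60 / 1000   # ~47 GB per 30-minute film

print(f"{mb_per_second:.1f} MB/s, {gb_per_30_min:.0f} GB per 30 minutes")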
Encoding Formats These formats all use some kind of compression, with similar encoding methods and many CODECs, some “lossy,” others “lossless”: • AVI: Audio Video Interleave • QuickTime • MPEG family: MPEG-1, 2, 4 • H.261: for video conferencing • Newer: H.264; JPEG 2000
CODECs • Compressor/Decompressor, or Coder/Decoder • Produce and work with encoding formats • Central to compression and encoding; perform signal- and image-processing tasks • Examples: Cinepak, Indeo, Windows Media Video • DivX, Xvid, and 3ivx: implementations of parts of the MPEG-4 compression standard
How Do CODECs Work? • Movement creates “temporal aliasing”: the human eye/brain fills in the gaps • Blurring produced by the camera shutter softens edges • CODECs model these effects algorithmically • Goal: an acceptable facsimile of the moving scene
Configuring CODECs for analysis • Psychovisual enhancements • Maximum keyframe interval
What looks best to you? [Figure: an original image compared with two segmentation methods, A and B] Jermyn, I. Psychovisual Evaluation of Image Database Retrieval and Image Segmentation
Encoding Methods: predictive • Sampling: measuring the value of a function at regular intervals (example: the brightness of pixels; 1 in 10 vs. 1 in 100 frames) • Quantization: mapping each sampled value to one of a limited set of discrete levels • Discrete cosine transform (DCT): an array of data (not just one pixel) is transformed into another set of values (sketched below) • Inter-frame vs. intra-frame encoding
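To make the DCT step concrete, a minimal sketch with SciPy (the 8x8 block size matches JPEG/MPEG practice; the pixel data are invented):

import numpy as np
from scipy.fft import dctn, idctn

# An 8x8 block of pixel brightness values (invented data).
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

# Transform the whole array at once: most of the signal energy
# lands in a few low-frequency coefficients.
coeffs = dctn(block, norm="ortho")

# A crude "lossy" step: drop small coefficients, then invert.
coeffs[abs(coeffs) < 10] = 0
approx = idctn(coeffs, norm="ortho")

print("max reconstruction error:", abs(block - approx).max())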
Video Structure: video >> scenes >> shots >> frames (a minimal data-model sketch follows)
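That hierarchy maps directly onto a nested data model; a minimal sketch (the class and field names are mine, not from the presentation):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Shot:
    start_frame: int      # first frame of the shot
    end_frame: int        # last frame of the shot
    keyframe: int = 0     # representative frame for browsing

@dataclass
class Scene:
    shots: List[Shot] = field(default_factory=list)

@dataclass
class Video:
    fps: float = 29.97
    scenes: List[Scene] = field(default_factory=list)

    def shot_at(self, seconds: float) -> Shot:
        # Map a time offset to the shot that contains it.
        frame = int(seconds * self.fps)
        for scene in self.scenes:
            for shot in scene.shots:
                if shot.start_frame <= frame <= shot.end_frame:
                    return shot
        raise ValueError("no shot at that offset")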
Shot Boundary Detection Algorithms compare the similarity of nearby frames; when similarity falls below a pre-determined threshold, a shot boundary is automatically marked: • Edge detection • Color histogram comparison (sketched below) • Motion vector comparison
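A minimal sketch of the histogram method, using OpenCV (the similarity threshold is invented and would need tuning for real footage):

import cv2

def shot_boundaries(path: str, threshold: float = 0.6):
    # Flag a boundary wherever adjacent frames' color histograms
    # stop correlating.
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:    # similarity fell below the limit
                boundaries.append(index)
        prev_hist, index = hist, index + 1
    cap.release()
    return boundaries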
Spatial & Temporal Segmentation 1. Use shot boundary detection and keyframes to define shots & choose representative frames 2. Use CBIR (Content-based Image Retrieval) techniques to reveal features in representative frames (shapes, colors, textures)
CBIR Techniques Images (frames) have no inherent semantic meaning: they are only arrays of pixel intensities • Color retrieval: compare histograms (sketched below) • Texture retrieval: relative brightness of pixel pairs • Shape retrieval: humans recognize objects primarily by their shape • Retrieval by position within the image
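A minimal color-retrieval sketch in plain NumPy, ranking candidate frames by histogram intersection with a query image (the function names and bin counts are mine):

import numpy as np

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    # Quantize each RGB channel into `bins` levels and count pixels;
    # the array itself carries no semantics, only intensities.
    q = (image // (256 // bins)).reshape(-1, 3).astype(int)
    index = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    hist = np.bincount(index, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def rank_by_color(query: np.ndarray, frames: list) -> list:
    # Histogram intersection: higher means more similar colors.
    qh = color_histogram(query)
    scores = [np.minimum(qh, color_histogram(f)).sum() for f in frames]
    return sorted(range(len(frames)), key=lambda i: -scores[i])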
MPEG-4: Content-based encoding [Figure: two video object planes (VOPs) segmented from a background that is encoded only once] Ghanbari, M. (1999) Video Coding: An Introduction to Standard Codecs
AMOS: Tracking Objects Beyond the Frame http://www.ctr.columbia.edu/~dzhong/rtrack/demo.htm
“Are We Doing Multimedia?”* Multimodal Indexing Ramesh Jain: “To solve multimedia problems, we should use as much context as we can.” • Visual (frames, shots, scenes) • Audio (soundtrack: speech recognition) • Text (closed captions, subtitles) • Context (hyperlinks, etc.) *IEEE Multimedia, Oct.-Nov. 2003. http://jain.faculty.gatech.edu/media_vision/doing_mm.pdf
Multimodal Indexing [Figure: modalities (video, audio, text) mapped to settings, objects, and people] Snoek, C., Worring, M. Multimodal Indexing: A Review of the State-of-the-art. Multimedia Tools & Applications, January 2005
Building Video Indexes Same as any indexing process…decide: • What to index: granularity • How to index: modalities (images, audio, etc.) • Which features? • Discover spatial and temporal structure: deconstructing the authoring process • Construct data models for access
Building Video Indexes: Structured modeling Predict relationships between shots using: • Pattern recognition • Hidden Markov Models (a minimal sketch follows) • SVMs (support vector machines) • Neural networks • Relevance feedback via machine learning
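As one example of structured modeling, a Hidden Markov Model can treat latent “scene types” as states and learn how shots follow one another. A minimal sketch with the hmmlearn library (my choice of library, not named on the slide; the per-shot features are invented):

import numpy as np
from hmmlearn import hmm

# Invented data: one feature vector per shot (e.g. motion energy,
# mean brightness), concatenated across three videos.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 2))
lengths = [50, 50, 100]              # shots per video

# Fit a 3-state Gaussian HMM; transitions model shot-to-shot patterns.
model = hmm.GaussianHMM(n_components=3, n_iter=20, random_state=0)
model.fit(features, lengths)

# Decode the most likely state sequence for one video's shots.
print(model.predict(features[:50])[:10])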
Data Models for Video IR • Based on text (DBMS, MARC) • Semi-structured (video + XML or hypertext): MPEG-7, SMIL • Based on context: Yahoo Video, Blinkx, Truveo • Multimodal: Marvel, Virage
Virage VideoLogger™ • Mark & annotate clips • SMPTE timecode • Keyframes • Text or audio extracted automatically
MPEG-7 Output from IBM Annotation Tool

<MediaTime>
  <MediaTimePoint>T00:00:27:20830F30000</MediaTimePoint>
  <!-- duration of the shot, in frames -->
  <MediaIncrDuration mediaTimeUnit="PT1001N30000F">248</MediaIncrDuration>
</MediaTime>
<TemporalDecomposition>
  <VideoSegment>
    <MediaTime>
      <MediaTimePoint>T00:00:31:23953F30000</MediaTimePoint>
    </MediaTime>
    <SpatioTemporalDecomposition>
      <StillRegion>
        <TextAnnotation>
          <!-- the annotation itself -->
          <FreeTextAnnotation>Indoors</FreeTextAnnotation>
        </TextAnnotation>
        <SpatialLocator>
          <!-- location and dimensions of the spatial locator, in pixels -->
          <Box mpeg7:dim="2 2">14 15 351 238</Box>
        </SpatialLocator>
      </StillRegion>
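Because MPEG-7 descriptions are plain XML, the annotations can be pulled out with any XML parser. A minimal sketch with Python’s standard library, assuming a complete, well-formed description document (the namespace-stripping trick is mine):

import xml.etree.ElementTree as ET

def still_region_annotations(mpeg7_xml: str):
    # Yield (free-text annotation, pixel box) pairs from StillRegion
    # elements, ignoring namespace prefixes for brevity.
    root = ET.fromstring(mpeg7_xml)
    for region in root.iter():
        if region.tag.split("}")[-1] != "StillRegion":
            continue
        text = box = None
        for el in region.iter():
            tag = el.tag.split("}")[-1]
            if tag == "FreeTextAnnotation":
                text = el.text            # e.g. "Indoors"
            elif tag == "Box":
                box = el.text             # e.g. "14 15 351 238" (pixels)
        yield text, box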
SMIL: Hypertext & Hypermedia

<window type="generic" duration="1:30:00" height="480" width="320"
        underline_hyperlinks="true" />
<font face="arial" size="2">
<ol>
  <li><a href="command:seek(0:0)" target="_player">Intro</a></li>
  <br/>
  <li>
    <a href="command:seek(2:10)" target="_player">Q1 to Kerry</a>,
    <a href="command:seek(4:26)" target="_player">Bush rebuttal</a>
  </li>
Scholarly “Primitives”* Low-level methods for higher-level research • Discovering • Annotating • Comparing • Referring • Sampling • Illustrating • Representing *Unsworth, John. (2000) Scholarly Primitives: what methods do humanities researchers have in common, and how might our tools reflect this?
User Interfaces for Video IR Supporting the scholarly primitives (discovering, annotating, comparing, referring, sampling, illustrating, representing): • Browse, query text • Browse surrogates • Interactive filtering: dynamic query based on visual aspects • Interactive zooming • Interactive distortion • Compare results for feedback • Annotate results
IBM Research MARVEL MPEG-7 video search engine • Manual annotations are used for machine learning • Automatic multimodal indexing • Image processing • Automatic speech recognition • Structured modeling: clustering by comparing features http://www.research.ibm.com/marvel
Video Search on the Web: Yahoo • Uses existing (text) metadata • Does not analyze the content of the media stream • Horowitz: “Web pages are self-describing” • Analyzes the web page around the link • Analyzes the metadata included in the video file • Media RSS: publishers can add links to multimedia within a feed
Video Search on the Web: Google • Uses metadata in the video stream • Almost all broadcast news video is closed captioned • Google ingests video with closed captioning • Transcripts are created and linked to time-code • Transcripts are indexed (a sketch follows) • “Thumbnails” grabbed at time intervals • Still text-based; “thumbnails” provide a visual surrogate
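Linking transcript words to time-code amounts to building an inverted index whose postings are timestamps; a minimal sketch (the caption data are invented):

from collections import defaultdict

def index_transcript(segments):
    # segments: (start_seconds, text) pairs from closed captioning.
    # Returns word -> sorted list of start times.
    index = defaultdict(set)
    for start, text in segments:
        for word in text.lower().split():
            index[word.strip(".,?!")].add(start)
    return {word: sorted(times) for word, times in index.items()}

captions = [(0.0, "Good evening."), (4.2, "Our top story tonight...")]
print(index_transcript(captions).get("story"))   # [4.2] -> seek player here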
Opportunities for Research • User needs • User interfaces • Classification and description • Metadata: whither standards?