This project focuses on aligning events extracted from textual football match reports with events recognized in video coverage of the same match. It also aims to extend the event alignment work towards cross-media feature extraction, aligning low-level image/video features with events from aligned textual and semi-structured data.
WP5.4 - Introduction
• Knowledge Extraction from Complementary Sources
• This activity is concerned with augmenting the semantic multimedia metadata basis by analysis of complementary textual, speech and semi-structured data
• Focus in first 12 months
  • Joint work between DFKI, UEP and DCU on aligning event extraction from textual football match reports with event recognition in video coverage of the same match
• Focus in following 12 months
  • Joint work between DFKI, UEP and DCU on extending the event alignment work towards cross-media feature extraction (aligning low-level image/video features with events extracted from aligned textual and semi-structured data)
  • Joint work between DFKI, UEP, TUB and GET (cross-WP cooperation with WP3.3) on analyzing textual metadata in primary sources (OCR applied to text detected in images)
Text-Video Mapping in the Football Domain
Alignment of events extracted from unstructured textual data, and of events provided by the semi-structured tabular data in the SmartWeb corpus (DFKI), with events detected in the video analysis results (DCU).
• Cooperation: DFKI, UEP, DCU
• Resources:
  • DFKI: SmartWeb Data Set (textual and tabular match reports)
  • DFKI/UEP: Additional minute-by-minute textual match reports ('tickers') from other web resources
  • DCU: Video detectors (crowd image detector, speech-band audio activity, on-screen graphics tracking, motion activity measure, field line orientation, close-up)
• Textual and semi-structured data (tabular, XML files) are exploited as background knowledge for filtering the video analysis results and may help to further improve the corresponding video analysis algorithms (see the sketch below)
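A minimal sketch of this filtering idea, assuming hypothetical per-minute detector outputs; the actual DCU detectors and DFKI extraction pipeline are not reproduced here. Video event candidates are kept only when they fall close to a minute at which the ticker reports an event:

```python
from dataclasses import dataclass

@dataclass
class VideoEventCandidate:
    minute: int          # match minute of the detected highlight
    confidence: float    # combined detector confidence in [0, 1]

def filter_candidates(candidates, ticker_minutes, tolerance=1):
    """Keep only video event candidates that fall within `tolerance`
    minutes of an event reported in the textual ticker."""
    return [
        c for c in candidates
        if any(abs(c.minute - m) <= tolerance for m in ticker_minutes)
    ]

# Toy data: three candidates from video analysis, two ticker-reported events.
candidates = [VideoEventCandidate(12, 0.8),
              VideoEventCandidate(40, 0.6),
              VideoEventCandidate(73, 0.4)]
ticker_minutes = {40, 74}

print(filter_candidates(candidates, ticker_minutes))
# Candidates at minutes 40 and 73 survive; minute 12 is discarded.
```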
Resources
• The SmartWeb Data Set, as provided by DFKI, is an experimental data set for ontology-based information extraction and ontology learning from text that was compiled for the SmartWeb project.
• The data set consists of:
  • An ontology on football (soccer) that is integrated with foundational (DOLCE), general (SUMO) and task-specific (discourse, navigation) ontologies.
  • A corpus of semi-structured and textual match reports (German and English documents) derived from freely available web sources. The bilingual documents are not translations, but are aligned at the level of a particular match (i.e. they are about the same match).
  • A knowledge base of events and entities in the World Cup domain that have been automatically extracted from the German documents.
• For the purposes of the experiment described here, we were mostly interested in the events described by the semi-structured data.
DCU: Video Analysis Data
• Framework for event detection in broadcast video of multiple different field sports, as provided by DCU
• Video detectors used by DCU:
  • Crowd image detector
  • Speech-band audio activity
  • On-screen graphics tracking
  • Motion activity measure
  • Field line orientation
  • Close-up
[Figure: detector confidence signals over time (crowd confidence, audio, visual_motion)]
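The confidence curves in the figure suggest a natural per-window representation. The sketch below uses invented detector values (the real streams come from the DCU framework) and collapses each detector's confidence stream into mean and peak features for one time window:

```python
import statistics

# Hypothetical per-second confidence streams for three of the DCU detectors.
detector_streams = {
    "crowd":         [0.2, 0.7, 0.9, 0.8],   # crowd image detector
    "audio":         [0.4, 0.6, 0.9, 0.7],   # speech-band audio activity
    "visual_motion": [0.1, 0.5, 0.8, 0.6],   # motion activity measure
}

def summarize_window(streams):
    """Collapse each detector's confidence stream over one time window
    into mean and peak values, yielding a fixed-length feature vector."""
    features = {}
    for name, values in streams.items():
        features[f"{name}_mean"] = statistics.mean(values)
        features[f"{name}_max"] = max(values)
    return features

print(summarize_window(detector_streams))
```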
DFKI/UEP: Extraction of Tickers
• Minute-by-minute reports from different web resources:
  • Ligalive.de
  • Ard.de
  • bild.de
Information Extraction from Text
• Information extraction with the DFKI tool "SProUT"
• SProUT (Shallow Processing with Unification and Typed Feature Structures): a tool for multilingual shallow text processing and information extraction
• A SProUT Java web service takes the minute-by-minute reports as input, parses them and produces a new XML file for each minute of a particular match
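For illustration only, the following sketch mimics the extraction step with crude regular expressions standing in for SProUT's grammars and typed feature structures, and emits one XML element per ticker minute:

```python
import re
import xml.etree.ElementTree as ET

# Crude stand-in for SProUT's extraction grammars: one pattern per event type.
PATTERNS = {
    "goal":        re.compile(r"\b(goal|tor)\b", re.IGNORECASE),
    "yellow_card": re.compile(r"\byellow card\b", re.IGNORECASE),
    "free_kick":   re.compile(r"\bfree[- ]?kick\b", re.IGNORECASE),
}

def extract_minute(minute, text):
    """Return an XML element holding all events found in one ticker entry."""
    root = ET.Element("minute", number=str(minute))
    for event_type, pattern in PATTERNS.items():
        if pattern.search(text):
            ET.SubElement(root, "event", type=event_type)
    return root

entry = extract_minute(40, "Free-kick for Italy, and what a goal!")
print(ET.tostring(entry, encoding="unicode"))
# <minute number="40"><event type="goal" /><event type="free_kick" /></minute>
```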
Alignment and Aggregation of Textual Events
• Alignment of events from the various tickers
• Alignment of the information extraction results (SProUT)
• Data aggregation for later use (example: minute 40)
[Diagram: tabular reports and minute-by-minute reports are aligned in time with the video (video-textual data time alignment), then combined with video event detection data (features) from DCU for cross-media feature extraction]
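One simple way to align events from several tickers, sketched below with invented data, is a per-minute majority vote over the event types that the individual tickers report:

```python
from collections import Counter

def align_tickers(tickers, minute):
    """Aggregate the event types that several tickers report for the same
    minute; an event is kept when a majority of tickers agrees on it."""
    counts = Counter()
    for ticker in tickers:
        counts.update(set(ticker.get(minute, [])))
    quorum = len(tickers) // 2 + 1
    return sorted(e for e, n in counts.items() if n >= quorum)

# Toy minute-40 reports from three tickers (hypothetical data).
ligalive = {40: ["free_kick", "goal"]}
ard      = {40: ["goal"]}
bild     = {40: ["goal", "substitution"]}

print(align_tickers([ligalive, ard, bild], 40))   # ['goal']
```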
Match vs Video Time
• Free-kick evaluation
• Possible OCR on video
• Tracking of time differences
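Tracking the difference between match time and video time can be sketched as follows; the clock readings are hypothetical values that would in practice come from OCR on the on-screen graphics, and the median keeps occasional OCR errors from skewing the offset:

```python
import statistics

def estimate_offset(clock_readings):
    """Estimate the offset (in seconds) between the match clock and the
    video timeline from OCR readings of the on-screen clock. Each reading
    pairs a video timestamp with the match time shown on screen."""
    offsets = [video_s - match_s for video_s, match_s in clock_readings]
    return statistics.median(offsets)

# (video seconds, OCR'd match-clock seconds); the last pair is an OCR outlier.
readings = [(905, 600), (1210, 905), (1505, 1200), (9999, 1230)]
offset = estimate_offset(readings)

def match_to_video(match_minute):
    """Map a match minute to the corresponding video timestamp (seconds)."""
    return match_minute * 60 + offset

print(match_to_video(40))  # video time at which match minute 40 starts
```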
Cross-media Features
• Purpose: cross-media features describe information that occurs in textual/semi-structured data as well as in video data, and can therefore be used as additional support in video analysis.
• Goal: use video detectors aligned with events extracted from text/semi-structured data as cross-media features
• Example: [figure]
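A minimal sketch of how such cross-media features can be assembled, assuming hypothetical per-minute detector feature vectors: each minute's video features are labeled with the event type that the aligned textual/semi-structured data assigns to that minute, yielding (features, label) training pairs.

```python
def build_training_pairs(detector_features, textual_events):
    """Pair each minute's video feature vector with the event type that the
    aligned textual/semi-structured data assigns to that minute. The result
    can feed any standard classifier as (features, label) examples."""
    pairs = []
    for minute, label in textual_events.items():
        if minute in detector_features:
            pairs.append((detector_features[minute], label))
    return pairs

# Hypothetical per-minute feature vectors [crowd, audio, motion] and labels.
features = {39: [0.2, 0.3, 0.1], 40: [0.9, 0.8, 0.7], 41: [0.4, 0.5, 0.3]}
events   = {40: "goal", 41: "free_kick"}

for x, y in build_training_pairs(features, events):
    print(y, x)
```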
Summary
• Extracted: 1200 events, 45 event types
• After alignment: 850 events describing five matches from the 2006 World Cup finals
• 170 events per game on average
• Cross-media descriptors for every event type
Future Plans
• WP5.4.1: continue work on mapping between the results of video analysis and complementary resource analysis:
  • Use image descriptors extracted from training data (video + aligned text extraction) for the classification of fine-grained events in test data (i.e. other videos), all based on the minute-by-minute alignment (see the sketch below)
  • Cooperate with TUB on video OCR to improve the timing of the video-text alignment
• WP5.4.2: Images and text as mutually complementary resources
• WP5.4.3: Image retrieval based on enhanced query processing and complementary resource analysis
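As a sketch of the planned classification step, a simple nearest-centroid model over invented feature vectors (not the actual classifier to be used) could consume the aligned training pairs like this:

```python
import math
from collections import defaultdict

def train_centroids(pairs):
    """Nearest-centroid stand-in for the planned classifier: average the
    feature vectors of every event type seen in the aligned training data."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for x, y in pairs:
        sums[y] = x if sums[y] is None else [a + b for a, b in zip(sums[y], x)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(centroids, x):
    """Assign an unseen minute's feature vector to the closest centroid."""
    return min(centroids, key=lambda y: math.dist(centroids[y], x))

pairs = [([0.9, 0.8, 0.7], "goal"), ([0.4, 0.5, 0.3], "free_kick"),
         ([0.8, 0.9, 0.6], "goal")]
centroids = train_centroids(pairs)
print(classify(centroids, [0.85, 0.8, 0.65]))   # -> 'goal'
```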
Mining over Football Match Data: Seeking Associations among Explicit and Implicit Events
• Apart from identifying individual events, it may be useful to discover general statistical dependencies (associations) among types of events
• Initial experiments were carried out on a single type of resource: structured data
• In the future, events extracted from text and video could be considered as well
• Use of the LISp-Miner tool (UEP)
  • The data mining procedure 4ft-Miner mines for various types of association rules and conditional association rules
• Potential application: discovering new relationships to be inserted into the domain ontology or knowledge base (see the sketch below)
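The following sketch illustrates the general idea with plain support/confidence association rules over invented match data; it does not reproduce the LISp-Miner/4ft-Miner procedures, which handle far richer rule types:

```python
from itertools import permutations

def mine_rules(matches, min_support=0.4, min_confidence=0.7):
    """Mine simple one-to-one association rules 'A => B' over the event
    types occurring in each match, reporting support and confidence."""
    n = len(matches)
    rules = []
    event_types = set().union(*matches)
    for a, b in permutations(event_types, 2):
        with_a = sum(1 for m in matches if a in m)
        with_both = sum(1 for m in matches if a in m and b in m)
        support = with_both / n
        confidence = with_both / with_a if with_a else 0.0
        if support >= min_support and confidence >= min_confidence:
            rules.append((a, b, support, confidence))
    return rules

# Event-type sets for five matches (illustrative data only).
matches = [{"goal", "free_kick", "yellow_card"},
           {"goal", "free_kick"},
           {"free_kick", "yellow_card"},
           {"goal", "free_kick", "penalty"},
           {"free_kick"}]

for a, b, s, c in mine_rules(matches):
    print(f"{a} => {b}  support={s:.2f} confidence={c:.2f}")
```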