PASCAL CHALLENGE ON INFORMATION EXTRACTION & MACHINE LEARNING
Neil Ireson, Local Challenge Coordinator
Web Intelligent Group, Department of Computer Science, University of Sheffield
Organisers • Sheffield – Fabio Ciravegna • UCD Dublin – Nicholas Kushmerick • ITC-IRST – Alberto Lavelli • University of Illinois – Mary-Elaine Califf • FairIsaac – Dayne Freitag • Website: http://tyne.shef.ac.uk/Pascal
Outline • Challenge Goals • Data • Tasks • Participants • Results on Each Task • Conclusion
Goal: Provide a testbed for comparative evaluation of ML-based IE • Standardised data • Partitioning • Same set of features • Corpus preprocessed using GATE • No features allowed other than the ones provided • Explicit tasks • Standard evaluation • Provided independently by a server • For future use • Available for further tests with the same or new systems • Possible to publish new corpora or tasks
Data (Workshop CFP)
• Training data: 400 workshop CFPs (1993–2000), partitioned into four sets (Set0–Set3) for cross-validation
• Testing data: 200 workshop CFPs (2000–2005)
• Enrichment data 1: 250 unannotated workshop CFPs
• Enrichment data 2: 250 unannotated conference CFPs
• Additional unannotated documents from the WWW
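For Task 1 the 400 training documents are split into the four fixed sets (Set0–Set3) used for 4-fold cross-validation. A minimal sketch of how such a partition could be produced; the function name and random seed are illustrative, not part of the official challenge distribution:

```python
import random

def make_folds(doc_ids, n_folds=4, seed=0):
    """Partition document ids into fixed folds (Set0..Set3) for cross-validation."""
    ids = list(doc_ids)
    random.Random(seed).shuffle(ids)
    return {f"Set{i}": ids[i::n_folds] for i in range(n_folds)}

# 400 training workshop CFPs -> four sets of 100 documents each
folds = make_folds(range(400))

# One cross-validation round: hold out Set0, train on Set1-Set3
held_out = folds["Set0"]
train_docs = [d for name, fold in folds.items() if name != "Set0" for d in fold]
```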
Preprocessing • GATE • Tokenisation • Part-Of-Speech • Named-Entities • Date, Location, Person, Number, Money
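A rough sketch of the per-token information this preprocessing makes available to participants; the field names below are illustrative assumptions, not the actual GATE annotation schema distributed with the corpus:

```python
# Illustrative only: one preprocessed token carrying the feature types listed above.
token = {
    "string": "Sheffield",   # surface form from tokenisation
    "pos": "NNP",            # part-of-speech tag
    "ne_type": "Location",   # named-entity class (Date, Location, Person, Number, Money)
    "span": (112, 121),      # character offsets in the document
}
```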
Annotation Exercise • Annotators: Christopher Brewster, Sam Chapman, Fabio Ciravegna, Claudio Giuliano, Jose Iria, Ashred Khan, Vita Lanfranchi, Alberto Lavelli, Barry Norton • 4+ months • Initial consultation: 40 documents – 2 annotators • Second consultation: 100 documents – 4 annotators • Determine annotation disagreement • Full annotation – 10 annotators
Evaluation Tasks • Task 1 – ML for IE: annotating implicit information • 4-fold cross-validation on 400 training documents • Final test on 200 unseen test documents • Task 2a – Learning Curve: effect of increasing amounts of training data on learning • Task 2b – Active Learning: learning to select documents • Given seed documents, select the documents to add to the training set • Task 3a – Enriched Data: same as Task 1 but can use the 500 unannotated documents • Task 3b – Enriched & WWW Data: same as Task 1 but can use all available unannotated documents
Evaluation • Precision / Recall / F1 measure • MUC Scorer • Automatic Evaluation Server • Exact matching • Extract every slot occurrence
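Scoring is exact-match precision, recall and F1 over every slot occurrence. A minimal sketch of the computation (not the MUC scorer itself), assuming each extraction is represented as a (slot, filler, span) tuple:

```python
def precision_recall_f1(predicted, gold):
    """Exact-match scoring; each item is a (slot, filler, span) tuple."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                      # correctly extracted occurrences
    p = tp / len(predicted) if predicted else 0.0   # precision
    r = tp / len(gold) if gold else 0.0             # recall
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0    # harmonic mean of p and r
    return p, r, f1
```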
Task 1 Information Extraction with all the available data
Task 2a Learning Curve
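Task 2a plots system performance against the amount of available training data. A minimal sketch of such an evaluation loop, where train_fn and score_fn are hypothetical hooks for a participant's own training and scoring routines, and the step sizes are illustrative rather than the official ones:

```python
def learning_curve(train_fn, score_fn, train_docs, test_docs,
                   steps=(40, 80, 160, 320, 400)):
    """Train on increasing amounts of data and score each model on the fixed test set."""
    results = []
    for n in steps:
        model = train_fn(train_docs[:n])                 # hypothetical training routine
        results.append((n, score_fn(model, test_docs)))  # hypothetical F1 scorer
    return results
```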
Task 2b Active Learning
Active Learning procedure
• Start: 400 potential training documents, 200 test documents
• Round 1: select 40 documents from the pool as the initial training set (Subset0), leaving 360 potential training documents; train and test
• Round 2: apply the model built on Subset0 to the remaining pool, select 40 further documents (320 remain) to give 80 training documents (Subset0,1); train and test
• Round 3: apply the model built on Subset0,1 to the remaining pool, select 40 further documents (280 remain); train and test
• The extract–select–train–test cycle repeats, adding 40 documents per round
Task 2b: Active Learning – selection strategies • Amilcare: maximum divergence from the expected number of tags • Hachey: maximum divergence between two classifiers built on different feature sets • Yaoyong (Gram–Schmidt): maximum divergence between example subsets
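The select–train–test cycle from the preceding slides, sketched with a generic informativeness score; train_fn and score_fn are hypothetical hooks standing in for any of the three divergence-based strategies above:

```python
def active_learning(pool, seed_docs, train_fn, score_fn, batch=40, rounds=9):
    """Iteratively move the `batch` highest-scoring documents from the pool into
    the training set (40 seeds + 9 rounds of 40 covers the 400-document pool).

    `train_fn(docs)` builds an extractor; `score_fn(model, doc)` returns an
    informativeness score (e.g. divergence from the expected number of tags,
    or disagreement between two classifiers). Both are placeholders.
    """
    labelled, pool = list(seed_docs), list(pool)
    for _ in range(rounds):
        model = train_fn(labelled)
        pool.sort(key=lambda d: score_fn(model, d), reverse=True)
        labelled += pool[:batch]   # annotate the selected documents and add them
        pool = pool[batch:]
    return labelled
```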
Task 2b: Active Learning – increased F1 measure over random selection
Task 3 Semi-supervised learning (insufficient participation)
Conclusions (Task 1) • The top four systems used different algorithms • Amilcare: rule induction • Yaoyong: SVM • Stanford: CRF • Hachey: HMM
Conclusions (Task 1: Test Corpus) • The same algorithm (SVM) produced different results across systems
Conclusions (Task 1: 4-fold Corpus) • The same algorithm (SVM) produced different results across systems
Conclusions (Task 1) • Large variation in performance across slots • Good performance on: • “Important” dates and workshop homepage • Acronyms (for Amilcare) • Poor performance on: • Workshop name and location • Conference name and homepage
Conclusions (Task 2 & Task 3) • Task 2a: Learning Curve – systems’ performance is largely as expected • Task 2b: Active Learning – two approaches, Amilcare and Hachey, showed benefits • Task 3: Enriched Data – insufficient participation to evaluate the use of enriched data
Future Work • Performance differences: • Systems: what determines good/bad performance • Slots: different systems were better/worse at identifying different slots • Combine approaches • Active learning • Enriched data • Overcoming the need for annotated data • Extensions • Data: use different data sets and other features, using (HTML) structured data • Tasks: relation extraction