PASCAL CHALLENGE ON EVALUATING MACHINE LEARNING FOR INFORMATION EXTRACTION
Neil Ireson, Local Challenge Coordinator
Web Intelligence Group, Department of Computer Science, University of Sheffield, UK
Organisers • Sheffield – Fabio Ciravegna • UCD Dublin – Nicholas Kushmerick • ITC-IRST – Alberto Lavelli • University of Illinois – Mary-Elaine Califf • FairIsaac – Dayne Freitag
Outline • Challenge Goals • Data • Tasks • Participants • Experimental Results • Conclusions
Goal: Provide a testbed for comparative evaluation of ML-based IE • Standardisation • Data • Partitioning • Same set of features • Corpus preprocessed using GATE • No features allowed other than the ones provided • Explicit Tasks • Evaluation Metrics • For future use • Available for further tests with the same or new systems • Possible to publish new corpora or tasks
Data (Workshop CFP) • Training Data: 400 Workshop CFPs (1993–2000), partitioned into four sets (Set0–Set3) • Testing Data: 200 Workshop CFPs (2000–2005) • Unannotated Data 1: 250 Workshop CFPs • Unannotated Data 2: 250 Conference CFPs (WWW)
Preprocessing • GATE • Tokenisation • Part-Of-Speech • Named-Entities • Date, Location, Person, Number, Money
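The slide lists only the feature types supplied by GATE; as a rough illustration (the record layout and example values below are hypothetical, not the challenge's actual file format), each preprocessed token can be pictured as carrying these attributes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Token:
    """One preprocessed token with the GATE-derived features provided to participants."""
    text: str                # surface form from tokenisation
    pos: str                 # part-of-speech tag
    ne_type: Optional[str]   # named-entity type: Date, Location, Person, Number, Money, or None

# Illustrative fragment of an annotated call-for-papers sentence (values are made up)
tokens = [
    Token("Submission", "NN", None),
    Token("deadline", "NN", None),
    Token(":", ":", None),
    Token("15", "CD", "Date"),
    Token("March", "NNP", "Date"),
    Token("1998", "CD", "Date"),
]
```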
Evaluation Tasks • Task 1 - ML for IE: Annotating implicit information • 4-fold cross-validation on 400 training documents • Final test on 200 unseen test documents • Task 2a - Learning Curve: Effect of increasing amounts of training data on learning • Task 2b - Active Learning: Learning to select documents • Given seed documents, select the documents to add to the training set • Task 3a - Semi-supervised Learning: Given Data • Same as Task 1 but can use the 500 provided unannotated documents • Task 3b - Semi-supervised Learning: Any Data • Same as Task 1 but can use all available unannotated documents
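A minimal sketch of the Task 1 cross-validation protocol, assuming the 400 training documents arrive already split into the four provided partitions (Set0–Set3); `train_system` and `evaluate` are placeholders for a participant's learner and the scorer, not part of the challenge infrastructure:

```python
def cross_validate(partitions, train_system, evaluate):
    """4-fold cross-validation: hold out each provided partition (Set0-Set3) in turn."""
    scores = []
    for i, held_out in enumerate(partitions):
        # Train on the other three partitions, test on the held-out one
        train_docs = [doc for j, part in enumerate(partitions) if j != i for doc in part]
        model = train_system(train_docs)
        scores.append(evaluate(model, held_out))
    return sum(scores) / len(scores)  # mean score across the four folds
```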
Evaluation • Precision / Recall / F1-measure • MUC Scorer • Automatic Evaluation Server • Exact matching • Extract every slot occurrence
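As a simplified illustration of exact matching (my own sketch, not the MUC scorer or the evaluation server's code), a predicted slot occurrence counts as correct only if it matches a gold occurrence in both slot type and text span:

```python
def score_slots(gold, predicted):
    """Exact-match precision/recall/F1 over slot occurrences.

    gold, predicted: lists of (slot_type, start_offset, end_offset) tuples.
    """
    gold_set, pred_set = set(gold), set(predicted)
    correct = len(gold_set & pred_set)
    precision = correct / len(pred_set) if pred_set else 0.0
    recall = correct / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# e.g. one gold slot extracted exactly, one missed (slot names here are illustrative)
p, r, f = score_slots(
    gold=[("workshopdate", 120, 135), ("location", 40, 52)],
    predicted=[("workshopdate", 120, 135)],
)
```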
Task 1: Information Extraction with all the available data
Task 2a: Learning Curve
Task 2b: Active Learning
Task 2b: Active Learning • Amilcare: maximum divergence from the expected number of tags • Hachey: maximum divergence between two classifiers built on different feature sets • Yaoyong (Gram-Schmidt): maximum divergence between example subsets
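To make the disagreement-style selection concrete, here is a hypothetical sketch in the spirit of Hachey's approach (two classifiers trained on different feature sets); the models, the divergence function, and the batch size are placeholders, not the actual systems' code:

```python
def select_documents(pool, model_a, model_b, divergence, batch_size=10):
    """Pick the unannotated documents on which two classifiers disagree most.

    pool: unannotated documents; model_a / model_b: classifiers trained on
    different feature sets; divergence: function comparing their predictions.
    """
    ranked = sorted(
        pool,
        key=lambda doc: divergence(model_a.predict(doc), model_b.predict(doc)),
        reverse=True,
    )
    # The top-ranked documents are sent for annotation and added to the training set
    return ranked[:batch_size]
```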
Task 2b: Active Learning - Increased F-measure over random selection
Task 3: Semi-supervised Learning (no significant participation)
Conclusions (Task 1) • Top three (four) systems use different algorithms: Rule Induction, SVM, CRF & HMM • The same algorithm (SVM) produced different results • Brittle performance: large variation in per-slot performance • Post-processing
Conclusions (Task 2 & Task 3) • Task 2a: Learning Curve - systems' performance is largely as expected • Task 2b: Active Learning - two approaches, Amilcare and Hachey, showed benefits • Task 3: Semi-supervised Learning - insufficient participation to evaluate the use of enriched data
Future Work • Performance differences: • Systems: what determines good/bad performance • Slots: different systems were better/worse at identifying different slots • Combine approaches • Active Learning • Semi-supervised Learning • Overcoming the need for annotated data • Extensions • Data: use different data sets and other features, including (HTML) structured data • Tasks: relation extraction
Thank You http://tyne.shef.ac.uk/Pascal