This study examines whether history-based models can rival globally optimized models such as CRFs on structured prediction problems in NLP. Through POS tagging, named entity recognition, parsing, and other tasks, it shows how lookahead search can make history-based models competitive.
Learning with lookahead: Can history-based models rival globally optimized models? Yoshimasa Tsuruoka, Japan Advanced Institute of Science and Technology (JAIST); Yusuke Miyao, National Institute of Informatics (NII); Jun'ichi Kazama, National Institute of Information and Communications Technology (NICT)
History-based models • Structured prediction problems in NLP • POS tagging, named entity recognition, parsing, … • History-based models • Decompose the structured prediction problem into a series of classification problems • Have been widely used in many NLP tasks • MEMMs (Ratnaparkhi, 1996; McCallum et al., 2000) • Transition-based parsers (Yamada & Matsumoto, 2003; Nivre et al., 2006) • Becoming less popular
Part-of-speech (POS) tagging • I saw a dog with eyebrows • Perform multi-class classification at each word • Features are defined on observations (i.e., words) and the POS tags on the left • [Figure: candidate tags N, V, D, P shown at each word]
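A minimal sketch of this greedy, history-based setup, assuming a linear model over indicator features; the tag set, feature templates, and names (features, score, greedy_tag) are illustrative, not from the paper:

```python
# Greedy history-based POS tagging: one multi-class decision per word,
# with features over the words and the tags already assigned on the left.

TAGS = ["N", "V", "D", "P"]  # toy tag inventory from the slide

def features(words, i, prev_tag, tag):
    # Indicator features on the observation and the left tag history.
    return [
        f"word={words[i]}|tag={tag}",
        f"prev={prev_tag}|tag={tag}",
        f"word={words[i]}|prev={prev_tag}|tag={tag}",
    ]

def score(weights, feats):
    return sum(weights.get(f, 0.0) for f in feats)

def greedy_tag(weights, words):
    tags = []
    for i in range(len(words)):
        prev_tag = tags[-1] if tags else "<s>"
        # Multi-class classification at each word: pick the best-scoring tag.
        tags.append(max(TAGS, key=lambda t: score(weights, features(words, i, prev_tag, t))))
    return tags
```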
Dependency parsing • I saw a dog with eyebrows • [Animation: a transition-based parser builds the dependency tree one action at a time, attaching each word to its head (e.g., "I" to "saw") step by step]
Lookahead • Playing chess: "If I move this pawn, then the knight will be captured by that bishop, but then I can …"
POS tagging with lookahead • Consider all possible sequences of future tagging actions to a certain depth • I saw a dog with eyebrows • [Animation: at each word the tagger expands candidate tags (N, V, D, P) for the following words before committing to its current decision]
Dependency parsing • I saw a dog with eyebrows • [Animation: the same lookahead search applied to parsing actions]
Choosing the best action by search • [Figure: search tree — from the current state S, actions a1, a2, …, am lead to states S1, S2, …, Sm; search continues to a fixed depth, and each candidate action is scored by the best state (S1*, S2*, S3*, …) reachable beneath it]
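A hedged sketch of this depth-limited search, assuming a linear scoring model; actions, apply_action, and action_score stand in for the task-specific transition system and scorer and are not names from the paper:

```python
# Depth-limited lookahead: score each immediate action by the best total
# score reachable within `depth` further actions, then act greedily.

def lookahead(state, depth, actions, apply_action, action_score):
    """Best total score of the next `depth` actions from `state`
    (fewer if the structure completes first)."""
    cands = actions(state)
    if depth == 0 or not cands:
        return 0.0
    return max(
        action_score(state, a) + lookahead(apply_action(state, a), depth - 1,
                                           actions, apply_action, action_score)
        for a in cands
    )

def choose_action(state, depth, actions, apply_action, action_score):
    """Pick the action whose best depth-limited continuation scores highest."""
    return max(
        actions(state),
        key=lambda a: action_score(state, a) + lookahead(apply_action(state, a), depth,
                                                         actions, apply_action, action_score),
    )
```

Each decision expands on the order of m^(D+1) nodes, which is where the complexity on the next slide comes from.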
Decoding cost • Time complexity of lookahead search: O(n·m^(D+1)) • n: number of actions needed to complete the structure • m: average number of possible actions at each state • D: search depth • Time complexity of k-th order CRFs: O(n·m^(k+1)) • History-based models with depth-k lookahead are therefore comparable to k-th order CRFs in training/testing time (e.g., depth-1 lookahead matches a first-order CRF)
Perceptron learning with lookahead • [Figure: from the current state, actions a1, a2, …, am lead to states S1, S2, …, Sm without lookahead, and to best reachable states S1*, S2*, …, Sm* with lookahead; the update compares the chosen action against the correct action] • Guaranteed to converge with a linear scoring model
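A hedged sketch of the corresponding perceptron step: when the action selected by lookahead search differs from the correct one, the linear model's weights are nudged toward the gold features and away from the predicted ones. Exactly which features are collected along the search branches is specified in the paper; this sketch only shows the standard additive update, with illustrative names:

```python
# Standard additive perceptron update on a linear scoring model:
# promote features of the correct choice, demote those of the prediction.

def perceptron_update(weights, gold_feats, pred_feats, lr=1.0):
    if gold_feats == pred_feats:
        return  # prediction was correct; no update needed
    for f in gold_feats:
        weights[f] = weights.get(f, 0.0) + lr
    for f in pred_feats:
        weights[f] = weights.get(f, 0.0) - lr
```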
Experiments • Sequence prediction tasks • POS tagging • Text chunking (a.k.a. shallow parsing) • Named entity recognition • Syntactic parsing • Dependency parsing • Compared to first-order CRFs in terms of speed and accuracy
POS tagging • WSJ corpus • [Chart: tagging accuracy]
Training time • WSJ corpus • [Chart: training time in seconds]
POS tagging (+ tag trigram features) • WSJ corpus • [Chart: tagging accuracy]
Chunking (shallow parsing) • CoNLL 2000 data set • [Chart: F-score]
Named entity recognition • BioNLP/NLPBA 2004 data set • [Chart: F-score]
Dependency parsing • WSJ corpus (Zhang and Clark, 2008) • [Chart: F-score]
Related work • MEMMs + Viterbi: suffer from the label bias problem (Lafferty et al., 2001) • Learning as search optimization (LaSO) (Daumé III and Marcu, 2005): no lookahead • Structured perceptron with beam search (Zhang and Clark, 2008)
Conclusion • Can history-based models rival globally optimized models? • Yes: with lookahead they can be more accurate than CRFs • At the same computational cost as CRFs
Future work • Feature engineering • Flexible search extension/reduction • Easy-first tagging/parsing (Goldberg & Elhadad, 2010) • Max-margin learning