
Machine Learning of Temporal Relations



  1. Machine Learning of Temporal Relations Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong Min Lee and James Pustejovsky

  2. Motivation • What is a temporal relation? • Newspaper texts, narratives, etc. describe events that occur in time and specify the temporal location and order of these events. • The capability to identify these events and locate them in time is crucial. • Potential applications? • Relative ordering of events in multi-document summarization • Question Answering

  3. Overview • Investigates a machine learning approach for temporally ordering and anchoring events in natural language text • Describes annotation scheme and challenges • Temporal reasoning as an oversampling method • Comparison with baselines

  4. Introduction • Temporal ordering of events is inferred from temporal adverbials, tense, aspect, rhetorical relations, and background knowledge. Examples: (a) Max stood up. John greeted him. – follows the narrative convention: the order of events mirrors the order of the text (b) Max entered the room. He had drunk a lot of wine. – the narrative order is overridden by a discourse relation (c) The company announced Tuesday that third-quarter sales had fallen. – use of a temporal adverbial

  5. Annotation Scheme: TimeML • An annotation scheme for markup of events, times, and their temporal relations • TimeML tags: • EVENT: annotates elements in a text that mark the semantic events described by it; flags tensed verbs, adjectives, and nominals • INSTANCE: information about a particular instance of an event – part of speech, tense, aspect, modality, and polarity • TIMEX3: flags time expressions • A temporal anchor facilitates the use of temporal functions to calculate the value of an underspecified temporal expression (see the sketch below) • E.g., an article might include a document creation time such as "January 3, 2006". Later in the article, the temporal expression "today" may occur. By anchoring the TIMEX3 for "today" to the document creation time, we can determine its exact value.
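The anchoring computation itself is simple once the anchor is known. A minimal sketch in Python, assuming a toy table of day-level offsets (the function and its coverage are illustrative, not part of TimeML):

    from datetime import date, timedelta

    def resolve_relative_timex(expression, anchor):
        """Resolve a simple underspecified time expression against an anchor date."""
        offsets = {"today": 0, "yesterday": -1, "tomorrow": 1}  # illustrative subset
        key = expression.lower()
        if key in offsets:
            return anchor + timedelta(days=offsets[key])
        raise ValueError("unsupported expression: " + expression)

    dct = date(2006, 1, 3)  # document creation time, as in the slide's example
    print(resolve_relative_timex("today", dct))  # -> 2006-01-03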

  6. TimeML • SIGNAL: used to annotate temporal function words such as "after", "during", and "when"; these signals are then used in the representation of a temporal relationship (see the fragment below) • TLINK: links tagged events to other events and/or times • ALINK: represents an aspectual connection between two event instances • SLINK: used to capture subordination relationships that involve event modality, evidentiality, and factuality
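For illustration, a hypothetical TimeML fragment in the style of the sample on the next slide, where the TLINK cites the SIGNAL that licenses it (IDs and attribute values are invented):

    Sales <EVENT eventID="e1" class="occurrence" tense="past" aspect="none">rose</EVENT>
    <SIGNAL sid="s1">after</SIGNAL> the company
    <EVENT eventID="e2" class="occurrence" tense="past" aspect="none">restructured</EVENT>.
    <TLINK eventID="e1" relatedToEventID="e2" signalID="s1" relType="AFTER"/>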

  7. Sample Annotation
    The company <EVENT eventID="e1" class="reporting" tense="past" aspect="none">announced</EVENT> <TIMEX3 tid="t2" type="DATE" temporalFunction="false" value="1998-01-08">Tuesday</TIMEX3> that third-quarter sales <EVENT eventID="e2" class="occurrence" tense="past" aspect="perfect">had fallen</EVENT>.
    <TLINK eventID="e1" relatedToEventID="e2" relType="AFTER"/>
    <TLINK eventID="e1" relatedToTimeID="t2" relType="IS_INCLUDED"/>

  8. Temporal Relation Types • 6 temporal relations: • SIMULTANEOUS • BEFORE • IBEFORE • BEGINS • ENDS • INCLUDES • Allen's relations: (figure not reproduced in transcript)
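TimeML also defines an inverse for each relation type (e.g., BEFORE/AFTER), which is how an ordered pair can always be reduced to the six types above. A minimal normalization sketch (the helper and its output convention are illustrative):

    # Inverse map for TimeML relation types; SIMULTANEOUS is its own inverse.
    INVERSE = {
        "BEFORE": "AFTER", "AFTER": "BEFORE",
        "IBEFORE": "IAFTER", "IAFTER": "IBEFORE",
        "BEGINS": "BEGUN_BY", "BEGUN_BY": "BEGINS",
        "ENDS": "ENDED_BY", "ENDED_BY": "ENDS",
        "INCLUDES": "IS_INCLUDED", "IS_INCLUDED": "INCLUDES",
        "SIMULTANEOUS": "SIMULTANEOUS",
    }

    CANONICAL = {"SIMULTANEOUS", "BEFORE", "IBEFORE", "BEGINS", "ENDS", "INCLUDES"}

    def normalize(x, y, rel):
        """Flip a link if needed so only the six canonical relation names occur."""
        return (x, y, rel) if rel in CANONICAL else (y, x, INVERSE[rel])

    print(normalize("e1", "e2", "AFTER"))  # -> ('e2', 'e1', 'BEFORE')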

  9. Machine learning approach • Lack of TLINK coverage in human annotation could be helped by preprocessing, provided it meets some threshold of accuracy • Machine learning approach • Challenges: • noise in the corpus • sparseness of links • TLINK class distributions in the OTC corpus (table not reproduced in transcript)

  10. Initial Learning • Once a tagger has tagged the events and times, the first task (A) is to link events and/or times, and the second task (B) is to label the links. • TLINK inference as a classification problem: given an ordered pair of elements X and Y, where X and Y are events or times which the human has related temporally via a TLINK, the classifier has to assign a label in RelTypes.

  11. Initial Learning • Feature vectors: TimeML features, with the TLINK class as the vector's class feature • For each event in an event-ordering pair: • the event class, aspect, modality, tense, and negation (all nominal features); the event string and the signal, a preposition/adverb (e.g., on in "reported on Tuesday"), which are string features; and contextual features indicating whether the same tense and the same aspect hold for both elements in the event pair • For event-time links: • event and signal features along with TIMEX3 time features • Maximum Entropy classifier: 77% (see the sketch below)
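Concretely, each pair becomes a bag of nominal and string features, and a maximum entropy classifier is equivalent to multinomial logistic regression over them. A minimal sketch using scikit-learn with invented toy instances (the paper's actual feature set and MaxEnt implementation differ):

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Two toy event-event instances described by TimeML-style features.
    X = [
        {"class1": "reporting", "class2": "occurrence", "tense1": "past",
         "tense2": "past", "aspect1": "none", "aspect2": "perfect",
         "sameTense": True, "sameAspect": False},
        {"class1": "occurrence", "class2": "occurrence", "tense1": "past",
         "tense2": "past", "aspect1": "none", "aspect2": "none",
         "sameTense": True, "sameAspect": True},
    ]
    y = ["AFTER", "BEFORE"]  # TLINK class labels

    # Multinomial logistic regression is the standard maximum entropy model.
    clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(X, y)
    print(clf.predict([X[0]]))  # classify a pair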

  12. Temporal Reasoning • Transitive closure – SputLink: a constraint propagation algorithm • Takes the known temporal relations in a text and derives new implied relations from them, in effect making explicit what was implicit.
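A toy illustration of the idea (not SputLink itself): compose known relations through a composition table until a fixpoint is reached. Only two composition entries are shown; a real table covers all relation pairs.

    # Minimal composition table: rel(A,B) composed with rel(B,C) gives rel(A,C).
    COMPOSE = {
        ("BEFORE", "BEFORE"): "BEFORE",
        ("INCLUDES", "INCLUDES"): "INCLUDES",
    }

    def close(links):
        """Naive fixpoint closure over a set of (x, rel, y) triples."""
        links = set(links)
        changed = True
        while changed:
            changed = False
            for (a, r1, b) in list(links):
                for (b2, r2, c) in list(links):
                    if b == b2 and (r1, r2) in COMPOSE:
                        new = (a, COMPOSE[(r1, r2)], c)
                        if new not in links:
                            links.add(new)
                            changed = True
        return links

    print(close({("e1", "BEFORE", "e2"), ("e2", "BEFORE", "e3")}))
    # the result now also contains ("e1", "BEFORE", "e3")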

  13. ML Results using closed and unclosed data (results table not reproduced in transcript)

  14. Improvement by Temporal Closure • Closure effectively creates a new classification problem with many more instances, providing more data to train on • The class distribution is further skewed, which results in a higher majority-class baseline • Closure produces additional data in such a way as to increase the frequencies and statistical power of existing features in the unclosed data, as opposed to adding new features
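The effect on the baseline is easy to see: the majority-class baseline is just the relative frequency of the most frequent label, so a more skewed distribution raises it. A quick illustration with made-up counts:

    from collections import Counter

    def majority_baseline(labels):
        """Accuracy of always guessing the most frequent class."""
        counts = Counter(labels)
        return counts.most_common(1)[0][1] / len(labels)

    # Made-up label counts before and after closure: closure multiplies
    # instances and skews the distribution toward the dominant class.
    unclosed = ["BEFORE"] * 50 + ["INCLUDES"] * 30 + ["SIMULTANEOUS"] * 20
    closed = ["BEFORE"] * 400 + ["INCLUDES"] * 80 + ["SIMULTANEOUS"] * 20
    print(majority_baseline(unclosed), majority_baseline(closed))  # 0.5 0.8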

  15. Baselines 1. Majority-class statistical baseline 2. Hand-coded rules (GTag) – intuition/heuristic based 3. Hybrid baseline: hand-coded rules (GTag) expanded with Google-induced rules (VerbOcean) 4. A machine learning version that learns from the imperfect annotation produced by (2)
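The hybrid baseline (3) can be pictured as rule lookup with a lexical fallback. A schematic sketch; the rule contents here are invented, though VerbOcean does provide relations such as happens-before between verb pairs:

    # Hypothetical lexical pairs in the spirit of VerbOcean's happens-before.
    VERBOCEAN_BEFORE = {("learn", "forget"), ("marry", "divorce")}

    def hybrid_label(event1, event2, gtag_rules):
        """Try GTag-style hand-coded rules first, then a lexical fallback."""
        for rule in gtag_rules:          # each rule: callable -> RelType or None
            label = rule(event1, event2)
            if label is not None:
                return label
        if (event1["lemma"], event2["lemma"]) in VERBOCEAN_BEFORE:
            return "BEFORE"
        return "UNKNOWN"                 # no rule fired

    # Example: no GTag rule fires, but the lexical pair does.
    print(hybrid_label({"lemma": "marry"}, {"lemma": "divorce"}, []))  # BEFORE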

  16. Results (Baselines) (table not reproduced in transcript)

  17. Conclusions • Semantic reasoning (logical axioms for temporal closure) is extremely valuable in addressing data sparseness. • Rule-based intuitions alone are prone to incompleteness and hard to tune without access to the distributions found in empirical data. • Lexical rules from the Google-derived VerbOcean are relevant, but too specific to apply more than a few times.

  18. Future Work • Acquire confidence weights for at least some of the intuitive rules in GTag from Google searches, so that there is a level playing field for integrating confidence weights from the fairly general GTag rules and the fairly specific VerbOcean-like lexical rules. • Further, the GTag and VerbOcean rules could be incorporated as features for machine learning, along with features from automatic preprocessing.
