
Natural Language Generation 74.793 Research Presentation



  1. Natural Language Generation 74.793 Research Presentation Presenter: Shamima Mithun

  2. Overview • Introduction: What is Natural Language Generation (NLG)? • Uses of Natural Language Generation: When are NLG systems appropriate? Applications of NLG; an example NLG system • Architectures for NLG • How to Evaluate NLG Systems • Conclusions • Demo of the ILEX System

  3. What is NLG? “Natural language generation is the process of deliberately constructing a natural language text in order to meet specified communicative goals.” [McDonald 1992], cited in [Dale and Reiter 1999] “Natural Language Generation (NLG) is the process of constructing natural language outputs from non-linguistic inputs.” [Jurafsky and Martin 2000]

  4. What is NLG? (contd.) Non-linguistic Input → NLG System → Output Text • Goal: produce understandable and appropriate texts in English or other human languages • Input: some underlying non-linguistic representation of information, e.g. meteorological maps, airline/railway schedule databases • Output: documents, reports, explanations, help messages, and other kinds of text • Knowledge sources required: knowledge of the language and of the domain [Dale and Reiter 1999]

  5. When Are NLG Systems Appropriate? • Text vs. graphics: which medium is better? • Computer generation vs. human authoring: is the necessary source data available? Is automation economically justified? • NLG vs. simple string concatenation: how much variation occurs in the output texts? [Reiter and Dale 1999]

  6. Applications of NLG • Automated document production: weather forecasts, summarizing statistical data, answering questions, etc. • Information presentation: medical records, weather forecasts, etc. • Entertainment: jokes, stories, poetry, etc. • Teaching • Dialog systems [Rambow et al., 2001]

  7. Applications of NLG (contd.) Two types of NLG systems: • Systems that produce a document without human help: summaries of statistical data, generating weather forecasts, etc. • Systems that help human authors create documents: customer-service letters, patent claims, technical documents, job descriptions, etc. [Reiter and Dale 2000]

  8. NLG System: FoG Reiter and Dale describe the FoG system as follows: • Function: produces textual weather reports in English or French • Input: graphical weather depiction • User: Environment Canada (Canadian Weather Service) • Developer: CoGenTex • Status: fielded, in operational use since 1992 [Reiter and Dale 1999]

  9. NLG System: FoG [Figure: sample input (graphical weather depiction) and output (textual forecast), from Reiter and Dale 1999]

  10. Architectures for NLG

  11. NLG System Architectures: Goal → Text Planner → Text Plan → Sentence Planner → Sentence Plan → Linguistic Realiser → Surface Text From [Jurafsky and Martin 2000] and [Reiter and Dale 1997]

  12. Discourse Planner • This component starts with a communicative goal and makes choices about: content selection, the discourse plan, lexical selection, and micro planning (aggregation and referring expressions) • It selects content from the knowledge base and then structures that content appropriately • The resulting discourse plan specifies all the choices made for the entire communication

  13. Content Selection Content selection: the process of deciding what information should be communicated in the text • Creates a set of MESSAGES from the underlying data sources • The message-creation process and the form and content of the messages created are highly application-dependent • Generally, messages are expressed in some formal language (e.g., the Sentence Planning Language) with notions of ENTITIES, CONCEPTS and RELATIONS in the domain

  14. Content Selection (contd.) For example: specific trains, places, and times as entities; the property of being the next train as a concept; and departure as a relation between trains and times.

Message-id: msg01
Relation: IDENTITY
Arguments:
  arg1: NEXT-TRAIN
  arg2: CALEDONIAN-EXPRESS
→ The next train is the Caledonian Express.

Message-id: msg02
Relation: DEPARTURE
Arguments:
  departure-entity: CALEDONIAN-EXPRESS
  departure-location: ABERDEEN
  departure-time: 1000
→ The Caledonian Express leaves Aberdeen at 10 am.
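The train messages above can be sketched in code. This is a minimal illustrative sketch, not a real NLG library: the `Message` class, the `select_content` function, and the schedule dictionary are all assumptions made for the example.

```python
from dataclasses import dataclass

# Illustrative message representation for content selection.
# Field names mirror the slide's example; they are not a standard API.
@dataclass
class Message:
    message_id: str
    relation: str    # e.g. IDENTITY, DEPARTURE
    arguments: dict  # entity/concept slots, application-dependent

def select_content(schedule: dict) -> list:
    """Turn raw (non-linguistic) schedule data into messages."""
    train = schedule["next_train"]
    return [
        Message("msg01", "IDENTITY",
                {"arg1": "NEXT-TRAIN", "arg2": train["name"]}),
        Message("msg02", "DEPARTURE",
                {"departure-entity": train["name"],
                 "departure-location": train["from"],
                 "departure-time": train["time"]}),
    ]

messages = select_content(
    {"next_train": {"name": "CALEDONIAN-EXPRESS",
                    "from": "ABERDEEN", "time": "1000"}})
print([m.relation for m in messages])  # ['IDENTITY', 'DEPARTURE']
```

The point is that the messages are a formal, language-neutral layer: later stages decide how (and in which language) to express them.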

  15. Discourse Plan • Discourse planning is the task of structuring the messages produced by the content selection process • Two predominant mechanisms for building discourse structures: • Text schemata • Rhetorical relations

  16. Text Schemata [Figure: knowledge-base representation of saving a file as a simple procedural hierarchy] [Figure: a schema for expressing procedures] From [Jurafsky and Martin 2000]
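A text schema is essentially a fixed template for a text type, instantiated against the knowledge base. The sketch below is a hypothetical toy version of the slide's "save a file" procedure: the knowledge-base dictionary and the schema function are assumptions for illustration only.

```python
# Hypothetical procedural hierarchy (goal -> ordered substeps),
# standing in for the knowledge-base figure on the slide.
procedure = {
    "goal": "save a file",
    "steps": ["choose Save from the File menu",
              "type a file name",
              "click the Save button"],
}

def apply_schema(proc: dict) -> str:
    """Schema for expressing procedures: state the goal,
    then enumerate the substeps in their stored order."""
    lines = [f"To {proc['goal']}:"]
    for i, step in enumerate(proc["steps"], 1):
        lines.append(f"{i}. {step[0].upper()}{step[1:]}.")
    return "\n".join(lines)

print(apply_schema(procedure))
```

Because the schema fixes both the order and the phrasing pattern, it illustrates the rigidity discussed on the next slide: any variation in structure requires a new schema.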

  17. Drawbacks of Text Schemata • Impractical when the text being generated requires more structural variety and richness of expression, e.g., when certain segments of the text should be expressed in a different manner or in a different order • No higher-level structure relating the sentences together; for example, if we have already explained a process in some detail, we might not want to do it again

  18. Rhetorical Relations • Rhetorical Structure Theory (RST) is a descriptive theory of text organization based on the relationships that hold between parts of a text. Example: i) I love to collect classic automobiles. My favorite car is my 1899 Duryea. ii) I love to collect classic automobiles. My favorite car is my 2001 Toyota. (In (i) the second sentence elaborates the first; in (ii) a 2001 Toyota is not a classic automobile, so the elaboration relation fails and the text is incoherent.) • RST designates a central segment of text, called the nucleus, and a more peripheral segment, called the satellite • RST relations are defined in terms of the constraints placed on the nucleus, on the satellite, and on the combination of both

  19. Rhetorical Relations (contd.)
Name: Expand Purpose
Effect: (COMPETENT hearer (DO-ACTION ?action))
Constraints: (AND (c-get-all-substeps ?action ?sub-actions)
             (NOT (singular-list? ?sub-actions)))
Nucleus: (COMPETENT hearer (DO-SEQUENCE ?sub-actions))
Satellites: ((RST-PURPOSE (INFORM s hearer (DO ?action))))

Name: Expand Sub-Actions
Effect: (COMPETENT hearer (DO-SEQUENCE ?actions))
Constraints: NIL
Nucleus: (foreach ?actions (RST-SEQUENCE (COMPETENT hearer (DO-ACTION ?actions))))
Satellites: NIL
From [Jurafsky and Martin 2000]
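The idea behind such plan operators can be sketched as a tiny top-down planner: an operator applies when its constraints hold, and it decomposes a communicative goal into a nucleus and satellites. Everything below (the `substeps` table, the operator encoding, the action names) is an illustrative assumption, not the actual planner from the literature.

```python
# Hypothetical knowledge base: actions that decompose into sub-actions.
substeps = {"save-file": ["open-menu", "enter-name", "confirm"]}

# One operator, loosely modeled on "Expand Purpose":
# applicable when the action has sub-actions; the nucleus makes the
# hearer competent in the sequence, the satellite states the purpose.
operators = [
    {"name": "Expand Purpose",
     "applicable": lambda act: act in substeps,
     "nucleus": lambda act: [("COMPETENT", "DO-SEQUENCE", substeps[act])],
     "satellites": lambda act: [("RST-PURPOSE", ("INFORM", "DO", act))]},
]

def plan(goal_action: str) -> dict:
    """Expand a (COMPETENT hearer (DO-ACTION goal)) goal, if possible."""
    for op in operators:
        if op["applicable"](goal_action):
            return {"operator": op["name"],
                    "nucleus": op["nucleus"](goal_action),
                    "satellites": op["satellites"](goal_action)}
    # No operator applies: the goal stays primitive.
    return {"operator": None,
            "nucleus": [("COMPETENT", "DO-ACTION", goal_action)],
            "satellites": []}

print(plan("save-file")["operator"])  # Expand Purpose
```

A primitive action like "open-menu" has no substeps, so no operator fires and it is left for direct realization.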

  20. Lexical Selection Lexical selection: the process of deciding which words and phrases should be used in order to transform the underlying messages into a readable text. Handling lexical selection requires that the generation system deal with two issues: • It must be able to choose the appropriate lexical item when more than one alternative exists • It must be able to choose the appropriate grammatical form for expressing the concept [Jurafsky and Martin 2000]
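Both issues can be shown in a toy sketch: choosing between alternative lexemes for one concept, then choosing a grammatical form for the chosen lexeme. The lexicon, the `register` parameter, and the crude inflection rule are all assumptions for this example.

```python
# Hypothetical lexicon: one concept, several candidate lexemes.
LEXICON = {
    "DEPARTURE": {"formal": "depart", "informal": "leave"},
}

def choose_lexeme(concept: str, register: str = "informal") -> str:
    """Issue 1: pick among alternatives (here, by register)."""
    return LEXICON[concept][register]

def inflect(verb: str, third_person_sg: bool = True) -> str:
    """Issue 2: pick a grammatical form (very rough -s rule)."""
    return verb + "s" if third_person_sg else verb

lemma = choose_lexeme("DEPARTURE", register="informal")
print(f"The Caledonian Express {inflect(lemma)} Aberdeen at 10 am.")
# The Caledonian Express leaves Aberdeen at 10 am.
```

Switching `register="formal"` yields "departs" instead of "leaves" from the same underlying DEPARTURE message.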

  21. Micro Planning • Aggregation: the process of grouping messages together into sentences; not always necessary. Example: "You’ve just compiled and run a simple C program." Without aggregation: "You’ve just compiled a simple C program. You’ve just run a simple C program." • Referring expressions: selecting words and phrases to identify entities (e.g., Caledonian Express, it, or this train), and generating deictic expressions

  22. Surface Realization • This component receives the fully specified discourse plan and generates individual sentences as constrained by its lexicon and grammar • If the plan specifies multiple-sentence output, the surface realizer is called multiple times • There is no general consensus as to the level at which the input to the surface realizer should be specified • One approach to surface realization: • Functional Unification Grammar

  23. Functional Unification Grammar • Functional Unification Grammar uses unification to manipulate and reason about feature structures • The available grammar is unified with an input specification represented in the same feature-structure formalism • The unification process takes the features specified in the input and unifies them with those in the grammar, producing a full feature structure which can then be linearized to form the sentence output

  24. Functional Unification Grammar (contd.) Sample output: "The system will save the document" • Propositional content specification: a saving action done by a system entity to a document entity • Specification of the grammatical form: a future-tense assertion and lexical items ("save", "system", and "document")

  25. [Figure from Jurafsky and Martin 2000]

  26. Functional Unification Grammar (contd.) Input (functional description):
CAT      S
ACTOR    [HEAD [LEX SYSTEM]]
PROCESS  [HEAD [LEX SAVE]]
TENSE    FUTURE
GOAL     [HEAD [LEX DOCUMENT]]
From [Jurafsky and Martin 2000]
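The unification step itself can be sketched as recursive merging of nested dictionaries: shared features must be compatible, and features present in only one structure are carried over. This is a toy model of feature-structure unification, not the actual FUF implementation, and the grammar fragment (with its VOICE feature) is a made-up assumption.

```python
def unify(a, b):
    """Unify two feature structures; return None on failure."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, val in b.items():
            if key in out:
                merged = unify(out[key], val)
                if merged is None:
                    return None      # conflicting values: unification fails
                out[key] = merged
            else:
                out[key] = val       # feature only in b: carried over
        return out
    return a if a == b else None     # atomic values must match exactly

# Input functional description from the slide.
inp = {"CAT": "S",
       "ACTOR": {"HEAD": {"LEX": "SYSTEM"}},
       "PROCESS": {"HEAD": {"LEX": "SAVE"}},
       "TENSE": "FUTURE",
       "GOAL": {"HEAD": {"LEX": "DOCUMENT"}}}

# Hypothetical grammar fragment for sentences (illustrative only).
grammar_s = {"CAT": "S", "VOICE": "ACTIVE"}

result = unify(inp, grammar_s)
print(result["VOICE"], result["TENSE"])  # ACTIVE FUTURE
```

The enriched structure (`VOICE` filled in by the grammar, input features preserved) is what a realizer would then linearize into "The system will save the document".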

  27. [Figure from Jurafsky and Martin 2000]

  28. Reusable Surface Realization Packages FUF: a reusable surface realization package for English • The package is built on functional unification of feature structures • Given the grammar and the input specification, the system constructs syntactically correct sentence output Drafter: a system to support the production of software documentation in English and French • Drafter [Power et al., 1998] uses FUF for surface realization • It uses Rhetorical Structure Theory (RST)-based planning for discourse planning

  29. Evaluating Generation Systems In early work, the quality of an NLG system was assessed by the system builders themselves: if the system gave correct output, it was judged a success. Current approaches: • Convene a panel of experts to judge the output of the generator in comparison with text produced by human authors • Judge how effective the generated text is at achieving its goal [Jurafsky and Martin 2000]

  30. Conclusions • Many NLG applications are being investigated, but not all are successful; a few systems are in operational use, e.g., FoG • The evaluation of NLG systems is currently receiving much attention • In the late 1980s and early 1990s the trend was to construct reusable NLG components, e.g., FUF; now the trend is to port systems to other languages and platforms

  31. References
Jurafsky D., and Martin J.H., 2000. "Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition". Prentice Hall.
Reiter E., and Dale R., 1997. "Building Applied Natural Language Generation Systems". Natural Language Engineering, Cambridge University Press.
Reiter E., and Dale R., 2000. "Building Natural Language Generation Systems". Cambridge University Press.
Bateman J., and Zock M., 2001. "The B-to-Z of Natural Language Generation: an almost complete list". Oxford Handbook of Computational Linguistics.
Rambow O., Bangalore S., and Walker M., 2001. "Natural Language Generation in Dialog Systems".
Reiter E., and Dale R., 1999. "Building Natural Language Generation Systems" (tutorial). www.csd.abdn.ac.uk/~ereiter/papers/eacl99-tut.ppt
Power R., Scott D., and Evans R., 1998. "What You See Is What You Meant: direct knowledge editing with natural language feedback".
Elhadad M., 1993. "FUF: the Universal Unifier. User Manual, Version 5.2".
FUF: http://www.cs.bgu.ac.il/research/projects/surge/index.html

  32. Demo on ILEX System

  33. Thanks
