Natural Language Generation

Presentation Transcript


  1. Natural Language Generation Saurabh Chanderiya (07005004) Abhimanyu Dhamija (07005024) E K Venkatesh (07005031) G Hrudil (07005032) B Vinod Kumar (07d05018) Guide: Prof. Pushpak Bhattacharya

  2. Outline • What is Natural Language Generation? • Motivation • Stages in NLG • Applications of NLG • Evaluation Techniques • Conclusion

  3. What is Natural Language Generation? • Natural Language Generation (NLG) is the subfield of artificial intelligence and computational linguistics that focuses on computer systems that can produce understandable texts in English or other human languages [Reiter and Dale, 2000]

  4. What is Natural Language Generation? • Convert a computer-based representation into a natural language representation (the opposite of NLU) • The machine representation comprises some form of computerized data • Examples: • A database of daily temperature values in a city • An ontology • A collection of fairy tales • [Diagram: data/machine representation → NLG → natural language text, with NLU mapping in the opposite direction]

  5. Key Elements in NL Generation • Many choices are available – an NLG system needs to choose the most appropriate one • Example: denoting value-change • “the temperature rose” – increase in value • “the temperature plummeted” – drastic decrease in value • “the rain got heavier” – again an increase in value, but in a different context [Wikipedia] • Meeting the communication goals – so that the generated text is understandable to the target reader

  6. Motivating Example • Suppose you are asked to write an article on IIT Bombay. • How do you proceed? • Step 1: What all should I write about? How should I organize it? • History, Students, Professors, Gymkhana, Mood Indigo … • Start with description of gymkhana or history … • Step 2: What should my style be? • Editorial, Prose, Poetry … • Simple words … • Step 3: Pen it down

  7. Motivating Example • We have just identified the key stages in Natural Language Generation

  8. Motivating Example • We have just identified the key stages in Natural Language Generation • Step 1: What all should I write about? How should I organize it? • History, Students, Professors, Gymkhana, Mood Indigo … • Start with description of gymkhana or history … TEXT PLANNING (Content Determination and Document Structuring)

  9. Motivating Example • We have just identified the key stages in Natural Language Generation • Step 2: What should my style be? • Editorial, Prose, Poetry … • Simple words … MICROPLANNING (Lexical Choice, Referring Expression Generation, Aggregation)

  10. Motivating Example • We have just identified the key stages in Natural Language Generation • Step 3: Pen it down REALIZATION

  11. NLG Systems Architecture • [Diagram: input data and control data flow through a three-stage pipeline – Document Planning (Content Determination, Document Structuring) → Microplanning (Lexical Choice, Referring Expressions, Aggregation) → Realization – which produces the output text]

  12. Stages in NLG • The following different stages of Natural Language Generation can be identified: • Content Determination • Document Structuring • Lexical Choice • Referring Expression Generation • Aggregation • Realization • Each of these is considered in detail in the next few slides

  13. Content Determination • Deciding what information to mention in the text • Example: [Wikipedia] • An NLG system to summarize information about sick babies has the following information: • The baby is being given morphine via an IV drip • The baby's heart rate shows bradycardias (temporary drops) • The baby's temperature is normal • The baby is crying

  14. Content Determination • Factors affecting the decision could be • Communicative goal – the purpose of the text and who the reader is • A diagnosing doctor would be interested in the heart rate, while a parent would want to know whether the baby is crying • Size and level of detail • A formal report about the patient vs. an SMS to the doctor • How unusual the information is • Is it important to mention that the baby’s temperature is normal?

  15. Content Determination • Techniques employed • Schemas – predefined templates which explicitly specify what information is to be included • Based on rhetorical predicates • Rhetorical predicates specify the “role” played by each utterance in the text • Example: • “Mary has a pink coat” – an attributive predicate • Other rhetorical predicates: • particular illustration, evidence, inference, etc. [McKeown, 1985]

  16. Content Determination • Example schema using rhetorical predicates • Identification Schema (for providing definitions) [McKeown, 1985]: Identification (class & attribute) → Attributive → Particular Illustration • Sample text generated from this schema could be: Mumbai is an important economic region in Maharashtra. There are many textile mills in Mumbai. Bombay Dyeing is among the noteworthy textile mills.
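
A minimal sketch of schema-based content determination, assuming a toy knowledge base of sentences keyed by rhetorical predicate. The slot order follows McKeown's Identification schema, but every class and method name below is illustrative rather than a real API:

import java.util.List;
import java.util.Map;

public class IdentificationSchema {
    // The schema fixes which rhetorical predicates appear, and in what order.
    private static final List<String> SLOTS =
            List.of("identification", "attributive", "particular-illustration");

    public static String generate(Map<String, String> facts) {
        StringBuilder text = new StringBuilder();
        for (String predicate : SLOTS) {
            String sentence = facts.get(predicate);
            if (sentence != null) {          // skip slots the knowledge base cannot fill
                text.append(sentence).append(' ');
            }
        }
        return text.toString().trim();
    }

    public static void main(String[] args) {
        // Toy facts about Mumbai, keyed by rhetorical predicate.
        Map<String, String> mumbai = Map.of(
                "identification", "Mumbai is an important economic region in Maharashtra.",
                "attributive", "There are many textile mills in Mumbai.",
                "particular-illustration", "Bombay Dyeing is among the noteworthy textile mills.");
        System.out.println(generate(mumbai));
    }
}

Running main reproduces the sample text above, showing how the schema, not the data, dictates what is said and in what order.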

  17. Content Determination • Explicit reasoning approaches • Example: plot generation using case-based reasoning [B. Díaz-Agudo et al., 2004] • Case-based reasoning is characterized by: retrieve, reuse, revise, retain • Build cases from a set of stories – similar to identifying the features that constitute each story • Ontology for the fairy-tale world • Accept a query from the user regarding features of the new plot to be generated

  18. Content Determination • Example: plot generation using case-based reasoning (contd.) • Retrieve similar case – similarity calculated on the basis of distance in the ontology • Resolve dependencies – ask user for further input if needed • Generate plot

  19. Content Determination • Sample run: • Query: “princess, murder, interdiction, interdiction violated, competition, test of hero” • Story number 113 (Swan Geese) returned based on similarity • Perform substitutions • Generate plot
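
A minimal sketch of the "retrieve" step of this case-based approach, assuming similarity is simply the number of query features a stored case shares – a crude stand-in for the ontology-distance measure of [B. Díaz-Agudo et al., 2004]; the story data is illustrative:

import java.util.List;
import java.util.Set;

public class CaseRetrieval {
    record PlotCase(String title, Set<String> features) {}

    // Return the stored story whose feature set overlaps most with the query.
    static PlotCase retrieve(List<PlotCase> caseBase, Set<String> query) {
        PlotCase best = null;
        long bestScore = -1;
        for (PlotCase c : caseBase) {
            long score = query.stream().filter(c.features()::contains).count();
            if (score > bestScore) { bestScore = score; best = c; }
        }
        return best;  // its plot is then reused and revised for the new query
    }

    public static void main(String[] args) {
        List<PlotCase> caseBase = List.of(
                new PlotCase("Swan Geese", Set.of("princess", "interdiction",
                        "interdiction violated", "test of hero")),
                new PlotCase("Cinderella", Set.of("stepmother", "ball", "test of hero")));
        Set<String> query = Set.of("princess", "murder", "interdiction",
                "interdiction violated", "competition", "test of hero");
        System.out.println(retrieve(caseBase, query).title());  // -> Swan Geese
    }
}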

  20. Document Structuring • Decide the order and grouping of sentences in a generated text • Example: • John went to the shop. • John bought an apple. • Now consider: • John bought an apple. • John went to the shop. • The first version seems more coherent than the second. Thus, sentence ordering is important.

  21. Document Structuring • Algorithms • Schema-based approach • Corpus-based approach [M Lapata, 2003]: by the chain rule, the probability of an ordering is

P(S1 … Sn) = P(S1) · P(S2|S1) · P(S3|S1,S2) · … · P(Sn|S1 … Sn-1)

Assuming each sentence depends only on the previous one:

P(S1 … Sn) = P(S1) · P(S2|S1) · P(S3|S2) · … · P(Sn|Sn-1)

Each sentence S<i> is represented by a set of features (a<i,1>, a<i,2>, … , a<i,n>), and P(S<i>|S<i-1>) is approximated over the Cartesian product S<i> × S<i-1>; assuming the feature pairs are independent,

P(S<i>|S<i-1>) = Π P(a<i,j>|a<i-1,k>), where j ∈ S<i> and k ∈ S<i-1>

The pairwise probabilities are estimated from corpus counts; a directed weighted graph is then constructed (sentences as nodes, probabilities as edge weights) and an approximate best ordering is obtained from it.
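
A minimal sketch of how such a model can be estimated and applied, assuming each sentence has already been reduced to a bag of string features; the toy corpus and the add-one smoothing over the feature vocabulary are illustrative assumptions, not details of [M Lapata, 2003]:

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SentenceOrdering {
    private final Map<String, Integer> pairCounts = new HashMap<>();
    private final Map<String, Integer> prevCounts = new HashMap<>();
    private final Set<String> vocab = new HashSet<>();

    // Observe one adjacent sentence pair from the training corpus.
    void observe(List<String> prev, List<String> next) {
        vocab.addAll(prev);
        vocab.addAll(next);
        for (String k : prev) {
            prevCounts.merge(k, next.size(), Integer::sum);
            for (String j : next) pairCounts.merge(k + "->" + j, 1, Integer::sum);
        }
    }

    // P(next | prev): product of smoothed feature-pair probabilities.
    double transitionProb(List<String> prev, List<String> next) {
        double p = 1.0;
        for (String k : prev)
            for (String j : next)
                p *= (pairCounts.getOrDefault(k + "->" + j, 0) + 1.0)
                        / (prevCounts.getOrDefault(k, 0) + vocab.size());
        return p;
    }

    public static void main(String[] args) {
        SentenceOrdering model = new SentenceOrdering();
        // Toy corpus: a "went to the shop" sentence precedes a "bought" sentence.
        model.observe(List.of("john", "go", "shop"), List.of("john", "buy", "apple"));
        double goodOrder = model.transitionProb(List.of("john", "go", "shop"),
                List.of("john", "buy", "apple"));
        double badOrder = model.transitionProb(List.of("john", "buy", "apple"),
                List.of("john", "go", "shop"));
        System.out.println(goodOrder > badOrder);  // true: the observed order scores higher
    }
}

In the full model these transition probabilities become edge weights in the directed graph over sentences, and an approximate highest-probability path gives the ordering.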

  22. Aggregation • Aggregation is a subtask of natural language generation that involves merging syntactic constituents (such as sentences and phrases) together • Example: • John went to the shop. John bought an apple. • “John went to the shop and bought an apple.” • Aggregation can be syntactic or conceptual • Example of conceptual aggregation: replacing “Saturday and Sunday” with “weekend” • Aggregation algorithms must do two things: • Decide when two constituents should be aggregated • Decide how two constituents should be aggregated, and create the aggregated structure
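
A minimal sketch of both aggregation decisions for the simplest trigger, two adjacent clauses with an identical subject. Real aggregators work on syntactic structures rather than strings, so this is illustrative only:

public class Aggregator {
    record Clause(String subject, String verbPhrase) {}

    // "When": the subjects match. "How": coordinate the two verb phrases.
    static String aggregate(Clause first, Clause second) {
        if (first.subject().equalsIgnoreCase(second.subject())) {
            return first.subject() + " " + first.verbPhrase()
                    + " and " + second.verbPhrase() + ".";
        }
        return first.subject() + " " + first.verbPhrase() + ". "
                + second.subject() + " " + second.verbPhrase() + ".";
    }

    public static void main(String[] args) {
        System.out.println(aggregate(
                new Clause("John", "went to the shop"),
                new Clause("John", "bought an apple")));
        // -> John went to the shop and bought an apple.
    }
}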

  23. Post-editing • Identity between different word groups • Lemma identity: two different words belong to the same inflectional paradigm • Form identity: two words have the same spelling/sound and are lemma-identical • Co-referentiality: two words/constituents denote the same entity or entities in the external context, i.e. have the same reference • [Karin Harbusch et al., 2009]

  24. [Figure from Karin Harbusch et al., 2009]

  25. Lexical Choice • Lexical choice involves choosing the content words (nouns, verbs, adjectives, adverbs) in a generated text. • The simplest type of lexical choice involves mapping a domain concept to a word. • Lexical choice modules must be informed by linguistic knowledge of how the system's input data maps onto words. This is a question of semantics, but it is also influenced by syntactic and pragmatic factors. • Three factors to consider: • Genre • People perceive different words differently • How language relates to the non-linguistic world

  26. Humans’ perception of words • [Rohit Parikh, 1994] • “By evening” can mean different things to different hearers • Different dialects • Choosing between near-synonymous words • It has been suggested that utility theory be applied to word choice. In other words, if we know (1) the probability of a word’s being correctly interpreted or misinterpreted and (2) the benefit to the user of correct interpretation and the cost of misinterpretation, then we can compute an overall utility to the user of using the word.
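
A minimal sketch of this utility-based view of word choice; the candidate words, interpretation probabilities, and benefit/cost figures are all made up for illustration:

import java.util.List;

public class LexicalChoice {
    record Candidate(String word, double pCorrect) {}

    // Pick the word with the highest expected utility:
    // P(correct) * benefit - P(wrong) * cost.
    static String choose(List<Candidate> candidates, double benefit, double cost) {
        Candidate best = null;
        double bestUtility = Double.NEGATIVE_INFINITY;
        for (Candidate c : candidates) {
            double u = c.pCorrect() * benefit - (1 - c.pCorrect()) * cost;
            if (u > bestUtility) { bestUtility = u; best = c; }
        }
        return best.word();
    }

    public static void main(String[] args) {
        // Near-synonymous time expressions, with guessed interpretation rates.
        List<Candidate> words = List.of(
                new Candidate("by evening", 0.70),
                new Candidate("by 6 pm", 0.95));
        System.out.println(choose(words, 10.0, 10.0));  // -> by 6 pm
    }
}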

  27. Referring expression generation • This is the second-to-last stage in natural language generation • It involves creating referring expressions (noun phrases) that identify specific entities to the reader • Example: • He told the tourist that rain was expected tonight in Southern Scotland. • “He”, “the tourist”, “tonight” and “Southern Scotland” are referring expressions

  28. Criteria for good referents • Ideally, a good referring expression should satisfy a number of criteria: • Referential success: It should unambiguously identify the referent to the reader. • Ease of comprehension: The reader should be able to quickly read and understand it. • Computational complexity: The generation algorithm should be fast. • No false inferences: The expression should not confuse or mislead the reader by suggesting false implications or other pragmatic inferences. [Wikipedia]

  29. Kinds of Referring Expressions • Proper nouns • Pronouns • Definite noun phrases • Spatial reference • Temporal reference • Different algorithmic models: • Graph-based generation of referring expressions [Krahmer et al., 2003] • Centering theory, which uses ranking [Poesio et al., 2004] • Generating approximate geographic descriptions [Turner et al., 2009]
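
For concreteness, a minimal sketch of another classic method not listed above, the incremental algorithm of [Dale and Reiter, 1995]: attributes are tried in a fixed preference order, and each attribute that rules out at least one distractor is added until the target is uniquely identified. The toy domain is illustrative:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ReferringExpressions {
    record Entity(String id, Map<String, String> attrs) {}

    static List<String> describe(Entity target, List<Entity> domain,
                                 List<String> preferredAttrs) {
        Set<Entity> distractors = new HashSet<>(domain);
        distractors.remove(target);
        List<String> description = new ArrayList<>();
        for (String attr : preferredAttrs) {
            String value = target.attrs().get(attr);
            if (value == null) continue;
            // Keep the attribute only if it removes at least one distractor.
            boolean helps = distractors.removeIf(d -> !value.equals(d.attrs().get(attr)));
            if (helps) description.add(value);
            if (distractors.isEmpty()) break;   // target is uniquely identified
        }
        return description;
    }

    public static void main(String[] args) {
        Entity d1 = new Entity("d1", Map.of("type", "dog", "colour", "black"));
        Entity d2 = new Entity("d2", Map.of("type", "dog", "colour", "white"));
        Entity c1 = new Entity("c1", Map.of("type", "cat", "colour", "black"));
        System.out.println(describe(d1, List.of(d1, d2, c1),
                List.of("type", "colour")));   // -> [dog, black] ("the black dog")
    }
}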

  30. Realization • Realization deals with creating the actual text from the abstract representation • Realization involves three kinds of processing: • Syntactic realization – decide the order of components, add function words, etc. • Example: in English, the subject usually precedes the verb • Morphological realization – compute inflected forms • Example: plural(woman) == women • Orthographic realization • Capitalization of the first letter, punctuation, etc. • Realization systems: SimpleNLG, KPML, etc.

  31. Realization • SIMPLENLG • a simple NLG library for Java for generating grammatically correct English sentences • Sample code (the first three lines are the standard setup from the SimpleNLG tutorial):

// Set up the lexicon, factory and realiser once.
Lexicon lexicon = Lexicon.getDefaultLexicon();
NLGFactory nlgFactory = new NLGFactory(lexicon);
Realiser realiser = new Realiser(lexicon);

// Build the clause "Mary chases the monkey" and realise it.
SPhraseSpec p = nlgFactory.createClause();
p.setSubject("Mary");
p.setVerb("chase");
p.setObject("the monkey");
String output2 = realiser.realiseSentence(p);
System.out.println(output2);

• Output: “Mary chases the monkey” [http://code.google.com/p/simplenlg/wiki/Section1]

  32. Applications of NLG • Present information in a more convenient way • Airline schedule database • Accounting spreadsheet • Automating document production • A doctor writing discharge summaries • A programmer writing code documentation, logic descriptions, etc. • In many contexts, human intervention is still required to create texts

  33. Applications of NLG with human intervention • An NLG system is used to produce an initial draft of a document, which can then be edited by a human author • E.g. • Weather Reporter, which helps meteorologists compose weather forecasts • DRAFTER, which helps technical authors write software manuals • AlethGen, which helps customer-service representatives write response letters to customers

  34. Applications of NLG without human intervention • Some NLG systems have been developed with the aim of operating as standalone systems. • E.g. • Model Explainer, which generates textual descriptions of classes in an object-oriented software system • LFS, which summarizes statistical data for the general public • PIGLET, which gives hospital patients explanations of information in their patient records.

  35. Weather Reporter • Provides retrospective reports of the weather over month-long periods • Takes a large set of numerical data • Produces short texts • E.g. text produced by Weather Reporter: • The month was cooler and drier than average, with the average number of rain days. The total rain for the year so far is well below average. There was rain on every day for eight days from the 11th to the 18th.

  36. Weather Reporter • [Figure: sample of the meteorological input data]

  37. Weather Reporter • The data shown is real data collected automatically by meteorological data-gathering equipment • The Weather Reporter design is based on real input data and a real corpus of human-written texts

  38. Weather Reporter • For example, using the historical data for 1-July-2005, the software produces: • Grass pollen levels for Friday have increased from the moderate to high levels of yesterday with values of around 6 to 7 across most parts of the country. However, in Northern areas, pollen levels will be moderate with values of 4. • In contrast, the actual forecast (written by a human meteorologist) from this data was: • Pollen counts are expected to remain high at level 6 over most of Scotland, and even level 7 in the south east. The only relief is in the Northern Isles and far northeast of mainland Scotland with medium levels of pollen count.

  39. Model Explainer • Generates textual descriptions of the information in models of object-oriented software.

  40. Model Explainer • O-O models are usually depicted graphically • Model Explainer is useful because certain kinds of information are better communicated textually • E.g. • Via Model Explainer it is clear that a section must be taught by exactly one professor • The text is especially clear for people who are not familiar with the notation used in the graphical depiction

  41. Model Explainer • [Figure: sample object-oriented model and the description generated from it]

  42. Model Explainer • It can also express relations from the object model in a variety of linguistic contexts • E.g. “teaches” • A professor teaches a course • A section must be taught by a professor • Professor Smith does not teach any sections

  43. Task-Based Evaluation • Task-based evaluations measure the impact of generated texts on end users and typically involve techniques from an application domain such as medicine. • For example, a system which generates summaries of medical data can be evaluated by giving these summaries to doctors and assessing whether the summaries help doctors make better decisions.

  44. Evaluations Based on Human Ratings and Judgments • Another way of evaluating an NLG system is to ask human subjects to rate generated texts on an n-point rating scale

  45. Unigram Precision Candidate: the the the the the the the. Reference 1: The cat is on the mat. Reference 2: There is a cat on the mat. • The unigram precision of the candidate above is 7/7, since all 7 candidate words (“the”) occur in Reference 1. • But the candidate is clearly not an appropriate translation.

  46. Modified Unigram precision • Count the maximum number of times a word occurs in any single reference translation. • Clip the total count of each candidate word by its maximum reference count • Add these clipped counts • Divide by the total number of candidate words.

  47. Modified unigram precision Candidate: the the the the the the the. Reference 1: The cat is on the mat. Reference 2: There is a cat on the mat. • The maximum count of “the” in any single reference (Reference 1) is 2. • The total number of candidate words is 7. • So, modified unigram precision = 2/7.

  48. Modified n-gram precision • All candidate n-gram counts and their corresponding maximum reference counts are collected. • The candidate counts are clipped by their corresponding reference maximum value. • Add them. • Divide by the total number of candidate n-grams
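
A minimal sketch of modified n-gram precision exactly as described above, using the toy candidate and references from the earlier slides (whitespace tokenization and lowercasing are assumed):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ModifiedPrecision {
    // Count every n-gram in a token list.
    static Map<String, Integer> ngramCounts(List<String> tokens, int n) {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i + n <= tokens.size(); i++)
            counts.merge(String.join(" ", tokens.subList(i, i + n)), 1, Integer::sum);
        return counts;
    }

    static double modifiedPrecision(List<String> candidate,
                                    List<List<String>> references, int n) {
        Map<String, Integer> candCounts = ngramCounts(candidate, n);
        int clipped = 0, total = 0;
        for (Map.Entry<String, Integer> e : candCounts.entrySet()) {
            int maxRef = 0;   // maximum count of this n-gram in any single reference
            for (List<String> ref : references)
                maxRef = Math.max(maxRef, ngramCounts(ref, n).getOrDefault(e.getKey(), 0));
            clipped += Math.min(e.getValue(), maxRef);   // clip by the reference maximum
            total += e.getValue();
        }
        return (double) clipped / total;
    }

    public static void main(String[] args) {
        List<String> cand = List.of("the", "the", "the", "the", "the", "the", "the");
        List<List<String>> refs = List.of(
                List.of("the", "cat", "is", "on", "the", "mat"),
                List.of("there", "is", "a", "cat", "on", "the", "mat"));
        System.out.println(modifiedPrecision(cand, refs, 1));  // -> 2/7 ≈ 0.2857
    }
}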

  49. Modified n-gram precision • A translation using the same words (1-grams) as in the references tends to satisfy adequacy. • The longer n-gram matches account for fluency.

  50. Modified n-gram precision Candidate 1: It is a guide to action which ensures that the military always obeys the commands of the party. Candidate 2: It is to insure the troops forever hearing the activity guidebook that party direct. Reference 1: It is a guide to action that ensures that the military will forever heed Party commands. Reference 2: It is the guiding principle which guarantees the military forces always being under the command of the Party. Reference 3: It is the practical guide for the army always to heed directions of the party. • Modified bigram precision of Candidate 1 = 10/17 • Modified bigram precision of Candidate 2 = 1/13
