Natural Language Questions for the Web of Data


Presentation Transcript


  1. Natural Language Questions for the Web of Data
  Mohamed Yahya1, Klaus Berberich1, Shady Elbassuoni2, Maya Ramanath3, Volker Tresp4, Gerhard Weikum1
  1 Max Planck Institute for Informatics, Germany
  2 Qatar Computing Research Institute
  3 Dept. of CSE, IIT-Delhi, India
  4 Siemens AG, Corporate Technology, Munich, Germany
  EMNLP 2012

  2. Example question
  • “Which female actor played in Casablanca and is married to a writer who was born in Rome?”
  • Simplified phrasing: “Which actress from Casablanca is married to a writer from Rome?”
  • Translation to SPARQL:
  • ?x hasGender female
  • ?x isa actor
  • ?x actedIn Casablanca_(film)
  • ?x marriedTo ?w
  • ?w isa writer
  • ?w bornIn Rome
  • Characteristics of such SPARQL queries:
  • complex to formulate
  • yield good results
  • difficult for ordinary users to write
  • Goal: automatically create such structured queries by mapping the user’s question into this representation.
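To make the target representation concrete, here is a minimal sketch (assumed, not the paper's code) that assembles the slide's triple patterns into a SPARQL 1.0 SELECT query. The bare predicate names (hasGender, actedIn, ...) follow the slide; a real YAGO2 endpoint would require full URIs or prefixes.

```python
# Illustrative only: build the query string from the slide's triple patterns.
triples = [
    ("?x", "hasGender", "female"),
    ("?x", "isa", "actor"),
    ("?x", "actedIn", "Casablanca_(film)"),
    ("?x", "marriedTo", "?w"),
    ("?w", "isa", "writer"),
    ("?w", "bornIn", "Rome"),
]

def to_sparql(triples, target_var="?x"):
    """Render a list of (subject, predicate, object) patterns as a SELECT query."""
    body = " .\n  ".join(f"{s} {p} {o}" for s, p, o in triples)
    return f"SELECT {target_var} WHERE {{\n  {body} .\n}}"

print(to_sparql(triples))
```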

  3. Translating qNL to qFL
  • qNL → qFL
  • qNL: natural language question
  • qFL: formal language query
  • KB: knowledge base
  • The target language qFL is SPARQL 1.0.

  4. YAGO2
  • YAGO2 is a large semantic knowledge base derived from Wikipedia, WordNet, and GeoNames.

  5. Sample facts from YAGO2
  • Examples of relations: type, subclassOf, and actedIn.
  • Examples of classes: person and film.
  • Entities are represented in canonical form, such as ‘Ingrid_Bergman’ and ‘Casablanca_(film)’.
  • Special types of entities: strings, numbers, and dates.
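To make the data model concrete, here is a minimal sketch (assumed; the facts mirror the slide, not an actual YAGO2 dump) of holding such facts as subject-predicate-object triples:

```python
# Illustrative facts in subject-predicate-object form.
facts = [
    ("Ingrid_Bergman", "type", "actor"),
    ("Ingrid_Bergman", "actedIn", "Casablanca_(film)"),
    ("Casablanca_(film)", "type", "film"),
    ("actor", "subclassOf", "person"),
]

def objects_of(subject, relation):
    """All objects related to a subject via a given relation."""
    return [o for s, p, o in facts if s == subject and p == relation]

print(objects_of("Ingrid_Bergman", "actedIn"))  # ['Casablanca_(film)']
```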

  6. DEANNA • DEANNA (DEepAnswers for maNy Naturally Asked questions)

  7. Question sentence
  • qNL = (t0, t1, ..., tn)
  • A phrase is a contiguous subsequence (ti, ti+1, ..., ti+l) of qNL, with 0 ≤ i and i + l ≤ n.
  • Phrases of interest are those referring to entities, classes, and relations.
  • e.g., “Which actress from Casablanca is married to a writer from Rome?”
  • entities: Casablanca, …
  • classes: actresses, …
  • relations: marriedTo, …

  8. Phrase detection
  • Phrases are detected that potentially correspond to semantic items, such as ‘Who’, ‘played in’, ‘movie’, and ‘Casablanca’.

  9. Phrase detection
  • A detected phrase p is a pair <Toks, l>
  • Toks: the phrase’s token sequence
  • l: a label, l ∈ {concept, relation}
  • Pr: the set of all detected relation phrases
  • Pc: the set of all detected concept phrases
  • Null phrase: a special type of detected relation phrase, used when the relation is expressed only implicitly, e.g., by adjectives as in ‘Australian movie’.
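A minimal representation sketch (assumed, not the paper's code) of detected phrases as <Toks, l> pairs:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class DetectedPhrase:
    toks: Tuple[str, ...]   # the phrase's token sequence
    label: str              # "concept" or "relation"

# Example detections for the running question.
P_c = {DetectedPhrase(("Casablanca",), "concept"),
       DetectedPhrase(("actress",), "concept")}
P_r = {DetectedPhrase(("is", "married", "to"), "relation")}
```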

  10. Concept detection
  • Works against a phrase-concept dictionary.
  • The phrase-concept dictionary consists of instances of the means relation in YAGO2.
  • e.g.,
  • {‘Rome’, ‘eternal city’} → Rome
  • {‘Casablanca’} → Casablanca_(film)
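A minimal sketch (assumed) of concept detection as dictionary lookup; in the paper the dictionary comes from YAGO2's means relation, here it is a hand-written toy version:

```python
# Toy phrase-concept dictionary: surface form -> candidate semantic items.
phrase_concept_dict = {
    "rome": {"Rome"},
    "eternal city": {"Rome"},
    "casablanca": {"Casablanca_(film)", "Casablanca,_Morocco"},
}

def detect_concepts(phrase_tokens):
    """Return candidate entities/classes for a token sequence, if any."""
    surface = " ".join(phrase_tokens).lower()
    return phrase_concept_dict.get(surface, set())

print(detect_concepts(("Casablanca",)))  # both readings are kept as candidates
```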

  11. Relation detection
  • Relies on a relation detector based on ReVerb (Fader et al., 2011), extended with additional POS-tag patterns, plus the authors’ own detector, which looks for patterns in dependency parses.
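As a rough illustration (assumed, not DEANNA's exact patterns), ReVerb-style relation detection matches candidate phrases against a verb-centered POS-tag pattern, roughly V | V P | V W* P in ReVerb's notation. The tag classes and the regex below are simplified:

```python
import re

VERB = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}
W = {"NN", "NNS", "JJ", "RB", "PRP", "DT"}   # noun/adj/adv/pron/det
P = {"IN", "RP", "TO"}                        # prep/particle/inf. marker

def tag_class(tag):
    if tag in VERB: return "V"
    if tag in P:    return "P"
    if tag in W:    return "W"
    return "O"

def is_relation_phrase(pos_tags):
    """Simplified verb-centered pattern check over a POS-tagged span."""
    pattern = "".join(tag_class(t) for t in pos_tags)
    return re.fullmatch(r"V+W*P?", pattern) is not None

# e.g. "is married to" tagged as VBZ VBN IN
print(is_relation_phrase(["VBZ", "VBN", "IN"]))  # True
```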

  12. Phrase Mapping

  13. Phrase Mapping
  • Each phrase is mapped to a set of candidate semantic items.
  • Mapping concept phrases: also relies on the phrase-concept dictionary.
  • Mapping relation phrases: relies on a corpus-derived dictionary of textual-pattern-to-relation mappings of the form
  • {‘play’, ‘star in’, ‘act’, ‘leading role’} → actedIn
  • {‘married’, ‘spouse’, ‘wife’} → marriedTo
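A minimal sketch (assumed) of relation-phrase mapping against such a pattern dictionary, mirroring the examples on the slide; the substring matching here is a simplification:

```python
relation_patterns = {
    "actedIn":   {"play", "star in", "act", "leading role"},
    "marriedTo": {"married", "spouse", "wife"},
}

def map_relation_phrase(phrase):
    """Return all relations whose patterns occur in the (lowercased) phrase."""
    surface = phrase.lower()
    return {rel for rel, pats in relation_patterns.items()
            if any(p in surface for p in pats)}

print(map_relation_phrase("is married to"))  # {'marriedTo'}
```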

  14. Example of Phrase Mapping
  • ‘played in’ can refer either to the semantic relation actedIn or to playedForTeam.
  • ‘Casablanca’ can refer either to Casablanca_(film) or to Casablanca,_Morocco.

  15. Dependency Parsing & Q-Unit Generation

  16. Dependency parsing
  • Dependency parsing identifies triples of tokens, or triploids:
  • <trel, targ1, targ2>, where trel, targ1, targ2 ∈ qNL
  • trel: the seed for the relation phrase
  • targ1, targ2: seeds for the concept phrases
  • No attempt is made to assign subject/object roles to the arguments.

  17. Q-Unit Generation
  • By combining triploids with detected phrases, we obtain q-units.
  • A q-unit is a triple of sets of phrases:
  • <{prel ∈ Pr}, {parg1 ∈ Pc}, {parg2 ∈ Pc}>
  • with trel ∈ prel, targ1 ∈ parg1, and targ2 ∈ parg2.
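A minimal sketch (assumed) of q-unit generation: a triploid of seed tokens is combined with the detected phrases that contain those seeds. Phrases are represented simply as tuples of tokens here:

```python
def phrases_containing(token, phrases):
    """All detected phrases whose token sequence includes the seed token."""
    return {p for p in phrases if token in p}

def make_q_unit(triploid, P_r, P_c):
    t_rel, t_arg1, t_arg2 = triploid
    return (phrases_containing(t_rel, P_r),    # candidate relation phrases
            phrases_containing(t_arg1, P_c),   # candidate arg1 phrases
            phrases_containing(t_arg2, P_c))   # candidate arg2 phrases

P_r = {("is", "married", "to"), ("married",)}
P_c = {("actress",), ("writer",), ("Casablanca",)}
print(make_q_unit(("married", "actress", "writer"), P_r, P_c))
```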

  18. Joint Disambiguation

  19. Goal of the disambiguation step
  • Each phrase is assigned to at most one semantic item.
  • Phrase boundary ambiguity is resolved (only non-overlapping phrases are mapped).
  • All phrases are disambiguated jointly in one big disambiguation task.

  20. resulting subgraph for the disambiguation graph of Figure 3

  21. Disambiguation Graph
  • Joint disambiguation takes place over a disambiguation graph DG = (V, E), where
  • V = Vs ∪ Vp ∪ Vq
  • E = Esim ∪ Ecoh ∪ Eq

  22. Types of vertices
  • V = Vs ∪ Vp ∪ Vq
  • Vs: the set of semantic items; a node vs ∈ Vs is called an s-node
  • Vp: the set of phrases; a node vp ∈ Vp is called a p-node
  • Vrp: the p-nodes for relation phrases
  • Vrc: the p-nodes for concept phrases
  • Vq: a set of placeholder nodes for q-units

  23. Types of edges
  • Esim ⊆ Vp × Vs: a set of weighted similarity edges
  • Ecoh ⊆ Vs × Vs: a set of weighted coherence edges
  • Eq ⊆ Vq × Vp × {rel, arg1, arg2}: q-edges, each carrying a role d ∈ {rel, arg1, arg2}
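A minimal sketch (assumed) of the disambiguation graph held in plain Python containers; the node contents and weights are illustrative only:

```python
disambiguation_graph = {
    "V_p": {"married to", "Casablanca", "actress"},            # phrase nodes
    "V_s": {"marriedTo", "Casablanca_(film)",                   # semantic items
            "Casablanca,_Morocco", "actress"},
    "V_q": {"q1"},                                               # q-unit placeholders
    # weighted similarity edges: (phrase, semantic item) -> weight
    "E_sim": {("Casablanca", "Casablanca_(film)"): 0.9,
              ("Casablanca", "Casablanca,_Morocco"): 0.4},
    # weighted coherence edges: (semantic item, semantic item) -> weight
    "E_coh": {("marriedTo", "Casablanca_(film)"): 0.2},
    # q-edges: (q-node, phrase, role)
    "E_q": {("q1", "married to", "rel"),
            ("q1", "actress", "arg1"),
            ("q1", "Casablanca", "arg2")},
}
```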

  24. Cohsem (Semantic Coherence)
  • The semantic coherence Cohsem(s1, s2) between two semantic items s1 and s2 is defined as the Jaccard coefficient of their sets of inlinks.
  • For an entity e:
  • InLinks(e): the set of YAGO2 entities whose corresponding Wikipedia pages link to e.
  • For a class c with entities e:
  • InLinks(c) = ∪e∈c InLinks(e)
  • For a relation r:
  • InLinks(r) = ∪(e1, e2)∈r (InLinks(e1) ∩ InLinks(e2))
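A minimal sketch (assumed) of the semantic-coherence computation as Jaccard over inlink sets; the inlink sets below are toy data, whereas in the paper they come from Wikipedia links between YAGO2 entities:

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def inlinks_class(entity_inlinks, members):
    """InLinks(c) = union of InLinks(e) over entities e in class c."""
    return set().union(*(entity_inlinks[e] for e in members))

def inlinks_relation(entity_inlinks, pairs):
    """InLinks(r) = union over (e1, e2) in r of InLinks(e1) ∩ InLinks(e2)."""
    result = set()
    for e1, e2 in pairs:
        result |= entity_inlinks[e1] & entity_inlinks[e2]
    return result

entity_inlinks = {"Casablanca_(film)": {"a", "b", "c"},
                  "Ingrid_Bergman": {"b", "c", "d"}}
pairs_actedIn = [("Ingrid_Bergman", "Casablanca_(film)")]
print(jaccard(entity_inlinks["Casablanca_(film)"],
              inlinks_relation(entity_inlinks, pairs_actedIn)))
```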

  25. Similarity Weights
  • For entities: based on how often the phrase refers to the entity in Wikipedia.
  • For classes: reflects the number of members in the class.
  • For relations: reflects the maximum n-gram similarity between the phrase and any of the relation’s surface forms.
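One plausible instantiation (assumed; the slide does not pin down the exact n-gram similarity measure) of the relation similarity weight, as maximum word-level n-gram Jaccard similarity between the phrase and any surface form:

```python
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_sim(a, b, n=1):
    """Jaccard similarity of word n-grams of two strings."""
    A, B = ngrams(a.split(), n), ngrams(b.split(), n)
    return len(A & B) / len(A | B) if (A | B) else 0.0

def relation_similarity(phrase, surface_forms):
    return max(ngram_sim(phrase, sf) for sf in surface_forms)

print(relation_similarity("is married to", {"married", "spouse", "wife"}))
```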

  26. Disambiguation Graph Processing
  • The result of disambiguation is a subgraph of the disambiguation graph, yielding the most coherent mappings.
  • We employ an ILP to this end; its definitions, objective function, and constraints appear on the following slides (a simplified solver sketch is given after them).

  27. Definitions (part 1)

  28. Definitions (part 2)

  29. Objective function

  30. Constraints (1-3)

  31. Constraints (4-7)

  32. Constraint (8)

  33. Constraint (9)
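Purely as an illustration, here is a much-simplified selection-ILP sketch (assumed; it is not the paper's actual ILP, which has further constraints on phrase overlap and q-edge consistency and the hyperparameters α, β, γ): each phrase is mapped to at most one semantic item while the total similarity-plus-coherence weight is maximized. PuLP's bundled CBC solver stands in for Gurobi here, and the weights are toy values.

```python
import pulp

sim = {("Casablanca", "Casablanca_(film)"): 0.9,
       ("Casablanca", "Casablanca,_Morocco"): 0.4,
       ("married to", "marriedTo"): 0.8}
coh = {("Casablanca_(film)", "marriedTo"): 0.3,
       ("Casablanca,_Morocco", "marriedTo"): 0.1}

prob = pulp.LpProblem("joint_disambiguation", pulp.LpMaximize)

# x[p, s] = 1 iff phrase p is mapped to semantic item s
x = {ps: pulp.LpVariable(f"x_{i}", cat="Binary") for i, ps in enumerate(sim)}
# y[s1, s2] = 1 iff both semantic items are selected (coherence counted)
y = {ss: pulp.LpVariable(f"y_{i}", cat="Binary") for i, ss in enumerate(coh)}

# objective: total similarity weight plus total coherence weight
prob += (pulp.lpSum(sim[ps] * x[ps] for ps in sim)
         + pulp.lpSum(coh[ss] * y[ss] for ss in coh))

# each phrase maps to at most one semantic item
for phrase in {p for p, _ in sim}:
    prob += pulp.lpSum(x[ps] for ps in sim if ps[0] == phrase) <= 1

# a coherence edge only counts if both of its endpoints are selected
for (s1, s2), var in y.items():
    prob += var <= pulp.lpSum(x[ps] for ps in sim if ps[1] == s1)
    prob += var <= pulp.lpSum(x[ps] for ps in sim if ps[1] == s2)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({ps: int(x[ps].value()) for ps in sim})
```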

  34. resulting subgraph for the disambiguation graph of Figure 3

  35. Query Generation
  • Recall that subject/object roles are not assigned in triploids and q-units.
  • Example: “Which singer is married to a singer?”
  • ?x type singer, ?x marriedTo ?y, and ?y type singer

  36. 5. Evaluation
  • Datasets
  • Evaluation Metrics
  • Results & Discussion

  37. Datasets
  • QALD-1: questions from the 1st Workshop on Question Answering over Linked Data (QALD-1)
  • NAGA collection: created in the context of the NAGA project, based on linked data from the YAGO2 knowledge base
  • Training set: 23 QALD-1 questions and 43 NAGA questions
  • Test set: 27 QALD-1 questions and 44 NAGA questions
  • Hyperparameters (α, β, γ) in the ILP objective function were tuned using 19 QALD-1 questions from the test set.

  38. Evaluation Metrics
  • The authors evaluated the output of DEANNA at three stages:
  • 1. after the disambiguation of phrases
  • 2. after the generation of the SPARQL query
  • 3. after obtaining answers from the underlying linked-data sources
  • Judgement:
  • two human assessors judged whether each output item was good or not
  • if the two disagreed, a third person resolved the judgement

  39. Disambiguation stage
  • The judges’ task:
  • look at each q-node/s-node pair, in the context of the question and the underlying data schemas
  • determine whether the mapping is correct
  • determine whether any expected mappings are missing

  40. Query-generation stage
  • The judges’ task:
  • look at each triple pattern
  • determine whether the pattern is meaningful for the question
  • determine whether any expected triple pattern is missing

  41. Query-answering stage
  • The judges were asked to identify whether the result sets for the generated queries are satisfactory.

  42. For a question q and an item set s at one of the evaluation stages:
  • correct(q, s): the number of correct items in s
  • ideal(q): the size of the ideal item set
  • retrieved(q, s): the number of retrieved items
  • Coverage and precision are defined as:
  • cov(q, s) = correct(q, s) / ideal(q)
  • prec(q, s) = correct(q, s) / retrieved(q, s)
  • Micro-averaging: aggregates over all assessed items, regardless of the questions to which they belong.
  • Macro-averaging: first aggregates the items for the same question, then averages the quality measure over all questions.
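A minimal sketch (assumed) of the coverage/precision metrics and of micro- vs. macro-averaging; the per-question counts are illustrative, not the paper's data:

```python
def cov(correct, ideal):
    return correct / ideal if ideal else 0.0

def prec(correct, retrieved):
    return correct / retrieved if retrieved else 0.0

# per-question counts: (correct, ideal, retrieved)
judgements = {"q1": (3, 4, 3), "q2": (2, 5, 4)}

# micro-averaging: pool all items, then compute each measure once
total_correct = sum(c for c, _, _ in judgements.values())
total_ideal = sum(i for _, i, _ in judgements.values())
total_retrieved = sum(r for _, _, r in judgements.values())
micro_cov, micro_prec = cov(total_correct, total_ideal), prec(total_correct, total_retrieved)

# macro-averaging: compute each measure per question, then average
macro_cov = sum(cov(c, i) for c, i, _ in judgements.values()) / len(judgements)
macro_prec = sum(prec(c, r) for c, _, r in judgements.values()) / len(judgements)

print(micro_cov, micro_prec, macro_cov, macro_prec)
```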

  43. Conclusions
  • The authors presented a method for translating natural language questions into structured queries.
  • Although the model, in principle, leads to high combinatorial complexity, they observed that the Gurobi solver could handle their judiciously designed ILP very efficiently.
  • Their experiments showed very high precision and good coverage of the query translation, and good results for the actual question answers.

  44. qNL focuses on entities, classes, and relations
  • e.g., “Which actress from Casablanca is married to a writer from Rome?”
  • entities: Casablanca, …
  • classes: actresses, …
  • relations: marriedTo, …
