Building Dynamic Knowledge Graphs From Text Using Machine Reading Comprehension Rajarshi Das, Tsendsuren Munkhdalai, Xingdi Yuan, Adam Trischler, Andrew McCallum (ICLR’19) Presented by: Shen Yan
Automatically Building Knowledge Graphs
• Raw information ⇒ structured form:
  • Nodes (entities)
  • Edges (relationships)
• Track the changing relations among entities
• Make implicit information more explicit
KG-MRC: Knowledge Graph - Machine Reading Comprehension
• A neural machine-reading model that constructs dynamic knowledge graphs from text
• Focus: tracking the locations of participant entities
• Bipartite graph with two sets of nodes: entities and locations (see the sketch below)
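To make the bipartite structure concrete, here is a minimal Python sketch of the graph state; the names (BipartiteKG, set_location) are illustrative, not from the paper's code.

```python
from dataclasses import dataclass, field

@dataclass
class BipartiteKG:
    """Bipartite graph: every edge connects an entity node to a location node."""
    entities: set = field(default_factory=set)
    locations: set = field(default_factory=set)
    edges: set = field(default_factory=set)  # (entity, location) pairs

    def set_location(self, entity: str, location: str) -> None:
        self.entities.add(entity)
        self.locations.add(location)
        # an entity has one current location, so drop its previous edge
        self.edges = {(e, l) for (e, l) in self.edges if e != entity}
        self.edges.add((entity, location))

kg = BipartiteKG()
kg.set_location("blood", "heart")
kg.set_location("blood", "lungs")
print(kg.edges)  # {('blood', 'lungs')}
```

Keeping entities and locations in disjoint node sets is what makes the graph bipartite: edges only ever cross from one set to the other.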
KG-MRC Pipeline
• At each time step t, the model reads the prefix of the paragraph up to and including sentence s_t
• A machine reading comprehension (MRC) model is queried for the state of each participant entity (e.g., "Where is E located?"); it returns an answer span describing the entity's current location at step t and encodes that span as a vector
• Conditioning on the span vector, the model constructs graph G_t by updating graph G_{t-1} from the previous time step (see the loop sketch below)
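The pipeline reduces to a read-query-update loop. A sketch, assuming mrc_model and update_graph are callables standing in for the paper's MRC reader and graph-update module:

```python
def kg_mrc(sentences, entities, mrc_model, update_graph):
    graph = None                                   # G_0: empty graph
    graphs = []
    for t in range(1, len(sentences) + 1):
        prefix = " ".join(sentences[:t])           # paragraph up to sentence s_t
        span_vectors = {}
        for entity in entities:
            query = f"Where is {entity} located?"
            span, vector = mrc_model(prefix, query)  # answer span + its encoding
            span_vectors[entity] = (span, vector)
        graph = update_graph(graph, span_vectors)  # build G_t from G_{t-1}
        graphs.append(graph)
    return graphs
```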
Soft Co-reference
• Across time steps: the incoming location vector for a newly predicted span is softly matched against the matrix of existing location node representations
• Within each time step: a co-reference adjacency matrix over the intermediate node representations tracks which nodes within step t refer to the same location (see the sketch below)
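A minimal sketch of what the soft matching can look like, assuming dot-product similarity followed by a softmax; the paper's exact scoring functions may differ, and both function names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def soft_coref(incoming, node_matrix, temperature=1.0):
    """Across time steps: softly match an incoming location vector (d,)
    against the matrix of existing location node representations (n, d).
    A peaked distribution means the span co-refers with an existing node."""
    scores = node_matrix @ incoming          # (n,) dot-product similarities
    return F.softmax(scores / temperature, dim=0)

def within_step_adjacency(spans):
    """Within a time step: row-normalized pairwise similarities among the
    intermediate node representations (m, d) give a soft co-reference
    adjacency matrix (m, m)."""
    scores = spans @ spans.T
    return F.softmax(scores, dim=1)
```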
Graph Update
• Compose all connected nodes with their history summary using an LSTM unit
• Update the node information
• Perform a co-reference pooling operation on the location node representations
• Recurrent graph: stack L such layers to propagate node information along the graph's edges (sketched below)
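A PyTorch sketch of one update step, under simplifying assumptions: an LSTM cell folds the new evidence into each node's running history, then L stacked layers propagate information along the graph, using the soft co-reference adjacency as pooling weights. The mixing layer is illustrative, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class GraphUpdate(nn.Module):
    def __init__(self, dim, num_layers=2):
        super().__init__()
        self.cell = nn.LSTMCell(dim, dim)   # composes nodes with their history
        self.mix = nn.Linear(2 * dim, dim)  # combines a node with its neighbors
        self.num_layers = num_layers        # L recurrent-graph layers

    def forward(self, node_inputs, h, c, adjacency):
        # node_inputs, h, c: (n, dim); adjacency: (n, n) soft co-reference weights
        h, c = self.cell(node_inputs, (h, c))      # update node information
        for _ in range(self.num_layers):           # stack L layers
            pooled = adjacency @ h                 # co-reference pooling
            h = torch.tanh(self.mix(torch.cat([h, pooled], dim=-1)))
        return h, c
```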
Experiments & Evaluation
1. Procedural text comprehension tasks
• Task 1: Sentence-level evaluation (Dalvi et al. 2018), answering 3 categories of questions (derivable from an entity's predicted location sequence; see the sketch below):
  • Cat 1: Is E created (destroyed, moved) in the process?
  • Cat 2: When (step #) is E created (destroyed, moved)?
  • Cat 3: Where is E created (destroyed, moved from/to)?
• Task 2: Document-level evaluation (Tandon et al. 2018), answering 4 categories of questions:
  • Cat 1: What are the inputs to the process?
  • Cat 2: What are the outputs of the process?
  • Cat 3: What conversions occur, when and where?
  • Cat 4: What movements occur, when and where?
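For Task 1, the created/destroyed/moved categories can be read off an entity's predicted location sequence. A sketch, assuming the ProPara convention that "-" marks a non-existent entity:

```python
def state_changes(locations):
    """locations[t] = predicted location of one entity after sentence t."""
    events, prev = [], locations[0]
    for t, loc in enumerate(locations[1:], start=1):
        if prev == "-" and loc != "-":
            events.append((t, "created", loc))
        elif prev != "-" and loc == "-":
            events.append((t, "destroyed", prev))
        elif prev != loc:
            events.append((t, "moved", f"{prev} -> {loc}"))
        prev = loc
    return events

print(state_changes(["-", "heart", "lungs", "lungs", "-"]))
# [(1, 'created', 'heart'), (2, 'moved', 'heart -> lungs'), (4, 'destroyed', 'lungs')]
```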
Experiments & Evaluation
1. Procedural text comprehension tasks
• PROPARA dataset: procedural text about scientific processes
Experiments & Evaluation
2. Ablation study
• Removing the soft co-reference disambiguation within steps → a 1% performance drop
• Removing the soft co-reference across time steps → a more significant performance drop
• Replacing the recurrent graph module with an LSTM → loses information propagation across graph nodes
Experiments & Evaluation
3. Commonsense constraints (Tandon et al. 2018; a checker sketch follows below)
• An entity must exist before it can be moved or destroyed
• An entity cannot be created if it already exists
• An entity cannot change until it is mentioned in the paragraph
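The three constraints can be checked mechanically against a predicted event sequence. A sketch with hypothetical inputs (events as (step, kind) pairs, plus whether the entity exists initially and when it is first mentioned):

```python
def check_constraints(events, initially_exists, first_mention_step):
    """Flag violations of the three commonsense constraints for one entity.
    events: (step, kind) pairs, kind in {"created", "destroyed", "moved"}."""
    errors, exists = [], initially_exists
    for step, kind in events:
        if kind in ("moved", "destroyed") and not exists:
            errors.append((step, f"{kind} before the entity exists"))
        if kind == "created" and exists:
            errors.append((step, "created while already existing"))
        if step < first_mention_step:
            errors.append((step, "changed before first mention"))
        if kind == "created":
            exists = True
        elif kind == "destroyed":
            exists = False
    return errors

# e.g. a prediction that destroys an entity before it exists:
print(check_constraints([(1, "destroyed")], initially_exists=False,
                        first_mention_step=1))
# [(1, 'destroyed before the entity exists')]
```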
Experiments & Evaluation
4. Qualitative analysis
• Tracking the state of the entity "blood" across 6 sentences
• Blue: true location
• Orange: predictions from ProLocal (Dalvi et al. 2018)
• Red: predictions from KG-MRC
Conclusions
• Proposed a model that constructs dynamic knowledge graphs from text to track the locations of participant entities in procedural text
• KG-MRC improves downstream comprehension of the text and achieves state-of-the-art results on two question-answering tasks