Non-monotonic generation of knowledge paths for context understanding
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/8326
https://ink.library.smu.edu.sg/context/sis_research/article/9329/viewcontent/3627994_pvoa_cc_nc.pdf
Institution: Singapore Management University
Summary: Knowledge graphs can be used to enhance text search and access by augmenting textual content with relevant background knowledge. While many large knowledge graphs are available, using them to make semantic connections between entities mentioned in the textual content remains a difficult task. In this work, we therefore introduce contextual path generation (CPG), the task of generating knowledge paths, called contextual paths, that explain the semantic connections between entities mentioned in textual documents, given a knowledge graph. To perform the CPG task well, one has to address its three challenges, namely path relevance, knowledge graph incompleteness, and path well-formedness. This paper presents a two-stage framework comprising: (1) a knowledge-enabled embedding matching and learning-to-rank context extractor with multi-head self-attention, which determines a set of context entities relevant to both the query entities and the context document, and (2) a non-monotonic path generation method with a pretrained transformer, which generates high-quality contextual paths. Our experimental results on two real-world datasets show that our best-performing CPG model successfully recovers 84.13% of the ground truth contextual paths, outperforming the context window baselines. Finally, we demonstrate that the non-monotonic model generates more well-formed paths than its monotonic counterpart.
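To make the non-monotonic generation idea concrete, the following is a minimal illustrative sketch, not the paper's actual models: instead of decoding a path left-to-right from one query entity, generation starts from both query entities and repeatedly inserts an intermediate entity into whichever gap still lacks a supporting triple, preferring entities selected by the stage-1 context extractor. The toy knowledge graph, the entity names, and the helpers `edge` and `generate_contextual_path` are hypothetical stand-ins; the paper's framework instead uses embedding matching and learning-to-rank for context extraction and a pretrained transformer for path generation.

```python
# Illustrative sketch of non-monotonic (insertion-based) contextual path
# generation over a toy knowledge graph. All names and scoring rules here
# are hypothetical stand-ins, not the paper's models.

def edge(h, t, kg):
    """Return a relation linking h to t in the KG, or None if no triple exists."""
    for (h2, r, t2) in kg:
        if h2 == h and t2 == t:
            return r
    return None

def generate_contextual_path(src, dst, kg, context_entities, max_inserts=3):
    """Start from both query entities and insert intermediates into unsupported gaps."""
    path = [src, dst]
    for _ in range(max_inserts):
        # Find the first adjacent pair not yet connected by a KG triple.
        gap = next((i for i in range(len(path) - 1)
                    if edge(path[i], path[i + 1], kg) is None), None)
        if gap is None:
            break  # every hop is supported by a triple; the path is well-formed
        h, t = path[gap], path[gap + 1]
        # Candidate entities that bridge the gap in a single insertion.
        candidates = [m for (h2, _, m) in kg
                      if h2 == h and edge(m, t, kg) is not None]
        if not candidates:
            return None  # the KG is incomplete here; the paper addresses this challenge
        # Prefer entities chosen by the (stage-1) context extractor.
        best = max(candidates,
                   key=lambda m: 1.0 if m in context_entities else 0.0)
        path.insert(gap + 1, best)
    # Render the result as an alternating entity-relation-entity sequence.
    out = [path[0]]
    for a, b in zip(path, path[1:]):
        out += [edge(a, b, kg), b]
    return out

# Toy usage with a hypothetical knowledge graph and context entity set.
KG = {
    ("Alan_Turing", "field", "Computer_Science"),
    ("Computer_Science", "includes", "Artificial_Intelligence"),
    ("Alan_Turing", "born_in", "London"),
}
print(generate_contextual_path("Alan_Turing", "Artificial_Intelligence",
                               KG, {"Computer_Science"}))
# -> ['Alan_Turing', 'field', 'Computer_Science', 'includes', 'Artificial_Intelligence']
```

In this sketch the insertion order is fixed by the first unsupported gap and a binary context-membership score; the actual non-monotonic decoder in the paper learns where and what to insert, which is what allows it to produce better-formed paths than a monotonic left-to-right generator.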