Language Models meet Knowledge Graphs



A Knowledge Graph (KG) is a set of curated facts in which each fact is represented as a directed labelled edge between nodes (entities) [1]. KGs are particularly valuable because they allow AI systems to reason (deductively and inductively) across multiple domains. Hence, they act as the backbone of products at many tech giants, including Microsoft, IBM, and Google [2]. Constructing, maintaining, and deploying KGs have been active research areas in recent years [1].
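To make the "directed labelled edge" representation concrete, a KG can be viewed as a set of (head, relation, tail) triples. The entities and relations below are illustrative examples, not drawn from any particular KG:

```python
# A minimal sketch: a knowledge graph as a set of (head, relation, tail) triples.
# The entity and relation names are made up for illustration.
kg = {
    ("Berlin", "capitalOf", "Germany"),
    ("Berlin", "locatedIn", "Germany"),
    ("Germany", "locatedIn", "Europe"),
}

def objects(kg, head, relation):
    """Return all tail entities reachable from `head` via `relation`."""
    return {t for (h, r, t) in kg if h == head and r == relation}

print(objects(kg, "Berlin", "capitalOf"))  # {'Germany'}
```

Reasoning over such a graph amounts to traversing and combining these labelled edges, e.g. chaining `locatedIn` facts to infer that Berlin is in Europe.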

On the other hand, Language Models (LMs) have seen notable improvements in recent years. LMs are trained on massive unstructured textual corpora in an unsupervised manner and are then fine-tuned to handle various NLP tasks, including question answering, summarization, and text completion [3].

In this project, we investigate the bidirectional relationship between LMs and KGs: how LMs can be utilized in KG-related learning tasks, such as KG completion, and how LMs can exploit the structured information stored in a KG. The project includes surveying the literature, developing a deep understanding of (and implementing) state-of-the-art methods, and implementing a novel method to be proposed.
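KG completion is usually framed as link prediction: given an incomplete triple (h, r, ?), rank candidate tail entities by a plausibility score. As a toy illustration of the task (using a classical embedding scorer in the style of TransE rather than an LM; the 2-D vectors are hand-picked, not trained):

```python
import numpy as np

# Toy sketch of KG completion as link prediction. The scoring function is
# TransE-style: a triple (h, r, t) is plausible when h + r is close to t.
# These 2-D embeddings are hand-picked for illustration, not learned.
emb = {
    "Berlin":    np.array([1.0, 0.0]),
    "Germany":   np.array([1.0, 1.0]),
    "Paris":     np.array([0.0, 0.0]),
    "capitalOf": np.array([0.0, 1.0]),  # relation vector
}

def score(h, r, t):
    """Higher (less negative) means more plausible."""
    return -np.linalg.norm(emb[h] + emb[r] - emb[t])

# Complete ("Berlin", "capitalOf", ?) by ranking candidate tails.
candidates = ["Germany", "Paris"]
best = max(candidates, key=lambda t: score("Berlin", "capitalOf", t))
print(best)  # Germany
```

An LM-based approach would instead verbalize the incomplete triple as text (e.g. "The capital of ? is Berlin") and use the model's predictions to rank candidates, which is one direction this project explores.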



[1] A. Hogan et al., “Knowledge Graphs,” arXiv preprint, 2020.

[2] N. Noy, Y. Gao, A. Jain, A. Narayanan, A. Patterson, and J. Taylor, “Industry-scale Knowledge Graphs: Lessons and Challenges,” Queue, vol. 17, no. 2, p. 20, 2019.

[3] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Ding, and X. Huang, “Pre-trained models for natural language processing: A survey,” Sci. China Technol. Sci., vol. 63, no. 10, pp. 1872–1897, 2020.


Knowledge Graph, Language Model, Natural Language Processing

Updated:  10 August 2021/Responsible Officer:  Dean, CECS/Page Contact:  CECS Marketing