Enriching contextualized language model from knowledge graph for biomedical information extraction

Brief Bioinform. 2021 May 20;22(3):bbaa110. doi: 10.1093/bib/bbaa110.

Abstract

Biomedical information extraction (BioIE) aims to analyze biomedical texts and extract structured information such as named entities and the semantic relations between them. In recent years, pre-trained language models have greatly improved the performance of BioIE. However, they neglect to incorporate external structural knowledge, which can provide rich factual information to support the underlying understanding and reasoning required for BioIE. In this paper, we first evaluate current extraction methods, including vanilla neural networks, general language models and pre-trained contextualized language models, on BioIE tasks: named entity recognition, relation extraction and event extraction. We then propose to enrich a contextualized language model by integrating large-scale biomedical knowledge graphs (namely, BioKGLM). To encode knowledge effectively, we explore a three-stage training procedure and introduce different fusion strategies to facilitate knowledge injection. Experimental results on multiple tasks show that BioKGLM consistently outperforms state-of-the-art extraction models. A further analysis shows that BioKGLM can capture the underlying relations between biomedical knowledge concepts, which are crucial for BioIE.
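To make the idea of a knowledge-fusion strategy concrete, the sketch below shows one common way such injection can be done: gating a contextual token embedding against a knowledge-graph entity embedding. This is a minimal illustration under assumed names and dimensions, not the paper's actual BioKGLM architecture; the embeddings and gate parameters are randomly initialized stand-ins for learned values.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative embedding dimension

# Hypothetical inputs: a contextual token embedding from a language
# model and an entity embedding from a biomedical knowledge graph.
token_vec = rng.normal(size=d)
entity_vec = rng.normal(size=d)

# Gate parameters (learned in practice; random here for the sketch).
W_g = rng.normal(size=(d, 2 * d)) * 0.1
b_g = np.zeros(d)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gated fusion: the gate decides, per dimension, how much
# knowledge-graph information to inject into the contextual vector.
gate = sigmoid(W_g @ np.concatenate([token_vec, entity_vec]) + b_g)
fused = gate * token_vec + (1.0 - gate) * entity_vec

print(fused.shape)  # (8,)
```

Because the gate lies in (0, 1) per dimension, the fused vector is always a convex combination of the two sources, so knowledge injection cannot push the representation outside the span of its inputs.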

Keywords: biomedical information extraction; knowledge graph; language model; neural network.

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Data Mining*
  • Natural Language Processing*
  • Neural Networks, Computer*
  • Semantics