COVIDSum: A linguistically enriched SciBERT-based summarization model for COVID-19 scientific papers

J Biomed Inform. 2022 Mar:127:103999. doi: 10.1016/j.jbi.2022.103999. Epub 2022 Jan 30.

Abstract

The coronavirus disease (COVID-19) has claimed the lives of over 350,000 people and infected more than 173 million people worldwide, prompting researchers from diverse fields to accelerate their work on diagnostics, therapies, and vaccines. Researchers also publish their recent research progress through scientific papers. However, manually writing the abstract of a paper is time-consuming and increases the writing burden on researchers. Abstractive summarization techniques, which automatically provide researchers with reliable draft abstracts, can alleviate this problem. In this work, we propose a linguistically enriched SciBERT-based summarization model for COVID-19 scientific papers, named COVIDSum. Specifically, we first extract salient sentences from source papers and construct word co-occurrence graphs. Then, we adopt a SciBERT-based sequence encoder and a Graph Attention Networks-based graph encoder to encode the sentences and the word co-occurrence graphs, respectively. Finally, we fuse the two encodings and generate an abstractive summary of each scientific paper. When evaluated on the publicly available COVID-19 Open Research Dataset, our proposed model achieves significant improvements over other document summarization models.
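The pipeline described above (salient-sentence selection, word co-occurrence graph construction, SciBERT sequence encoding, GAT-based graph encoding, and fusion of the two encodings) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the co-occurrence window size, the random node-feature initialization, and the simple concatenation-based fusion are assumptions for illustration only, using the Hugging Face transformers checkpoint "allenai/scibert_scivocab_uncased" and PyTorch Geometric.

    # Illustrative sketch of a COVIDSum-style encoder, NOT the authors' code.
    # Window size, node features, and fusion strategy are simplifying assumptions.
    import torch
    from transformers import AutoTokenizer, AutoModel
    from torch_geometric.nn import GATConv

    tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
    scibert = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

    def build_cooccurrence_graph(sentences, window=3):
        """Nodes are words; edges connect words co-occurring within a sliding window."""
        vocab, edges = {}, set()
        for sent in sentences:
            ids = [vocab.setdefault(w, len(vocab)) for w in sent.lower().split()]
            for i in range(len(ids)):
                for j in range(i + 1, min(i + window, len(ids))):
                    if ids[i] != ids[j]:
                        edges.add((ids[i], ids[j]))
                        edges.add((ids[j], ids[i]))
        edge_index = torch.tensor(sorted(edges), dtype=torch.long).t()
        return vocab, edge_index

    class GraphEncoder(torch.nn.Module):
        """GAT-based encoder over the word co-occurrence graph."""
        def __init__(self, dim=768, heads=4):
            super().__init__()
            self.gat1 = GATConv(dim, dim // heads, heads=heads)
            self.gat2 = GATConv(dim, dim, heads=1)

        def forward(self, x, edge_index):
            h = torch.relu(self.gat1(x, edge_index))
            return self.gat2(h, edge_index)

    def encode(salient_sentences):
        # 1. Sequence encoding of the salient sentences with SciBERT.
        enc = tokenizer(" ".join(salient_sentences), return_tensors="pt",
                        truncation=True, max_length=512)
        with torch.no_grad():
            seq_hidden = scibert(**enc).last_hidden_state.squeeze(0)  # (T, 768)

        # 2. Graph encoding of word co-occurrences (node features are random
        #    placeholders here; the paper derives them from the text).
        vocab, edge_index = build_cooccurrence_graph(salient_sentences)
        graph_hidden = GraphEncoder()(torch.randn(len(vocab), 768), edge_index)  # (V, 768)

        # 3. Fuse the two encodings; a decoder would attend over this fused
        #    representation to generate the draft abstract.
        return torch.cat([seq_hidden, graph_hidden], dim=0)

In this sketch the fusion is plain concatenation of token-level and node-level representations; the actual model's fusion mechanism and abstractive decoder are described in the full paper.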

Keywords: Abstractive summarization; COVID-19 scientific papers; Linguistically enriched pre-trained language model; SciBERT.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • COVID-19*
  • Humans
  • Language
  • Publishing
  • SARS-CoV-2