TaughtNet: Learning Multi-Task Biomedical Named Entity Recognition From Single-Task Teachers

IEEE J Biomed Health Inform. 2023 May;27(5):2512-2523. doi: 10.1109/JBHI.2023.3244044. Epub 2023 May 4.

Abstract

In Biomedical Named Entity Recognition (BioNER), the use of current cutting-edge deep learning-based methods, such as deep bidirectional transformers (e.g., BERT, GPT-3), can be substantially hampered by the absence of publicly accessible annotated datasets. When a BioNER system is required to annotate multiple entity types, various challenges arise because the majority of current publicly available datasets contain annotations for just one entity type: for example, mentions of disease entities may not be annotated in a dataset specialized in the recognition of drugs, resulting in poor ground truth when the two datasets are used to train a single multi-task model. In this work, we propose TaughtNet, a knowledge distillation-based framework that allows us to fine-tune a single multi-task student model by leveraging both the ground truth and the knowledge of single-task teachers. Our experiments on the recognition of mentions of diseases, chemical compounds and genes show the appropriateness and relevance of our approach with respect to strong state-of-the-art baselines in terms of precision, recall and F1 scores. Moreover, TaughtNet allows us to train smaller and lighter student models, which may be easier to use in real-world scenarios where they must be deployed on limited-memory hardware devices and guarantee fast inference, and it shows a high potential to provide explainability. We publicly release both our code on GitHub and our multi-task model on the Hugging Face repository.
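The abstract describes fine-tuning a multi-task student from both gold labels and the soft predictions of single-task teachers. The snippet below is a minimal, generic sketch of such a distillation loss for token classification in PyTorch; it is not the authors' released TaughtNet code, and the function name, the alpha and temperature hyperparameters, and the assumption that teacher outputs have already been merged into a shared label space are illustrative assumptions only.

# Illustrative sketch only: a distillation-style loss blending ground-truth
# cross-entropy with a KL term toward teacher predictions for token classification.
# Not the authors' TaughtNet implementation; hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      gold_labels: torch.Tensor,
                      alpha: float = 0.5,
                      temperature: float = 2.0,
                      ignore_index: int = -100) -> torch.Tensor:
    """student_logits / teacher_logits: (batch, seq_len, num_labels);
    gold_labels: (batch, seq_len), with ignore_index marking padded tokens.
    Assumes single-task teacher outputs were already mapped to one label space."""
    num_labels = student_logits.size(-1)

    # Hard-label term: standard token-level cross-entropy against the ground truth.
    ce = F.cross_entropy(student_logits.reshape(-1, num_labels),
                         gold_labels.reshape(-1),
                         ignore_index=ignore_index)

    # Soft-label term: match the temperature-scaled teacher distribution.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  F.softmax(teacher_logits / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2

    # Weighted combination of ground-truth supervision and teacher knowledge.
    return alpha * ce + (1.0 - alpha) * kd

In this kind of setup, alpha controls how much the student relies on gold annotations versus teacher guidance, which is one common way to compensate for datasets that each annotate only a single entity type.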

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Deep Learning*
  • Humans
  • Knowledge Bases