HRCL: Hierarchical Relation Contrastive Learning for Low-Resource Relation Extraction

IEEE Trans Neural Netw Learn Syst. 2024 Apr 29:PP. doi: 10.1109/TNNLS.2024.3386611. Online ahead of print.

Abstract

Low-resource relation extraction (LRE) aims to extract the relationships between given entities from natural language sentences in low-resource application scenarios, a task made particularly challenging by the limited annotated corpora. Existing studies either leverage self-training schemes to expand the scale of labeled data, where the error accumulation caused by pseudo-label selection bias provokes a gradual drift problem in subsequent relation prediction, or utilize instance-wise contrastive learning, which fails to distinguish sentence pairs with similar semantics. To alleviate these defects, this article introduces a novel contrastive learning framework called hierarchical relation contrastive learning (HRCL) for LRE. HRCL leverages task-related instruction descriptions and schema constraints as prompts to generate high-level relation representations. To enhance the efficacy of contrastive learning, we further employ hierarchical affinity propagation clustering (HiPC) to derive hierarchical signals from the relational feature space with a hierarchy cross-attention (HCA) mechanism, and effectively optimize pair-level relation features through relation-wise contrastive learning. Extensive experiments have been conducted on five public relation extraction (RE) datasets in low-resource settings. The results demonstrate the effectiveness and robustness of HRCL, which outperforms the current state-of-the-art (SOTA) model by 6.56% on average in terms of B³ F1. Our source code is publicly available at https://github.com/Phevos75/HRCLRE.
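To make the relation-wise objective concrete, below is a minimal PyTorch sketch of a contrastive loss driven by multi-level cluster assignments, in the spirit of the hierarchical signals described above. This is not the authors' HRCL implementation: the function name `hierarchical_contrastive_loss`, the label format, and the level-weighting schedule are assumptions for illustration; in the paper, the hierarchy would come from HiPC and the relation features would be fused through the HCA mechanism.

```python
# Illustrative sketch only (not the authors' code): a supervised
# contrastive loss where positives are pairs that share a cluster at
# each level of a hierarchical clustering of relation features.
import torch
import torch.nn.functional as F

def hierarchical_contrastive_loss(features, hier_labels, temperature=0.1):
    """Contrast relation representations using multi-level cluster labels.

    features:    (N, D) relation representations, one per sentence.
    hier_labels: (L, N) cluster ids from L levels of a hierarchical
                 clustering (coarse -> fine), e.g. obtained by applying
                 affinity propagation recursively.
    """
    z = F.normalize(features, dim=1)              # unit-norm embeddings
    sim = z @ z.t() / temperature                 # (N, N) pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)

    # Exclude self-similarity from the softmax denominator.
    logits = sim.masked_fill(eye, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(eye, 0.0)     # avoid -inf * 0 = NaN below

    loss = 0.0
    num_levels = hier_labels.size(0)
    for level in range(num_levels):
        labels = hier_labels[level]
        # Positives: pairs assigned to the same cluster at this level.
        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
        pos_count = pos.sum(dim=1).clamp(min=1)   # guard singleton clusters
        level_loss = -(log_prob * pos.float()).sum(dim=1) / pos_count
        # Assumed schedule: weight finer levels more heavily.
        loss = loss + (level + 1) / num_levels * level_loss.mean()
    return loss

# Toy usage: 8 sentences, 16-dim features, a 2-level hierarchy.
feats = torch.randn(8, 16)
levels = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 1],   # coarse clusters
                       [0, 0, 1, 1, 2, 2, 3, 3]])  # finer clusters
print(hierarchical_contrastive_loss(feats, levels))
```

The design point this sketch tries to capture is that pulling together pairs that co-cluster at coarser levels, while weighting agreement at finer levels more strongly, lets the loss separate semantically similar sentence pairs that a flat instance-wise objective would treat identically.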