Temporal Relation Extraction with Joint Semantic and Syntactic Attention

Comput Intell Neurosci. 2022 Apr 28:2022:5680971. doi: 10.1155/2022/5680971. eCollection 2022.

Abstract

Determining the temporal relationship between events has always been a challenging natural language understanding task. Previous research mainly relies on neural networks to learn effective features, or on hand-crafted linguistic features, to extract temporal relations; these approaches usually fail when the context between two events is complex or extensive. In this paper, we propose the JSSA (Joint Semantic and Syntactic Attention) model, a method that combines coarse-grained information at the semantic level with fine-grained information at the syntactic level. We use the neighbor triples of each event on the syntactic dependency tree, together with the event triple itself, to construct a syntactic attention that serves as clue information and prior guidance for analyzing the context. Experimental results on the TB-Dense and MATRES datasets demonstrate the effectiveness of our approach.
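The abstract does not give the model's equations, but the idea of using dependency-tree neighbors of the events as prior guidance for attention can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the authors' implementation: the function name, the additive `bias` term, and the binary neighbor mask are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def syntax_guided_attention(hidden, query, neighbor_mask, bias=2.0):
    """Hypothetical sketch of syntax-guided attention.

    hidden:        (n, d) contextual token embeddings (semantic level)
    query:         (d,)   representation of the event pair
    neighbor_mask: (n,)   1.0 for tokens that are dependency-tree
                          neighbors of either event, 0.0 otherwise
    bias:          additive boost for syntactic neighbors (assumed form)
    """
    # standard scaled dot-product scores over the context tokens
    scores = hidden @ query / np.sqrt(hidden.shape[1])
    # syntactic prior: raise the scores of dependency neighbors
    scores = scores + bias * neighbor_mask
    weights = softmax(scores)
    # weighted sum of token embeddings -> context representation
    return weights @ hidden, weights

# toy example: 5 context tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
hidden = rng.normal(size=(5, 4))
query = rng.normal(size=4)
neighbor_mask = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
context, weights = syntax_guided_attention(hidden, query, neighbor_mask)
print(weights.round(3))
```

In this toy setup the mask marks tokens 1 and 3 as dependency neighbors, so their attention weights are boosted relative to a purely semantic attention; the resulting `context` vector would then feed a temporal-relation classifier.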

MeSH terms

  • Attention
  • Language*
  • Neural Networks, Computer
  • Semantics*