Modality attention fusion model with hybrid multi-head self-attention for video understanding

PLoS One. 2022 Oct 6;17(10):e0275156. doi: 10.1371/journal.pone.0275156. eCollection 2022.

Abstract

Video question answering (Video-QA) is a subject of intense study in Artificial Intelligence and is one of the tasks that can be used to evaluate an AI system's multimodal reasoning abilities. In this paper, we propose a Modality Attention Fusion framework with Hybrid Multi-head Self-attention (MAF-HMS). MAF-HMS focuses on answering multiple-choice questions over a video-subtitle-QA representation by fusing attention and self-attention across modalities. We use BERT to extract text features and Faster R-CNN to extract visual features, providing a useful input representation for our model to answer questions. In addition, we construct a Modality Attention Fusion (MAF) framework that builds an attention fusion matrix from the different modalities (video, subtitles, QA), and use a Hybrid Multi-head Self-attention (HMS) module to further determine the correct answer. Experiments on three separate scene datasets show that our overall model outperforms the baseline methods by a large margin. Finally, we conduct extensive ablation studies to verify the various components of the network, and demonstrate the effectiveness and advantages of our method over existing methods through experiments on question types and required modalities.
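The abstract describes the architecture only at a high level (BERT text features, Faster R-CNN visual features, cross-modal attention fusion, then multi-head self-attention over the fused representation). The following PyTorch sketch is a hypothetical, minimal interpretation of that pipeline; the feature dimensions, layer structure, answer-scoring head, and the exact form of the "hybrid" self-attention are assumptions and may differ from the paper's actual MAF-HMS implementation.

```python
# Hypothetical sketch of modality attention fusion followed by multi-head
# self-attention, loosely following the abstract's description (MAF + HMS).
# Dimensions, layer counts, and the scoring head are assumptions, not the
# paper's actual architecture.
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    def __init__(self, dim=768, num_heads=8, num_choices=5):
        super().__init__()
        # Project per-modality features (video, subtitles, QA) into a shared space.
        self.video_proj = nn.Linear(2048, dim)  # e.g. Faster R-CNN region features
        self.text_proj = nn.Linear(768, dim)    # e.g. BERT token features
        # Cross-modal attention: QA tokens attend to video and subtitle tokens.
        self.qa_to_video = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.qa_to_sub = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Multi-head self-attention over the fused sequence (assumed form of HMS).
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, 1)     # one score per answer candidate
        self.num_choices = num_choices

    def forward(self, video_feats, sub_feats, qa_feats):
        # video_feats: (B*C, Tv, 2048); sub_feats, qa_feats: (B*C, T, 768),
        # where each question is paired with C candidate answers.
        v = self.video_proj(video_feats)
        s = self.text_proj(sub_feats)
        q = self.text_proj(qa_feats)
        # Attention fusion: the QA representation is enriched by each modality.
        qv, _ = self.qa_to_video(q, v, v)
        qs, _ = self.qa_to_sub(q, s, s)
        fused = torch.cat([q + qv, q + qs], dim=1)
        # Self-attention over the fused sequence, then pool and score answers.
        out, _ = self.self_attn(fused, fused, fused)
        scores = self.classifier(out.mean(dim=1)).view(-1, self.num_choices)
        return scores  # (B, C) logits over the answer candidates
```

Under these assumptions, the model would be trained with a standard cross-entropy loss over the candidate-answer logits; the actual training objective and fusion details should be taken from the full paper.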

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Artificial Intelligence*
  • Attention
  • Communications Media*
  • Information Storage and Retrieval

Grants and funding

This work was supported by the following grants: National Natural Science Foundation of China 61772321; Shandong Natural Science Foundation ZR202011020044. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.