Interpretable prediction of brain activity during conversations from multimodal behavioral signals

PLoS One. 2024 Mar 21;19(3):e0284342. doi: 10.1371/journal.pone.0284342. eCollection 2024.

Abstract

We present an analytical framework for predicting local brain activity under uncontrolled experimental conditions from multimodal recordings of participants' behavior, and we apply it to a corpus of participants holding conversations with either another human or a conversational humanoid robot. The framework consists of extracting high-level features from the raw behavioral recordings and dynamically predicting binarized fMRI-recorded local brain activity from these behavioral features. The objective is to identify the behavioral features required for this prediction, and their relative weights, depending on the brain area under investigation and the experimental condition. To validate the framework, we use a corpus of uncontrolled conversations between participants and a human or robotic agent, focusing on brain regions involved in speech processing and, more generally, in social interactions. The framework not only predicts local brain activity significantly better than chance, it also quantifies the weights of the behavioral features required for this prediction, depending on the brain area under investigation and on the nature of the conversational partner. In the left Superior Temporal Sulcus, perceived speech is the most important behavioral feature for predicting brain activity, regardless of the agent, whereas in regions involved in social cognition, such as the Temporoparietal Junction, several features contribute to the prediction, and these differ between the human and robot interlocutors. This framework therefore allows us to study how multiple behavioral signals from different modalities are integrated in individual brain regions during complex social interactions.
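The prediction step described above can be pictured with a minimal sketch. The abstract does not specify the exact model or feature set, so the following assumes a simple logistic-regression classifier: binarized activity from one region of interest is predicted from behavioral regressors, cross-validated accuracy is compared against chance, and the fitted coefficients stand in for the feature weights discussed above. All feature names, data shapes, and values are illustrative placeholders, not the study's data.

```python
# Minimal sketch (not the authors' exact pipeline): predict binarized
# ROI activity from multimodal behavioral features, then read off
# per-feature weights. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_volumes = 400  # hypothetical number of fMRI volumes in one conversation run

# Hypothetical behavioral features resampled to the fMRI sampling rate.
feature_names = ["perceived_speech", "produced_speech", "gaze_on_face", "head_motion"]
X = rng.standard_normal((n_volumes, len(feature_names)))

# BOLD time series from one ROI (e.g., left STS), binarized at its median.
bold = rng.standard_normal(n_volumes)
y = (bold > np.median(bold)).astype(int)

# Cross-validated prediction accuracy; chance level is ~0.50 after
# median binarization.
clf = LogisticRegression()
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} (chance ~ 0.50)")

# Refit on all data to inspect the relative weight of each feature.
clf.fit(X, y)
for name, weight in zip(feature_names, clf.coef_[0]):
    print(f"{name:>16s}: {weight:+.2f}")
```

In this sketch, comparing the coefficient vectors fitted separately on human-partner and robot-partner runs would correspond to the abstract's comparison of feature weights across conversational agents.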

MeSH terms

  • Brain* / diagnostic imaging
  • Communication*
  • Humans
  • Magnetic Resonance Imaging
  • Speech
  • Temporal Lobe

Grants and funding

This research was supported by grant ANR-16-CONV-0002 from the Agence Nationale de la Recherche and by grant AAP-ID-17-46-170301-11.1 from the Aix-Marseille Université Excellence Initiative (A*MIDEX). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.