Automatic weighing of faces and voices based on cue saliency in trustworthiness impressions

Sci Rep. 2023 Nov 16;13(1):20037. doi: 10.1038/s41598-023-45471-y.

Abstract

When encountering people, their faces are usually paired with their voices. We know that if the face looks familiar and the voice is high-pitched, the first impression will be positive and trustworthy. But how do we integrate these two multisensory physical attributes? Here, we explore (1) the automaticity of audiovisual integration in shaping first impressions of trustworthiness, and (2) the relative contribution of each modality to the final judgment. We find that, even though participants can focus their attention on one modality to judge trustworthiness, they fail to completely filter out the other modality, both for faces (Experiment 1a) and for voices (Experiment 1b). When asked to judge the person as a whole, people rely more on voices (Experiment 2) or on faces (Experiment 3). We link this change to the distinctiveness of each cue in the stimulus set rather than to a general property of the modality. Overall, we find that people weigh faces and voices automatically based on cue saliency when forming trustworthiness impressions.

MeSH terms

  • Attention
  • Cues*
  • Facial Expression
  • Humans
  • Physical Examination
  • Trust
  • Voice*