Interactionally Embedded Gestalt Principles of Multimodal Human Communication

Perspect Psychol Sci. 2023 Sep;18(5):1136-1159. doi: 10.1177/17456916221141422. Epub 2023 Jan 12.

Abstract

Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena, such as verbal language processing and visual scene perception, but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and Prägnanz. Finally, we provide testable predictions that arise from this multimodal, interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.

Keywords: binding; gestalt perception; interaction; language; multimodality; segregation.

Publication types

  • Review

MeSH terms

  • Communication*
  • Humans
  • Language*
  • Speech
  • Visual Perception