Predicting group benefits in joint multiple object tracking

Atten Percept Psychophys. 2023 Aug;85(6):1962-1975. doi: 10.3758/s13414-023-02693-6. Epub 2023 Jun 28.

Abstract

In everyday life, people often work together to accomplish a joint goal. Working together is often beneficial, as it can result in higher performance than working alone - a so-called "group benefit". While several factors influencing group benefits have been investigated in a range of tasks, to date they have not been examined collectively with an integrative statistical approach such as linear modeling. To address this gap in the literature, we investigated several factors that are highly relevant for group benefits (i.e., task feedback, information about the co-actor's actions, the similarity in the individual performances, and personality traits) and used these factors as predictors in a linear model to predict group benefits in a joint multiple object tracking (MOT) task. In the joint MOT task, pairs of participants jointly tracked the movements of target objects among distractor objects and, depending on the experiment, received either group performance feedback, individual performance feedback, information about the group member's performed actions, or a combination of these types of information. We found that the predictors collectively account for half of the variance and make non-redundant contributions towards predicting group benefits, suggesting that they independently influence group benefits. The model also accurately predicts group benefits, suggesting that it could be used to anticipate group benefits for individuals who have not yet performed a joint task together. Given that the investigated factors are relevant for other joint tasks, our model provides a first step towards developing a more general model for predicting group benefits across several shared tasks.
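For readers who want a concrete picture of the analysis described above, the following is a minimal sketch of how a linear model with these kinds of predictors could be fit. The variable names (feedback_type, coactor_info, performance_similarity, personality_score, group_benefit) and the simulated data are hypothetical illustrations, not the authors' dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_pairs = 40  # hypothetical sample of participant pairs

# Hypothetical predictors: one row per pair of participants.
pairs = pd.DataFrame({
    "feedback_type": rng.choice(["group", "individual", "none"], size=n_pairs),
    "coactor_info": rng.integers(0, 2, size=n_pairs),       # 1 = co-actor's actions shown
    "performance_similarity": rng.uniform(0.3, 1.0, size=n_pairs),
    "personality_score": rng.normal(3.0, 0.5, size=n_pairs),
})

# Simulated outcome: group benefit loosely tied to the predictors plus noise.
pairs["group_benefit"] = (
    0.05 * pairs["coactor_info"]
    + 0.10 * pairs["performance_similarity"]
    + 0.02 * pairs["personality_score"]
    + rng.normal(0, 0.05, size=n_pairs)
)

# Ordinary least squares with all predictors entered simultaneously, so each
# coefficient reflects a contribution over and above the other predictors.
model = smf.ols(
    "group_benefit ~ C(feedback_type) + coactor_info + "
    "performance_similarity + personality_score",
    data=pairs,
).fit()

print(model.summary())          # per-predictor coefficients
print("R^2:", model.rsquared)   # share of variance accounted for
```

In such a model, the R-squared value corresponds to the proportion of variance in group benefits accounted for by the predictors jointly, and predictions for new pairs can be obtained with model.predict() on unseen predictor values.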

Keywords: Coordination; Joint action; Multiple object tracking; Social cognition.

MeSH terms

  • Humans
  • Movement*
  • Visual Perception*