The art of compensation: How hybrid teams solve collective-risk dilemmas

PLoS One. 2024 Feb 9;19(2):e0297213. doi: 10.1371/journal.pone.0297213. eCollection 2024.

Abstract

It is widely recognized that the human ability to cooperate has shaped the thriving of our species. However, as we move towards a hybrid human-machine future, it remains unclear how the introduction of artificial agents into our social interactions affects this cooperative capacity. We study the evolutionary dynamics of cooperation in a hybrid population facing a one-shot collective-risk dilemma, in which enough members of a group must cooperate in order to avoid a collective disaster. In our model, the hybrid population is composed of both adaptive and fixed-behavior agents. The latter serve as proxies for the machine-like behavior of artificially intelligent agents that implement stochastic strategies previously learned offline. We observe that the adaptive individuals adjust their behavior depending on the presence of artificial agents in their groups, compensating for the artificial agents' cooperative efforts (or lack thereof). We also find that risk plays a decisive role in assessing whether or not hybrid teams should be formed to tackle a collective-risk dilemma. When the risk of collective disaster is high, cooperation in the adaptive population falls dramatically in the presence of cooperative artificial agents. This is a story of compensation rather than cooperation: adaptive agents step in to secure group success when the artificial agents are not cooperative enough, but withhold cooperation when the others already cooperate. By contrast, when the risk of collective disaster is low, collective success improves substantially while cooperation levels within the adaptive population remain unchanged. Artificial agents can thus improve the collective success of hybrid teams; however, deploying them requires an accurate assessment of the risk at hand if they are to benefit the adaptive population (i.e., the humans) in the long term.
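To make the setting concrete, the following is a minimal Python sketch of a one-shot collective-risk dilemma played by a hybrid group, under the standard formulation of the game: each agent holds an endowment, cooperators pay a contribution cost, and if fewer than a threshold number of group members cooperate, everyone loses their remaining endowment with probability equal to the risk. The parameter names and values (endowment, cost, threshold, risk, p_fixed, p_adaptive) are illustrative assumptions, not values taken from the paper; the fixed-behavior (artificial) agents are modeled as cooperating with a fixed probability, standing in for a stochastic strategy learned offline.

```python
import random

def crd_payoffs(actions, endowment=1.0, cost=0.1, threshold=3, risk=0.9):
    """Payoffs for one group in a one-shot collective-risk dilemma.

    actions: list of booleans, True = cooperate. Cooperators pay `cost`.
    If fewer than `threshold` members cooperate, every member loses the
    remainder of their endowment with probability `risk`.
    All parameter values here are illustrative assumptions.
    """
    n_cooperators = sum(actions)
    disaster = n_cooperators < threshold and random.random() < risk
    return [0.0 if disaster else endowment - (cost if a else 0.0)
            for a in actions]

def sample_hybrid_group(n_adaptive=4, n_fixed=2, p_adaptive=0.5, p_fixed=0.8):
    """Draw one hybrid group's actions.

    Fixed-behavior (artificial) agents cooperate with probability
    `p_fixed`, a proxy for a stochastic strategy learned offline.
    Adaptive agents' cooperation probability `p_adaptive` would be
    shaped by the evolutionary dynamics; here it is a placeholder.
    """
    actions = [random.random() < p_adaptive for _ in range(n_adaptive)]
    actions += [random.random() < p_fixed for _ in range(n_fixed)]
    return actions

if __name__ == "__main__":
    group = sample_hybrid_group()
    print(group, crd_payoffs(group))
```

In this sketch, raising `risk` makes failure to reach the threshold more costly in expectation, which is the lever behind the abstract's contrast between high-risk and low-risk scenarios.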

MeSH terms

  • Biological Evolution
  • Cooperative Behavior*
  • Disasters*
  • Game Theory
  • Humans
  • Intelligence
  • Social Interaction

Grants and funding

I.T., P.S. and T.L. are supported by an FWO (Fonds Wetenschappelijk Onderzoek) project with grant no. G054919N. E.F.D. is supported by an F.R.S.-FNRS (Fonds de la Recherche Scientifique) Chargé de Recherche grant (nr. 40005955). T.L. is furthermore supported by two F.R.S.-FNRS PDR grants (grant numbers 31257234 and 40007793) and also acknowledges support from TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No. 952215. E.F.D. and T.L. are supported by Service Public de Wallonie Recherche under grant n° 2010235–ariac by digitalwallonia4.ai. T.L. and P.S. acknowledge the support of the Flemish Government through the Flanders AI Research Program. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.