A Developmental Cognitive Architecture for Trust and Theory of Mind in Humanoid Robots

IEEE Trans Cybern. 2022 Mar;52(3):1947-1959. doi: 10.1109/TCYB.2020.3002892. Epub 2022 Mar 11.

Abstract

As artificial systems are starting to be widely deployed in real-world settings, it becomes critical to provide them with the ability to discriminate between different informants and to learn from reliable sources. Moreover, equipping an artificial agent with the ability to infer beliefs may improve the collaboration between humans and machines in several ways. In this article, we propose a hybrid cognitive architecture, called Thrive, that unifies in a single computational model recent discoveries regarding the underlying mechanisms involved in trust. The model is based on biological observations that confirmed the role of the midbrain in trial-and-error learning, and on developmental studies indicating how essential a theory of mind is for building empathetic trust. Thrive is built on top of an actor-critic framework that is used to stabilize the weights of two self-organizing maps. A Bayesian network embeds prior knowledge into an intrinsic environment, providing a measure of cost that is used to bootstrap learning without an external reward signal. Following a developmental robotics approach, we embodied the model in the iCub humanoid robot and replicated two psychological experiments. The results are in line with real data and shed some light on the mechanisms involved in trust-based learning in children and robots.
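
The sketch below (assumed, not the authors' implementation) illustrates one way the components named in the abstract could interact: an actor-critic loop whose temporal-difference error drives trial-and-error learning and gates the plasticity of two self-organizing maps, while a toy Bayesian-network stand-in supplies an intrinsic cost in place of an external reward. All dimensions, priors, conditional probabilities, variable names, and the exact coupling between the TD error and the SOM updates are illustrative assumptions, written here in Python.

    # Minimal sketch of an actor-critic loop gating two SOMs, with an
    # intrinsic cost from a hand-set conditional table standing in for
    # the Bayesian network. Every quantity here is an assumption.
    import numpy as np

    rng = np.random.default_rng(0)
    N_STATES, N_ACTIONS, SOM_SIDE, IN_DIM = 10, 2, 5, 4

    # Two SOMs, e.g. one for the informant and one for the object context
    # (assumed roles; the abstract only mentions "two self-organizing maps").
    som_a = rng.random((SOM_SIDE * SOM_SIDE, IN_DIM))
    som_b = rng.random((SOM_SIDE * SOM_SIDE, IN_DIM))

    # Tabular actor-critic over a discretized state space (assumed encoding).
    actor = np.zeros((N_STATES, N_ACTIONS))   # action preferences
    critic = np.zeros(N_STATES)               # state values

    def intrinsic_cost(informant_reliable: bool, action_accept: bool) -> float:
        """Toy stand-in for the Bayesian network: P(error | reliability,
        action) read from a hand-set conditional table, returned as a cost."""
        p_error = {
            (True, True): 0.1,   # accept a reliable informant: low cost
            (True, False): 0.7,
            (False, True): 0.8,  # accept an unreliable informant: high cost
            (False, False): 0.2,
        }[(informant_reliable, action_accept)]
        return p_error

    def som_update(som, x, gain, lr=0.1):
        """Move the best-matching unit toward the input, with the step size
        scaled by the actor-critic surprise signal (assumed coupling)."""
        bmu = np.argmin(np.linalg.norm(som - x, axis=1))
        som[bmu] += lr * gain * (x - som[bmu])

    alpha, beta, gamma = 0.1, 0.1, 0.9
    for _ in range(2000):
        s = rng.integers(N_STATES)
        reliable = bool(rng.integers(2))

        # Softmax policy over the actor's preferences.
        prefs = actor[s] - actor[s].max()
        probs = np.exp(prefs) / np.exp(prefs).sum()
        a = rng.choice(N_ACTIONS, p=probs)

        # Intrinsic cost negated into a reward: no external reward signal.
        r = -intrinsic_cost(reliable, action_accept=bool(a == 0))
        s_next = rng.integers(N_STATES)

        # Standard TD(0) actor-critic update.
        td_error = r + gamma * critic[s_next] - critic[s]
        critic[s] += alpha * td_error
        actor[s, a] += beta * td_error

        # Gate SOM plasticity by the magnitude of the TD error.
        gain = min(1.0, abs(td_error))
        som_update(som_a, rng.random(IN_DIM), gain)
        som_update(som_b, rng.random(IN_DIM), gain)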

MeSH terms

  • Bayes Theorem
  • Child
  • Cognition
  • Humans
  • Robotics* / methods
  • Theory of Mind*
  • Trust