Goal-proximity decision-making

Cogn Sci. 2013 May-Jun;37(4):757-74. doi: 10.1111/cogs.12034. Epub 2013 Mar 29.

Abstract

Reinforcement learning (RL) models of decision-making cannot account for human decisions made in the absence of prior reward or punishment. We propose a mechanism for choosing among available options based on goal-option association strengths, where association strengths between objects represent previously experienced object proximity. The proposed mechanism, Goal-Proximity Decision-making (GPD), is implemented within the ACT-R cognitive framework. In three maze-navigation simulations, GPD is found to be more efficient than RL, and its advantage over RL appears to grow as task difficulty increases. We also present an experiment in which participants are asked to make choices in the absence of prior reward; GPD captures human performance in this experiment better than RL.
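
The abstract describes GPD only at a high level: associations are built from experienced proximity between objects rather than from reward, and choices favor the option most strongly associated with the current goal. The sketch below is a minimal illustration of that idea; the class name, proximity-update rule, and tie-breaking are assumptions for exposition, not the paper's ACT-R implementation or its equations.

```python
# Hypothetical sketch of goal-proximity choice, assuming a simple
# co-occurrence count as the association strength (illustrative only).
from collections import defaultdict


class GPDAgent:
    def __init__(self, increment=1.0):
        # association[a][b]: learned link strength between objects a and b,
        # accumulated from experienced proximity, not from reward.
        self.association = defaultdict(lambda: defaultdict(float))
        self.increment = increment

    def observe(self, objects):
        """Strengthen associations among objects experienced in proximity
        (e.g., seen together in the same maze location)."""
        for a in objects:
            for b in objects:
                if a != b:
                    self.association[a][b] += self.increment

    def choose(self, options, goal):
        """Pick the option most strongly associated with the goal.
        Unseen pairs have strength 0; ties are broken arbitrarily here."""
        return max(options, key=lambda o: self.association[o][goal])


# Example: after seeing the goal near door "B", the agent prefers "B"
# even though no reward has ever been delivered.
agent = GPDAgent()
agent.observe(["door_B", "goal"])
print(agent.choose(["door_A", "door_B"], "goal"))  # -> "door_B"
```

The contrast with RL in this toy setting is that a reward-driven learner would have no basis for preferring either door before the first reward, whereas proximity-based associations already bias the choice.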

Publication types

  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Adult
  • Association Learning / physiology*
  • Computer Simulation
  • Decision Making / physiology*
  • Goals*
  • Humans
  • Models, Psychological
  • Punishment
  • Reinforcement, Psychology*