Cognitive Action Laws: The Case of Visual Features

IEEE Trans Neural Netw Learn Syst. 2020 Mar;31(3):938-949. doi: 10.1109/TNNLS.2019.2911174. Epub 2019 May 8.

Abstract

This paper proposes a theory for understanding perceptual learning processes within the general framework of laws of nature. Artificial neural networks are regarded as systems whose connections are Lagrangian variables, namely, functions of time. These variables are used to minimize the cognitive action, an appropriate functional index that measures the agent's interactions with the environment. The cognitive action contains a potential term and a kinetic term that closely resemble the classic formulation of regularization in machine learning. A particular choice of the functional index leads to fourth-order differential equations, the Cognitive Action Laws (CAL), whose structure mirrors classic formulations of machine learning. In particular, unlike the action of mechanics, the stationarity condition corresponds to the global minimum. Moreover, it is proven that the typical asymptotic learning conditions on the weights can coexist with the initialization, provided that the system dynamics is driven under a policy referred to as information overloading control. Finally, the theory is evaluated experimentally on the problem of feature extraction in computer vision.
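To make the variational viewpoint concrete, the following is a minimal sketch, not the authors' implementation: it discretizes an action functional over a single time-dependent weight w(t), combining a kinetic term (penalizing fast weight changes, the temporal analog of regularization) and a potential term (a toy quadratic task loss). The paper's CAL are fourth-order equations; for simplicity this sketch uses the simpler classic second-order action, and all names, constants, and the target value are illustrative assumptions. The whole trajectory is optimized by gradient descent with the initialization w(0) held fixed.

```python
import numpy as np

def cognitive_action(w, dt=0.1, alpha=1.0, target=2.0):
    """Discretized action: kinetic term (weight velocity) + potential term (loss).

    Illustrative only; the paper's actual functional leads to fourth-order
    Euler-Lagrange equations, whereas this toy action is second-order.
    """
    kinetic = 0.5 * alpha * np.sum(((w[1:] - w[:-1]) / dt) ** 2) * dt
    potential = 0.5 * np.sum((w - target) ** 2) * dt
    return kinetic + potential

# Trajectory of the weight over 21 time points spanning t in [0, 2].
dt, alpha, target = 0.1, 1.0, 2.0
w = np.zeros(21)  # initialization: w(0) = 0, trajectory starts flat

lr = 0.02
for _ in range(10000):
    # Analytic gradient of the discretized action w.r.t. each w[i].
    g = (w - target) * dt                 # potential term
    v = alpha * (w[1:] - w[:-1]) / dt     # discrete weight "velocity"
    g[:-1] -= v                           # kinetic term, left neighbor
    g[1:] += v                            # kinetic term, right neighbor
    g[0] = 0.0                            # keep the initialization fixed
    w -= lr * g

# At stationarity the trajectory rises from w(0)=0 toward the potential
# minimum (target), trading off the loss against the kinetic penalty.
```

A useful property to observe is the one the abstract highlights: for this convex functional, the stationary trajectory found by gradient descent is the global minimizer, and the fixed initial condition coexists with the asymptotic behavior of the weights.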

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Cognition* / physiology
  • Machine Learning*
  • Neural Networks, Computer*
  • Pattern Recognition, Automated / methods*
  • Photic Stimulation / methods