Development of Few-Shot Learning Capabilities in Artificial Neural Networks When Learning Through Self-Supervised Interaction

IEEE Trans Pattern Anal Mach Intell. 2024 Jan;46(1):209-219. doi: 10.1109/TPAMI.2023.3323040. Epub 2023 Dec 5.

Abstract

Most artificial neural networks used for object recognition are trained in a fully supervised setup. This is not only resource-intensive, as it requires large data sets of labeled examples, but also quite different from how humans learn. We use a setup in which an artificial agent first learns in a simulated world through self-supervised, curiosity-driven exploration. Following this initial learning phase, the learned representations can be used to quickly associate semantic concepts, such as different types of doors, using one or more labeled examples. To do this, we use a method we call fast concept mapping, which uses correlated firing patterns of neurons to define and detect semantic concepts. This association works instantaneously with very few labeled examples, similar to what we observe in humans in a phenomenon called fast mapping. Strikingly, we can already identify objects with as few as one labeled example, which highlights the quality of the encoding learned through self-supervised interaction with the world. Our approach therefore presents a feasible strategy for learning concepts without much supervision, and it shows that through pure interaction an agent can learn meaningful representations of an environment that work better for few-shot learning than those produced by non-interactive methods.
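The idea of defining a concept by correlated firing patterns can be illustrated with a minimal sketch. The function names, the top-fraction threshold, and the toy data below are illustrative assumptions, not the paper's implementation: a concept is taken to be the set of units that fire most strongly across the few labeled example embeddings, and a new embedding is scored by how much of its activation mass falls on those units.

```python
import numpy as np

# Hypothetical sketch of fast concept mapping; names, the top_frac
# threshold, and the toy embeddings are assumptions for illustration.

def concept_mask(embeddings, top_frac=0.1):
    """Define a concept as the fraction of units that fire most
    strongly, averaged over the few labeled example embeddings."""
    mean_act = embeddings.mean(axis=0)
    k = max(1, int(top_frac * mean_act.size))
    mask = np.zeros(mean_act.size, dtype=bool)
    mask[np.argsort(mean_act)[-k:]] = True
    return mask

def concept_score(embedding, mask):
    """Score a new embedding by the share of its (non-negative)
    activation mass that lands on the concept's units."""
    return embedding[mask].sum() / (embedding.sum() + 1e-8)

# Toy usage: a single labeled "door" embedding defines the concept;
# an embedding with the same active units scores higher than a
# random one, mimicking one-shot detection.
rng = np.random.default_rng(0)
door = np.abs(rng.normal(size=(1, 64)))
door[:, :6] += 5.0                      # units 0-5 fire for doors
mask = concept_mask(door)

similar = np.abs(rng.normal(size=64))
similar[:6] += 5.0                      # same units active
different = np.abs(rng.normal(size=64)) # unrelated pattern
```

In this sketch, `concept_score(similar, mask)` exceeds `concept_score(different, mask)`, so a single labeled example suffices to separate the two, provided the underlying encoder already produces embeddings in which concept-relevant units co-fire.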