Precise spatial memory in local random networks

Phys. Rev. E 102, 022405 (2020). doi: 10.1103/PhysRevE.102.022405

Abstract

Self-sustained, elevated neuronal activity persisting on timescales of 10 s or longer is thought to be vital for aspects of working memory, including brain representations of real space. Continuous-attractor neural networks, one of the best-known modeling frameworks for persistent activity, capture crucial aspects of such spatial memory, but these models tend to require highly structured or regular synaptic architectures. In contrast, we study numerical simulations of a geometrically embedded model with a local, but otherwise random, connectivity profile; imposing a global regulation of the system's mean firing rate produces localized, finely spaced discrete attractors that effectively span a two-dimensional manifold. We demonstrate that this set of attracting states can reliably encode the spatial locations at which the system receives external input, thereby accomplishing spatial memory via attractor dynamics without synaptic fine-tuning or regular structure. We then measure the network's storage capacity numerically and find that the statistics of retrievable positions are equivalent to a full tiling of the plane. Such a tiling has hitherto been achievable only with (approximately) translationally invariant synapses, and it may be of interest for modeling biological phenomena such as visuospatial working memory in two dimensions.
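
The abstract gives no implementation details, but the class of model it describes can be sketched in a few lines. The Python/NumPy sketch below is illustrative only: all parameters (N, R, K, T) are invented, and the k-winners-take-all step is merely one simple stand-in for a global regulation of the mean firing rate; the paper's actual dynamics may differ.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (hypothetical, not taken from the paper).
    N = 2000   # number of neurons
    R = 0.08   # connection radius: couplings are local
    K = 40     # active neurons enforced per step (global rate regulation)
    T = 60     # maximum number of update steps

    # Geometric embedding: scatter neurons uniformly over the unit square.
    pos = rng.random((N, 2))

    # Local, but otherwise random, connectivity: pairs closer than R get
    # i.i.d. positive weights; there is no translational invariance.
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=-1)
    W = np.where((d2 < R**2) & (d2 > 0.0), rng.random((N, N)), 0.0)

    def settle(active):
        # Each neuron sums input from currently active neighbors; a global
        # k-winners-take-all step keeps exactly K neurons firing, standing
        # in for a regulation of the network's mean firing rate.
        for _ in range(T):
            drive = W @ active
            new = np.zeros(N)
            new[np.argsort(drive)[-K:]] = 1.0
            if np.array_equal(new, active):   # fixed point: an attractor
                return new
            active = new
        return active

    def recall(x, y):
        # Cue a bump of activity at (x, y), let the network settle, and
        # decode the stored location as the centroid of the active set.
        cue = np.zeros(N)
        cue[np.argsort(np.linalg.norm(pos - [x, y], axis=1))[:K]] = 1.0
        return pos[settle(cue) > 0.0].mean(axis=0)

    print(recall(0.3, 0.7))   # decoded position should remain near the cue

In this toy version, the locality of W prevents a cued bump of activity from spreading far, while the global constraint prevents runaway excitation, so the settled state is a localized attractor whose centroid approximately encodes the cued location.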