Threshold learning algorithm for memristive neural network with binary switching behavior

Neural Netw. 2024 Apr 30:176:106355. doi: 10.1016/j.neunet.2024.106355. Online ahead of print.

Abstract

On-chip learning is an effective method for adjusting artificial neural networks in neuromorphic computing systems by considering hardware intrinsic properties. However, it faces challenges due to hardware nonidealities, such as the nonlinearity of potentiation and depression and limitations on fine weight adjustment. In this study, we propose a threshold learning algorithm for a variation-tolerant ternary neural network in a memristor crossbar array. This algorithm utilizes two tightly separated resistance states in memristive devices to represent weight values. The high-resistance state (HRS) and low-resistance state (LRS), defined as read currents of < 0.1 μA and > 1 μA, respectively, were successfully programmed in a 32 × 32 crossbar array and exhibited half-normal distributions due to the programming method. To validate our approach experimentally, a 64 × 10 single-layer fully connected network was trained in the fabricated crossbar on an 8 × 8 MNIST dataset using the threshold learning algorithm, in which a weight value is updated only when its gradient, determined by backpropagation, exceeds a threshold value. Thanks to the large margin between the two memristor states, we observed only a 0.42 % drop in classification accuracy compared to the baseline network results. The threshold learning algorithm is expected to alleviate the programming burden and be utilized in variation-tolerant neuromorphic architectures.
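The update rule described above can be illustrated with a minimal NumPy sketch: ternary weights are changed by one step only where the backpropagated gradient magnitude exceeds a threshold. This is an illustrative reconstruction, not the authors' implementation; the threshold value, the sign-based step, and the 64 × 10 layer size used here are assumptions for demonstration.

```python
# Minimal sketch of a threshold-based update for a ternary weight layer.
# Assumptions (not from the paper): gradient threshold of 0.05, a sign-based
# single-step update, and a randomly generated gradient standing in for
# backpropagation.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 64, 10        # single-layer fully connected network, as in the abstract
threshold = 0.05            # assumed gradient threshold (hyperparameter)

# Ternary weights {-1, 0, +1}; in hardware each level would map to HRS/LRS states.
W = rng.integers(-1, 2, size=(n_in, n_out)).astype(np.float32)

def threshold_update(W, grad, threshold):
    """Update ternary weights only where |gradient| exceeds the threshold."""
    step = -np.sign(grad)                 # move each weight against its gradient
    mask = np.abs(grad) > threshold       # select weights with large gradients
    return np.where(mask, np.clip(W + step, -1.0, 1.0), W)

# Example: one update step with a stand-in gradient from backpropagation.
grad = rng.normal(scale=0.05, size=W.shape).astype(np.float32)
W_new = threshold_update(W, grad, threshold)
print("fraction of weights changed:", np.mean(W_new != W))
```

In this sketch, raising the threshold reduces how many devices must be reprogrammed per update, which reflects the claimed benefit of alleviating the programming burden.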

Keywords: Memristor crossbar array; Neuromorphic system; Ternary neural network; Threshold learning algorithm.