Efficient Hybrid Training Method for Neuromorphic Hardware Using Analog Nonvolatile Memory

IEEE Trans Neural Netw Learn Syst. 2023 Nov 24:PP. doi: 10.1109/TNNLS.2023.3327906. Online ahead of print.

Abstract

Neuromorphic hardware using nonvolatile analog synaptic devices offers promising advantages in reducing the energy and time consumed by large-scale vector-matrix multiplication (VMM) operations. However, previously reported training methods for neuromorphic hardware have shown appreciably reduced accuracy due to the nonideal characteristics of analog devices, and they rely on conductance tuning protocols that incur substantial training cost. Here, we propose a novel hybrid training method that efficiently trains neuromorphic hardware based on nonvolatile analog memory cells, and we experimentally demonstrate the method's high performance on fabricated hardware. Our training method does not rely on a conductance tuning protocol to transfer weight updates to the analog synaptic devices, which significantly reduces online training cost. When the proposed method is applied, the accuracy of the hardware-based neural network approaches that of a software-based neural network after only one epoch of training, even though only the first synaptic layer is trained on the fabricated synaptic array. Moreover, the proposed hybrid training method can be efficiently applied to low-power neuromorphic hardware built from various types of synaptic devices, including those whose weight-update characteristics are highly nonlinear. This successful demonstration of the proposed method on fabricated hardware shows that neuromorphic hardware using nonvolatile analog memory cells is an increasingly promising platform for future artificial intelligence.
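To make the VMM-on-crossbar idea concrete, the following is a minimal NumPy sketch of an analog crossbar computing a VMM, where each signed weight is encoded as a differential pair of nonnegative conductances and the read-out is perturbed by device noise. The differential-pair encoding, the multiplicative read-noise model, and all names here are illustrative assumptions, not the device model or training procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_vmm(x, g_pos, g_neg, read_noise=0.01):
    """Simulated analog VMM on a crossbar array.

    Each effective weight is a differential conductance pair,
    W = g_pos - g_neg, so that signed weights can be represented
    with nonnegative device conductances. Multiplicative Gaussian
    read noise on each conductance is an assumed nonideality, not
    the specific device model of the paper.
    """
    noisy_pos = g_pos * (1 + read_noise * rng.standard_normal(g_pos.shape))
    noisy_neg = g_neg * (1 + read_noise * rng.standard_normal(g_neg.shape))
    return x @ (noisy_pos - noisy_neg).T

# Map an ideal signed weight matrix onto conductance pairs:
# positive entries go to g_pos, negative entries to g_neg.
W = np.array([[0.5, -0.2],
              [0.1,  0.3]])
g_pos = np.clip(W, 0.0, None)
g_neg = np.clip(-W, 0.0, None)

x = np.array([1.0, 2.0])
y_analog = crossbar_vmm(x, g_pos, g_neg)   # noisy hardware read-out
y_ideal = x @ W.T                          # ideal software VMM
print(np.abs(y_analog - y_ideal).max())    # small deviation from read noise
```

In a hybrid-training setting of the kind the abstract describes, such an analog VMM would serve as the forward pass for the first synaptic layer, while the remaining layers and the weight updates stay in software, avoiding repeated conductance tuning of the devices.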