Self-controlled multilevel writing architecture for fast training in neuromorphic RRAM applications

Nanotechnology. 2018 Oct 5;29(40):405203. doi: 10.1088/1361-6528/aad2fa. Epub 2018 Jul 12.

Abstract

Memristor crossbar arrays naturally accelerate neural network applications by carrying out parallel multiply-add operations. Due to the abrupt SET operation characterizing most RRAM devices, on-chip training usually requires iterative write/read stages, large and variation-sensitive circuitry, or both, to achieve multilevel capabilities. This paper presents a self-controlled architecture that programs multilevel devices with a short, fixed operation duration. We rely on an ad hoc scheme to self-control the abrupt SET, choking the writing stimulus as the cell reaches the desired level. To this end, we exploit the voltage-divider concept by placing a variable resistive load in series with the target cell. We validated the proposal through thorough simulations using RRAM cell models fitted to extremely fast physical devices and a commercial 40 nm CMOS technology, both exhibiting variability. In every case, the proposed architecture yielded progressive and almost-linear resistive levels in crossbar structures of both tested sizes.
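
The self-limiting mechanism described above can be sketched with a toy voltage-divider model: the write pulse divides between the series load and the cell, so as the cell's resistance collapses during SET, the cell's share of the voltage drops below the SET threshold and the transition chokes itself off. This is a minimal illustration, not the paper's circuit; all component values, thresholds, and the discrete SET step are assumptions.

```python
# Toy model of voltage-divider self-limited SET programming.
# All numeric values below are illustrative assumptions, not from the paper.

V_APPLIED = 2.0   # write pulse amplitude in volts (assumed)
V_SET = 1.0       # cell voltage below which SET halts (assumed)
R_STEP = 5e3      # resistance drop per SET increment in ohms (assumed)

def cell_voltage(r_cell, r_load, v_applied=V_APPLIED):
    """Voltage across the cell in series with the load (voltage divider)."""
    return v_applied * r_cell / (r_cell + r_load)

def program_cell(r_load, r_start=100e3, r_min=1e3):
    """Lower the cell resistance while the divider keeps V_cell above V_SET.

    A larger series load chokes the stimulus earlier, so the cell settles
    at a higher resistance: choosing r_load selects the resistive level.
    """
    r_cell = r_start
    while r_cell > r_min and cell_voltage(r_cell, r_load) > V_SET:
        r_cell -= R_STEP
    return r_cell

# Larger series loads stop the SET transition at higher resistances,
# giving distinct, load-selected multilevel states:
levels = [program_cell(r_load) for r_load in (10e3, 20e3, 40e3)]
print(levels)
```

With these assumed values the SET freezes exactly where the divider pulls the cell voltage down to V_SET (here, where r_cell equals r_load), so sweeping the load resistance sweeps the programmed level without any iterative write/verify loop.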