Accuracy Versus Simplification in an Approximate Logic Neural Model

IEEE Trans Neural Netw Learn Syst. 2021 Nov;32(11):5194-5207. doi: 10.1109/TNNLS.2020.3027298. Epub 2021 Oct 27.

Abstract

An approximate logic neural model (ALNM) is a novel single-neuron model with plastic dendritic morphology. During training, the model eliminates unnecessary synapses and useless dendritic branches, producing a specific dendritic structure for a particular task. The simplified structure of the ALNM can be substituted by a logic circuit classifier (LCC) without losing any essential information. Since the LCC consists only of comparators and logic NOT, AND, and OR gates, it can be easily implemented in hardware. However, the architecture of the ALNM affects its learning capacity, generalization capability, computing time, and how closely the LCC approximates it. Thus, a Pareto-based multiobjective differential evolution (MODE) algorithm is proposed to simultaneously optimize the ALNM's topology and weights. MODE can generate a concise and accurate LCC for each specific task from the ALNM. To verify the effectiveness of MODE, extensive experiments are performed on eight benchmark classification problems. The statistical results demonstrate that MODE is superior to conventional learning methods such as the backpropagation algorithm and single-objective evolutionary algorithms. In addition, compared against several commonly used classifiers, both the ALNM and the LCC achieve promising and competitive classification performance on the benchmark problems. Moreover, the experimental results verify that the LCC classifies faster than the other classifiers.
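The abstract describes the LCC as built only from comparators and NOT, AND, and OR gates. A minimal sketch of such a circuit might look as follows; the thresholds, branch wiring, and function names here are purely illustrative assumptions (in the paper, the structure is derived from a trained ALNM), not the authors' actual circuit.

```python
# Hypothetical sketch of a logic circuit classifier (LCC):
# comparators threshold each input feature, and the resulting Boolean
# signals are combined with NOT/AND/OR gates. Thresholds and wiring
# are illustrative only; in the paper they come from a trained ALNM.

def comparator(x, theta):
    """Comparator gate: outputs True if the feature exceeds the threshold."""
    return x > theta

def lcc_predict(features, thresholds):
    """Toy two-branch LCC: each 'dendritic branch' ANDs its comparator
    outputs, and the branches are ORed at the output (soma)."""
    c = [comparator(x, t) for x, t in zip(features, thresholds)]
    branch1 = c[0] and c[1]          # AND gate on branch 1
    branch2 = (not c[2]) and c[3]    # NOT + AND gates on branch 2
    return int(branch1 or branch2)   # OR gate at the output

# Example with 4 features and uniform illustrative thresholds
print(lcc_predict([0.9, 0.7, 0.2, 0.8], [0.5, 0.5, 0.5, 0.5]))  # -> 1
```

Because every operation is a comparison or a Boolean gate, such a circuit maps directly onto hardware, which is the property the abstract highlights for fast classification.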

Publication types

  • Comparative Study
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Databases, Factual / standards*
  • Dendrites / physiology
  • Humans
  • Logic*
  • Neural Networks, Computer*
  • Neuronal Plasticity / physiology
  • Reproducibility of Results
  • Synapses / physiology