Tweaking Deep Neural Networks

IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5715-5728. doi: 10.1109/TPAMI.2021.3079511. Epub 2022 Aug 4.

Abstract

Deep neural networks are trained to maximize overall accuracy on given training data through a learning process. It is therefore difficult to adjust a trained network to improve the accuracy of specific problematic classes, or classes of interest that may be valuable to particular users or applications. To address this issue, we propose the synaptic join method, which tweaks a neural network by adding cross-layer synapses from intermediate hidden layers directly to the output layer and, if necessary, training only these added synapses. To select the most effective synapses, the synaptic join method evaluates the performance of all possible candidate synapses between hidden neurons and output neurons based on the distribution of all possible proper weights. The experimental results show that the proposed method can effectively improve the accuracies of specific classes in a controllable way.
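The core idea can be illustrated with a minimal sketch: keep a trained network's weights frozen, add one extra synapse that skips from a hidden neuron directly to an output neuron, and train only that weight. Everything below (the toy 2-3-2 network, its weights, the chosen neuron indices, and the learning rate) is an illustrative assumption, not the paper's actual implementation or selection procedure.

```python
import math

def sigmoid(x):
    # Standard logistic activation for the hidden layer.
    return 1.0 / (1.0 + math.exp(-x))

# Frozen, pre-trained weights of a toy 2-input, 3-hidden, 2-output
# network (values are made up for illustration).
W1 = [[0.5, -0.3], [0.8, 0.1], [-0.6, 0.7]]   # hidden x input
W2 = [[0.4, -0.2, 0.9], [-0.5, 0.3, 0.2]]     # output x hidden

def forward(x, extra_w, h_idx, o_idx):
    """Forward pass with one added cross-layer synapse of weight
    extra_w from hidden neuron h_idx to output neuron o_idx."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    o = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    o[o_idx] += extra_w * h[h_idx]  # the tweak: an extra skip synapse
    return h, o

# Train ONLY the added synapse to push the score of one chosen
# output class toward a target on one example; W1 and W2 stay frozen.
x, target = [1.0, 0.5], 1.0
h_idx, o_idx = 0, 1          # assumed choice; the paper selects these
extra_w, lr = 0.0, 0.5       # by evaluating all candidate synapses
for _ in range(200):
    h, o = forward(x, extra_w, h_idx, o_idx)
    err = o[o_idx] - target
    # Gradient of 0.5 * err^2 with respect to extra_w is err * h[h_idx].
    extra_w -= lr * err * h[h_idx]

h, o = forward(x, extra_w, h_idx, o_idx)
```

Because the output is linear in the single trained weight, this one-parameter update converges quickly; the sketch only shows the "train just the added synapses" aspect, not the paper's candidate-evaluation step over weight distributions.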

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Learning / physiology
  • Machine Learning
  • Neural Networks, Computer*
  • Neurons