Structure and Base Analysis of Receptive Field Neural Networks in a Character Recognition Task

Sensors (Basel). 2022 Dec 12;22(24):9743. doi: 10.3390/s22249743.

Abstract

This paper explores extensions and restrictions of shallow convolutional neural networks with fixed kernels trained on a limited number of training samples. We extend recent work on Receptive Field Neural Networks (RFNN) and show their behaviour under different bases and step-by-step changes to the network architecture. To ensure the reproducibility of the results, we simplified the baseline RFNN architecture to a single-layer CNN and introduced a deterministic methodology for RFNN training and evaluation. This methodology enabled us to evaluate the significance of changes using Bayesian comparison, an approach that has recently gained wide use in neural network research. The results indicate that a change of basis may affect the results less than retraining with a different random seed. We show that the simplified network, with the tested bases, performs similarly to the chosen baseline RFNN architecture. The data also show the positive impact of energy normalization of the filters used, which improves classification accuracy even with randomly initialized filters.
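The two core ideas summarized above can be sketched briefly. In an RFNN-style layer, kernels are not learned freely: each effective kernel is a learned linear mixture of a fixed basis (classically, 2-D Gaussian derivatives). Energy normalization then rescales every basis kernel to unit L2 energy. The sketch below is illustrative only; the basis size, scale, and the exact normalization used in the paper may differ, and the helper names (`gaussian_derivative_basis`, `energy_normalize`) are hypothetical.

```python
import numpy as np

def gaussian_derivative_basis(size=7, sigma=1.5, max_order=2):
    """Separable 2-D Gaussian-derivative basis up to total order `max_order`
    (a common RFNN choice; the paper tests several bases)."""
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    g1 = -x / sigma**2 * g                   # first derivative of the Gaussian
    g2 = (x**2 - sigma**2) / sigma**4 * g    # second derivative
    basis_1d = [g, g1, g2][: max_order + 1]
    basis = [np.outer(gi, gj)
             for i, gi in enumerate(basis_1d)
             for j, gj in enumerate(basis_1d)
             if i + j <= max_order]
    return np.stack(basis)                   # (n_basis, size, size)

def energy_normalize(filters, eps=1e-12):
    """Scale each kernel to unit L2 energy -- the normalization the abstract
    reports as helpful even for randomly initialized filters."""
    flat = filters.reshape(len(filters), -1)
    norms = np.sqrt((flat**2).sum(axis=1, keepdims=True)) + eps
    return (flat / norms).reshape(filters.shape)

basis = gaussian_derivative_basis()          # 6 kernels for max_order=2
kernels = energy_normalize(basis)

# RFNN principle: only the mixing weights alpha are trainable; the basis is fixed.
# Effective kernel c is K_c = sum_b alpha[c, b] * kernels[b].
alpha = np.random.default_rng(0).standard_normal((16, len(kernels)))
effective = np.tensordot(alpha, kernels, axes=1)   # (16, 7, 7) mixed kernels
```

Because only `alpha` is learned, the number of trainable convolution parameters is the number of basis kernels per output channel rather than the full kernel area, which is what makes such networks attractive with few training samples.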

Keywords: fixed kernels; multiscale analysis; reproducibility of neural networks; shallow neural networks; structured receptive fields.

MeSH terms

  • Bayes Theorem
  • Neural Networks, Computer*
  • Reproducibility of Results
  • Seeds*