Learning feature representations with a cost-relevant sparse autoencoder

Int J Neural Syst. 2015 Feb;25(1):1450034. doi: 10.1142/S0129065714500348.

Abstract

There is increasing interest in the machine learning community in learning feature representations directly from (unlabeled) data instead of relying on hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large part of the autoencoder's representational capacity is spent minimizing the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task-relevant information in the data. This selective attention is achieved by weighting the reconstruction error, which reduces the influence of noisy inputs during learning. The proposed model is trained on a number of publicly available image data sets, and its test error rate is compared to that of a standard sparse autoencoder and of other methods, such as the denoising autoencoder and the contractive autoencoder.
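The core idea of weighting the reconstruction error can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the weight vector `w`, the KL-divergence sparsity penalty, and the parameter names `rho` and `beta` are assumptions based on the standard sparse-autoencoder cost.

```python
import numpy as np

def weighted_sparse_ae_loss(x, x_hat, h, w, rho=0.05, beta=3.0):
    """Sketch of a cost-weighted sparse-autoencoder objective.

    x     : input vector
    x_hat : reconstruction of x
    h     : hidden-unit activations in (0, 1), e.g. sigmoid outputs
    w     : per-input weights; small values down-weight noisy inputs
            (hypothetical name, for illustration only)
    rho   : target mean activation (sparsity target)
    beta  : strength of the sparsity penalty
    """
    # Weighted reconstruction error: inputs judged noisy receive a
    # small weight and so contribute less to the total cost.
    recon = 0.5 * np.sum(w * (x - x_hat) ** 2)

    # Standard KL-divergence sparsity penalty on the mean activation,
    # as in the usual sparse autoencoder.
    rho_hat = np.clip(np.mean(h), 1e-8, 1 - 1e-8)
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl

# With uniform weights (w = 1) the cost reduces to the ordinary
# sparse-autoencoder objective; down-weighting an input shrinks
# its share of the reconstruction term.
x = np.array([1.0, 0.0])
x_hat = np.array([0.5, 0.5])
h = np.array([0.05, 0.05])
uniform = weighted_sparse_ae_loss(x, x_hat, h, np.ones(2))
weighted = weighted_sparse_ae_loss(x, x_hat, h, np.array([1.0, 0.0]))
```

With all weights equal to one, the model behaves like a standard sparse autoencoder; the weighting only changes how much each input dimension is allowed to influence learning.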

Keywords: Sparse autoencoder; unsupervised feature learning; weighted cost function.

MeSH terms

  • Algorithms
  • Artificial Intelligence*
  • Attention
  • Concept Formation / physiology*
  • Humans
  • Learning / physiology*
  • Neurons / physiology
  • Visual Perception / physiology*