Weighted Gate Layer Autoencoders

IEEE Trans Cybern. 2022 Aug;52(8):7242-7253. doi: 10.1109/TCYB.2021.3049583. Epub 2022 Jul 19.

Abstract

A single dataset can hide a significant number of relationships among its features. Learning these relationships simultaneously avoids the time complexity of running the learning algorithm once for every possible relationship, and gives the learner the ability to recover missing data and substitute erroneous values using the available data. In our previous research, we introduced gate-layer autoencoders (GLAEs), an architecture that enables a single model to approximate multiple relationships simultaneously. A GLAE controls what an autoencoder learns from a time series by switching certain input gates on and off, allowing or blocking the flow of data through the network to increase its robustness. However, GLAE is limited to binary gates. In this article, we generalize the architecture to weighted gate layer autoencoders (WGLAE) by adding a weight layer that scales the error according to which variables are more critical, encouraging the network to learn those variables. This weight layer can also act as an output gate, and it uses additional control parameters to give the network the ability to represent different models that learn by gating the inputs. We compare the architecture against similar architectures in the literature and demonstrate that it produces more robust autoencoders able to reconstruct both incomplete synthetic and real data with high accuracy.
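The abstract does not give implementation details, but the core mechanism it describes (gate the inputs, then weight the per-variable reconstruction error) can be illustrated concisely. The sketch below is a minimal PyTorch interpretation, not the authors' implementation; the class name `WeightedGateAutoencoder`, the network sizes, and the squared-error loss form are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class WeightedGateAutoencoder(nn.Module):
    """Illustrative autoencoder with an input gate layer and a
    per-variable error weight layer (hypothetical names/structure)."""
    def __init__(self, n_features, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x, input_gates):
        # Input gates: 0 blocks a variable, 1 passes it through.
        # A binary mask corresponds to the original GLAE; the weighted
        # generalization allows gate values in [0, 1].
        z = self.encoder(x * input_gates)
        return self.decoder(z)

def weighted_reconstruction_loss(x_hat, x, error_weights):
    # Per-variable squared error, scaled so that more critical
    # variables (larger weights) contribute more to the loss.
    return ((x_hat - x) ** 2 * error_weights).mean()

# Example: gate off the 2nd input variable and weight the error to
# emphasize reconstruction of the first two variables.
model = WeightedGateAutoencoder(n_features=5, n_hidden=3)
x = torch.randn(8, 5)                       # batch of 8 samples
gates = torch.tensor([1., 0., 1., 1., 1.])  # binary gates (GLAE-style)
weights = torch.tensor([2., 2., 1., 1., 1.])
loss = weighted_reconstruction_loss(model(x, gates), x, weights)
loss.backward()
```

Varying the gate pattern across training forces the single network to approximate several input-to-output relationships at once, which is the property the article exploits for reconstructing incomplete data.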

MeSH terms

  • Algorithms*
  • Neural Networks, Computer*