Deep-Growing Neural Network With Manifold Constraints for Hyperspectral Image Classification

IEEE Trans Neural Netw Learn Syst. 2023 Jul 12:PP. doi: 10.1109/TNNLS.2023.3292537. Online ahead of print.

Abstract

In the absence of sufficient labels, deep neural networks (DNNs) are prone to overfitting, which degrades performance and makes training difficult. Many semisupervised methods therefore exploit unlabeled samples to compensate for the shortage of labels. However, as the pool of available pseudolabels grows, the fixed structure of traditional models struggles to accommodate it, limiting their effectiveness. Therefore, a deep-growing neural network with manifold constraints (DGNN-MC) is proposed, which deepens its network structure as the pool of high-quality pseudolabels expands and preserves the local structure between the original and high-dimensional data during semisupervised learning. First, the framework filters the output of the shallow network to obtain high-confidence pseudolabeled samples and adds them to the original training set, forming a new, enlarged training set. Second, according to the size of this new training set, it increases the network depth and retrains. Finally, it obtains new pseudolabeled samples and deepens the network again, repeating until growth is complete. The proposed growing scheme can be applied to other multilayer networks, since their depth can likewise be adjusted. Taking hyperspectral image (HSI) classification, a naturally semisupervised problem, as an example, the experimental results demonstrate the superiority and effectiveness of our method, which mines more reliable information for better utilization and balances the growing amount of labeled data against the network's learning capacity.
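The grow-and-pseudolabel loop described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' code: the nearest-centroid "model", the confidence threshold `tau`, and the `depth_for` growth rule are all hypothetical stand-ins for the real DNN, its confidence filter, and the paper's depth schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y, n_classes):
    """Stand-in for training a network of the current depth."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict_proba(centroids, X):
    """Softmax over negative distances: a stand-in for network confidences."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def depth_for(n_train, base=2, per=50):
    """Hypothetical growth rule: one extra layer per `per` training samples."""
    return base + n_train // per

# Toy two-class data: a small labeled seed set plus an unlabeled pool.
X_lab = rng.normal(loc=[[0, 0]] * 20 + [[4, 4]] * 20, scale=0.5)
y_lab = np.array([0] * 20 + [1] * 20)
X_unl = rng.normal(loc=[[0, 0]] * 100 + [[4, 4]] * 100, scale=0.5)

tau = 0.9        # confidence threshold for admitting pseudolabels (assumed)
history = []     # (depth, training-set size) per growth round
for round_ in range(3):
    depth = depth_for(len(y_lab))          # deepen as the pool expands
    history.append((depth, len(y_lab)))
    model = fit_centroids(X_lab, y_lab, n_classes=2)
    if len(X_unl) == 0:                    # growth completed: pool exhausted
        break
    proba = predict_proba(model, X_unl)
    conf = proba.max(axis=1)
    keep = conf >= tau                     # keep only high-confidence samples
    X_lab = np.vstack([X_lab, X_unl[keep]])
    y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    X_unl = X_unl[~keep]
```

Each round, the confidence filter admits new pseudolabeled samples, and `depth_for` maps the enlarged training set to a deeper architecture before retraining, so model capacity tracks the amount of (pseudo)labeled data rather than staying fixed.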