Accurate and Efficient Large-Scale Multi-Label Learning With Reduced Feature Broad Learning System Using Label Correlation

IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):10240-10253. doi: 10.1109/TNNLS.2022.3165299. Epub 2023 Nov 30.

Abstract

Multi-label learning for large-scale data is a grand challenge because of the large number of labels and the complex data structure. Existing large-scale multi-label methods therefore either deliver unsatisfactory classification performance or are extremely time-consuming to train on massive amounts of data. A broad learning system (BLS), a flat network with the advantage of a succinct structure, is well suited to large-scale tasks. However, existing BLS models are not directly applicable to large-scale multi-label learning because of the large and complex label space. In this work, a novel multi-label classifier based on BLS (called BLS-MLL) is proposed with two new mechanisms: a kernel-based feature reduction module and correlation-based label thresholding. The kernel-based feature reduction module contains three layers: the feature mapping layer, the enhancement nodes layer, and the feature reduction layer. The feature mapping layer employs elastic-net regularization to mitigate the randomness of the mapped features and thereby improve performance. In the enhancement nodes layer, the kernel method is applied for high-dimensional nonlinear conversion to achieve high efficiency. The newly constructed feature reduction layer further improves both training efficiency and accuracy when facing high-dimensional large-scale data with abundant or noisy information. Correlation-based label thresholding enables BLS-MLL to generate a label-thresholding function that effectively converts the final decision values to logical outputs, thus improving classification performance. Finally, experimental comparisons with six state-of-the-art multi-label classifiers on ten datasets demonstrate the effectiveness of the proposed BLS-MLL. BLS-MLL outperforms the compared algorithms in classification performance in 86% of cases, with better training efficiency in 90% of cases.