Robust Incremental Broad Learning System for Data Streams of Uncertain Scale

IEEE Trans Neural Netw Learn Syst. 2024 May 17:PP. doi: 10.1109/TNNLS.2024.3396659. Online ahead of print.

Abstract

Owing to its strong performance and remarkable scalability, the broad learning system (BLS) has attracted widespread attention. However, its incremental learning suffers from low accuracy and long training times, especially on unstable data streams, which limits its use in real-world scenarios. To overcome these issues and enrich the relevant research, a robust incremental BLS (RI-BLS) is proposed. In this method, the proposed weight update strategy introduces two memory matrices that store the learned information, so that the computation of ridge regression can be decomposed, yielding a precomputed ridge regression. During incremental learning, RI-BLS updates the two memory matrices and renews the weights efficiently via this precomputed ridge regression. In addition, the update strategy is analyzed theoretically in terms of error, time complexity, and space complexity, and compared with existing incremental BLSs. Unlike Greville's method used in the original incremental BLS, its results are closer to the one-shot solution. Compared with existing incremental BLSs, the proposed method exhibits more stable time complexity and superior space complexity. Experiments show that RI-BLS outperforms other incremental BLSs on both stable and unstable data streams. Furthermore, experiments demonstrate that the proposed weight update strategy also applies to other random neural networks.
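The abstract does not give the exact form of the two memory matrices, but a standard way to realize such a precomputed ridge regression is to accumulate the Gram matrix K = Σ Hᵢᵀ Hᵢ and the cross-product M = Σ Hᵢᵀ Yᵢ over data chunks, then solve (K + λI)W = M. The names `K`, `M`, and the class below are illustrative assumptions, not the paper's implementation; the sketch only shows why the incremental result matches the one-shot solution exactly:

```python
import numpy as np

class IncrementalRidge:
    """Illustrative sketch (not the paper's code): ridge regression
    decomposed into two accumulated 'memory' matrices, assumed here
    to be K = sum(H_i^T H_i) and M = sum(H_i^T Y_i)."""

    def __init__(self, n_features, n_outputs, lam=0.1):
        self.lam = lam
        self.K = np.zeros((n_features, n_features))  # memory matrix (Gram)
        self.M = np.zeros((n_features, n_outputs))   # memory matrix (cross-product)

    def partial_fit(self, H, Y):
        # Fold a new data chunk into the memory matrices; no old
        # samples need to be stored, so space cost stays fixed.
        self.K += H.T @ H
        self.M += H.T @ Y
        return self

    def weights(self):
        # Precomputed ridge regression: solve (K + lam*I) W = M.
        d = self.K.shape[0]
        return np.linalg.solve(self.K + self.lam * np.eye(d), self.M)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.normal(size=(100, 5))   # feature/enhancement-node outputs
    Y = rng.normal(size=(100, 2))   # targets

    # Incremental: two chunks folded in one after the other.
    model = IncrementalRidge(5, 2, lam=0.1)
    model.partial_fit(H[:60], Y[:60]).partial_fit(H[60:], Y[60:])

    # One-shot ridge on all data for comparison.
    W_batch = np.linalg.solve(H.T @ H + 0.1 * np.eye(5), H.T @ Y)
    print(np.allclose(model.weights(), W_batch))  # matches the one-shot solution
```

Because the sums K and M are exact (no Moore-Penrose pseudoinverse approximation as in Greville's method), the incremental weights coincide with the one-shot ridge solution up to floating-point error, which is consistent with the abstract's claim that RI-BLS's results are closer to the one-shot calculation.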