Neural Net-Enhanced Competitive Swarm Optimizer for Large-Scale Multiobjective Optimization

IEEE Trans Cybern. 2023 Jul 24:PP. doi: 10.1109/TCYB.2023.3287596. Online ahead of print.

Abstract

The competitive swarm optimizer (CSO) classifies swarm particles into loser and winner particles and then uses the winner particles to efficiently guide the search of the loser particles. This approach has shown very promising performance in solving large-scale multiobjective optimization problems (LMOPs). However, most studies of CSOs ignore the evolution of the winner particles, even though their quality is crucial to the final optimization performance. To fill this research gap, this article proposes a new neural net-enhanced CSO for solving LMOPs, called NN-CSO, which not only guides the loser particles via the original CSO strategy but also evolves the winner particles with a trained neural network (NN) model. First, the swarm particles are classified into winner and loser particles by pairwise competition. Then, the loser particles and winner particles are treated as the input and desired output, respectively, to train the NN model, which tries to learn promising evolutionary dynamics by driving the loser particles toward the winners. Finally, once model training is complete, the winner particles are evolved by the trained NN model, while the loser particles are still guided by the winner particles to maintain the search pattern of CSOs. To evaluate the performance of the designed NN-CSO, several LMOPs with up to ten objectives and 1000 decision variables are adopted. The experimental results show that the designed NN model can significantly improve the performance of CSOs, and that NN-CSO holds some advantages over several state-of-the-art large-scale multiobjective evolutionary algorithms as well as over model-based evolutionary algorithms.
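
Below is a minimal NumPy sketch of the workflow the abstract describes (pairwise competition, training an NN to map loser positions toward winner positions, CSO-style updates for losers, NN-driven updates for winners). All concrete choices here are assumptions for illustration, not the paper's method: the network architecture (`TwoLayerNet`), learning rate, number of training steps, the CSO velocity update form, and especially the single-objective `sphere` fitness used for the competition (the paper addresses multiobjective problems and uses its own competition mechanism and model details).

```python
# Illustrative sketch only; names, hyperparameters, and the single-objective
# fitness are assumptions, not the paper's actual NN-CSO implementation.
import numpy as np

rng = np.random.default_rng(0)


class TwoLayerNet:
    """Tiny MLP trained to map loser positions toward winner positions."""

    def __init__(self, dim, hidden=32, lr=1e-3):
        self.W1 = rng.standard_normal((dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, dim)) * 0.1
        self.b2 = np.zeros(dim)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def train_step(self, X, Y):
        # One gradient-descent step on the mean-squared error ||f(X) - Y||^2.
        pred = self.forward(X)
        err = pred - Y
        n = X.shape[0]
        gW2 = self.h.T @ err / n
        gb2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)
        gW1 = X.T @ dh / n
        gb1 = dh.mean(axis=0)
        self.W2 -= self.lr * gW2
        self.b2 -= self.lr * gb2
        self.W1 -= self.lr * gW1
        self.b1 -= self.lr * gb1


def nn_cso_generation(pos, vel, fitness, net, phi=0.1):
    """One generation: pairwise competition, NN training, winner/loser updates."""
    n, dim = pos.shape                     # assumes an even swarm size
    pairs = rng.permutation(n).reshape(-1, 2)
    f = fitness(pos)

    # Pairwise competition: lower fitness wins (minimization).
    first_wins = f[pairs[:, 0]] <= f[pairs[:, 1]]
    winners = np.where(first_wins, pairs[:, 0], pairs[:, 1])
    losers = np.where(first_wins, pairs[:, 1], pairs[:, 0])

    # Train the NN with loser positions as inputs and winner positions as
    # targets, so it learns a mapping that pushes solutions toward winners.
    for _ in range(20):
        net.train_step(pos[losers], pos[winners])

    # Losers keep the classic CSO update: learn from their paired winners.
    r1, r2, r3 = rng.random((3, len(losers), dim))
    mean_pos = pos.mean(axis=0)
    vel[losers] = (r1 * vel[losers]
                   + r2 * (pos[winners] - pos[losers])
                   + phi * r3 * (mean_pos - pos[losers]))
    pos[losers] += vel[losers]

    # Winners are evolved by the trained NN instead of staying unchanged.
    pos[winners] = net.forward(pos[winners])
    return pos, vel


if __name__ == "__main__":
    dim, swarm = 100, 60
    pos = rng.uniform(-5, 5, size=(swarm, dim))
    vel = np.zeros_like(pos)
    sphere = lambda X: (X ** 2).sum(axis=1)   # stand-in fitness for the demo
    net = TwoLayerNet(dim)
    for _ in range(200):
        pos, vel = nn_cso_generation(pos, vel, sphere, net)
    print("best sphere value:", sphere(pos).min())
```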