This article presents a competitive learning-based Grey Wolf Optimizer (Clb-GWO) formulated through the introduction of competitive learning strategies to achieve a better trade-off between exploration and exploitation while promoting population diversity through the design of difference vectors. The proposed method integrates population sub-division into majority and minority groups with a dual search system arranged in a selective complementary manner. The proposed Clb-GWO is tested and validated on the recent CEC2020 and CEC2019 benchmarking suites, followed by the optimal training of multi-layer perceptrons (MLPs) on five classification datasets and three function approximation datasets. Clb-GWO is compared against the standard version of GWO, five of its latest variants, and two modern meta-heuristics. The benchmarking results and the MLP training results demonstrate the robustness of Clb-GWO. The proposed method performed competitively against all its competitors, with statistically significant performance on the benchmarking tests. The performance of Clb-GWO on the classification datasets and the function approximation datasets was excellent, with lower error rates and the lowest standard deviations.
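For context, the baseline that Clb-GWO extends is the standard Grey Wolf Optimizer, in which each candidate solution moves toward the three best wolves (alpha, beta, delta) under a linearly decreasing coefficient. The sketch below is a minimal illustration of that standard GWO update only, not of the proposed competitive-learning variant; the function name, parameters, and test objective are illustrative assumptions.

```python
import numpy as np

def gwo_minimize(f, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimal sketch of the standard Grey Wolf Optimizer.
    Illustrates the baseline GWO that Clb-GWO builds on; it does NOT
    implement the proposed competitive-learning strategies."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))   # initial wolf positions
    for t in range(n_iter):
        fitness = np.apply_along_axis(f, 1, X)
        order = np.argsort(fitness)
        # Three best wolves guide the search.
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / n_iter                 # decreases linearly from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])  # distance to this leader
                new += leader - A * D          # candidate position w.r.t. leader
            X[i] = np.clip(new / 3.0, lo, hi)  # average of the three guides
    fitness = np.apply_along_axis(f, 1, X)
    best = np.argmin(fitness)
    return X[best], fitness[best]

# Illustrative usage on the sphere function (minimum 0 at the origin).
best_x, best_f = gwo_minimize(lambda x: np.sum(x**2), dim=5, bounds=(-10, 10))
```

Clb-GWO modifies this scheme by splitting the population into majority and minority groups and applying complementary search rules built from difference vectors, as described in the article.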
Keywords: CEC2020 and CEC2019; Competitive learning-based Grey Wolf Optimizer (Clb-GWO); Grey Wolf Optimizer (GWO); Multi-layer perceptron training.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.