Altruistic Collaborative Learning

IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):1954-1964. doi: 10.1109/TNNLS.2022.3185961. Epub 2024 Feb 5.

Abstract

This article proposes a new learning paradigm based on the concept of concordant gradients for ensemble learning strategies. In this paradigm, learners update their weights if and only if the gradients of their cost functions are mutually concordant, in a sense defined in the paper. The objective of the proposed concordant optimization framework is robustness against uncertainties, achieved by postponing to a later epoch the consideration of examples associated with discordant directions during the training phase. Concordance-constrained collaboration is shown to be relevant, especially in intricate classification problems where exclusive class labeling involves information bias due to correlated disturbances affecting almost all training examples. The first learning paradigm applies to a gradient descent strategy based on allied agents, subjected to concordance checking before moving forward in training epochs. The second learning paradigm relates to multivariate dense neural matrix fusion, where the fusion operator is itself a learnable neural operator. In addition to these paradigms, this article proposes a new categorical probability transform to enrich the existing collection and an alternative scenario for integrating penalized SoftMax information. Finally, this article assesses the relevance of the above contributions with respect to several deep learning frameworks and a collaborative classification task involving dependent classes.
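The abstract specifies only that learners update if and only if their gradients are mutually concordant, without giving the criterion here. The following is a minimal sketch, assuming concordance means positive pairwise cosine similarity between flattened gradient vectors (the paper's exact criterion may differ); `gradients_concordant`, `concordant_step`, and the `deferred` batch list are hypothetical names introduced for illustration, not the authors' API.

```python
import torch
import torch.nn.functional as F

def gradients_concordant(grads, threshold=0.0):
    """Return True if every pair of learners' gradients is concordant.

    Assumption: concordance is approximated by a pairwise cosine
    similarity above `threshold` between flattened gradient vectors.
    """
    flat = [torch.cat([g.reshape(-1) for g in gs]) for gs in grads]
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            if F.cosine_similarity(flat[i], flat[j], dim=0) <= threshold:
                return False
    return True

def concordant_step(models, losses, optimizers, deferred, batch):
    """Update all learners only when their gradients agree; otherwise
    defer the batch so it can be reconsidered in a later epoch."""
    grads = [torch.autograd.grad(loss, model.parameters(), retain_graph=True)
             for model, loss in zip(models, losses)]
    if gradients_concordant(grads):
        for model, gs, opt in zip(models, grads, optimizers):
            for p, g in zip(model.parameters(), gs):
                p.grad = g
            opt.step()
            opt.zero_grad()
    else:
        deferred.append(batch)  # postponed to a later epoch, per the abstract
```

The second paradigm, a learnable fusion operator over the learners' outputs, could likewise be sketched as a small trainable module; `FusionHead` below is a hypothetical architecture, not the paper's neural matrix fusion operator.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Hypothetical learnable fusion: concatenates the class-score
    matrices of several learners and maps them to one prediction."""
    def __init__(self, n_learners, n_classes):
        super().__init__()
        self.fuse = nn.Linear(n_learners * n_classes, n_classes)

    def forward(self, outputs):
        # outputs: list of (batch, n_classes) tensors, one per learner
        return self.fuse(torch.cat(outputs, dim=1))
```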