Federated Discriminative Representation Learning for Image Classification

IEEE Trans Neural Netw Learn Syst. 2023 Dec 6:PP. doi: 10.1109/TNNLS.2023.3336957. Online ahead of print.

Abstract

Acquiring large-scale datasets to improve the performance of deep models has become one of the most critical problems in representation learning (RL), and addressing it is the core potential of the emerging paradigm of federated learning (FL). However, most current FL models concentrate on learning a single identical model for all isolated clients and thus fail to make full use of the data specificity of individual clients. To enhance the classification performance of each client, this study introduces FDRL, a federated discriminative RL model that partitions the data features of each client into a global subspace and a local subspace. More specifically, FDRL learns a global representation for federated communication among the isolated clients, which captures features common to all protected datasets via model sharing, and local representations for personalization in each client, which preserve client-specific features via model differentiation. Toward this goal, FDRL in each client trains a shared submodel for federated communication and, in parallel, a non-shared submodel for locality preservation; the two submodels partition the client feature space by maximizing their differences, and a linear model fed with the combined features performs image classification. The proposed model is implemented with neural networks and optimized iteratively between the server, which computes the global model, and the clients, which learn the local classifiers. Thanks to its powerful capability for local feature preservation, FDRL yields more discriminative data representations than the compared FL models. Experimental results on public datasets demonstrate that FDRL benefits from the subspace partition and achieves better performance on federated image classification than state-of-the-art FL models.
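To make the described architecture concrete, the following is a minimal sketch, in plain numpy, of the structure the abstract outlines: each client holds a shared ("global") encoder that is averaged on the server, a private ("local") encoder that never leaves the client, and a linear classifier over the concatenated features. All names, the tanh encoders, the FedAvg-style averaging, and the Frobenius-norm difference penalty are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(W, x):
    """Linear encoder followed by a tanh nonlinearity (illustrative choice)."""
    return np.tanh(x @ W)

def client_forward(x, W_g, W_l, W_c):
    """Concatenate global and local features, then classify linearly."""
    z = np.concatenate([encode(W_g, x), encode(W_l, x)], axis=1)
    return z @ W_c  # class scores

def subspace_difference_penalty(W_g, W_l):
    """One way to push the two encoders toward different subspaces:
    penalize the squared Frobenius norm of W_g^T W_l (smaller = more
    different). A hypothetical stand-in for the paper's objective."""
    return np.sum((W_g.T @ W_l) ** 2)

def server_aggregate(shared_weights):
    """FedAvg-style aggregation: only the shared encoders are averaged;
    local encoders and classifiers stay on their clients."""
    return np.mean(shared_weights, axis=0)

# Toy round with 3 clients, input dim 8, feature dim 4, 3 classes.
d, k, c = 8, 4, 3
clients = [{"W_g": rng.normal(size=(d, k)),
            "W_l": rng.normal(size=(d, k)),
            "W_c": rng.normal(size=(2 * k, c))} for _ in range(3)]

# Server averages the shared encoders and broadcasts the result.
W_global = server_aggregate([cl["W_g"] for cl in clients])
for cl in clients:
    cl["W_g"] = W_global

x = rng.normal(size=(5, d))
scores = client_forward(x, **clients[0])
print(scores.shape)  # (5, 3)
```

In this sketch, a training loop would alternate local updates (classification loss plus the difference penalty) with server-side averaging of `W_g` only, matching the iterative server/client optimization the abstract describes.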