Class-Variant Margin Normalized Softmax Loss for Deep Face Recognition

IEEE Trans Neural Netw Learn Syst. 2021 Oct;32(10):4742-4747. doi: 10.1109/TNNLS.2020.3017528. Epub 2021 Oct 5.

Abstract

In deep face recognition, the commonly used softmax loss and its recently proposed variants do not adequately handle the class-imbalance and softmax-saturation issues that arise during training while extracting discriminative features. In this brief, to address both issues, we propose a class-variant margin (CVM) normalized softmax loss that introduces a true-class margin and a false-class margin into the cosine space of the angle between the feature vector and the class-weight vector. The true-class margin alleviates the class-imbalance problem, and the false-class margin postpones the early individual saturation of softmax. The new loss adds negligible computational overhead during training and is easy to implement in common deep learning frameworks. Comprehensive experiments on the LFW, YTF, and MegaFace protocols demonstrate the effectiveness of the proposed CVM loss function.
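The abstract describes margins applied in cosine space to the true-class and false-class logits of a normalized softmax. The following is a minimal single-sample sketch of that idea, not the paper's exact formulation: the scale `s`, the margin values, and the function name are illustrative assumptions, and in the actual CVM loss the true-class margin is class-variant (e.g., dependent on per-class sample counts) rather than a single constant.

```python
import math

def cvm_softmax_loss(cosines, y, s=30.0, m_true=0.35, m_false=0.1):
    """Sketch of a margin-modified normalized softmax loss for one sample.

    cosines[j] is cos(theta_j) between the L2-normalized feature and the
    L2-normalized weight of class j; y is the true-class index; s is the
    feature scale. The true-class margin m_true is subtracted from the
    true-class cosine (making the true class harder to satisfy), and a
    false-class margin m_false is added to each false-class cosine
    (keeping false classes competitive, which delays early softmax
    saturation). Margin values here are hypothetical placeholders.
    """
    logits = [
        s * (c - m_true) if j == y else s * (c + m_false)
        for j, c in enumerate(cosines)
    ]
    # Numerically stable cross-entropy: -log softmax(logits)[y]
    zmax = max(logits)
    log_sum = zmax + math.log(sum(math.exp(z - zmax) for z in logits))
    return log_sum - logits[y]
```

Because both margins shrink the gap between the true-class logit and the false-class logits, the loss on a correctly ranked sample stays above its margin-free value, so gradients keep flowing where plain softmax would already have saturated.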

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Automated Facial Recognition / methods
  • Automated Facial Recognition / trends*
  • Deep Learning / trends*
  • Humans
  • Neural Networks, Computer*