Contrastive Adversarial Domain Adaptation Networks for Speaker Recognition

IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2236-2245. doi: 10.1109/TNNLS.2020.3044215. Epub 2022 May 2.

Abstract

Domain adaptation aims to reduce the mismatch between the source and target domains. Domain adversarial networks (DANs) have recently been proposed to incorporate adversarial learning into deep neural networks to create a domain-invariant space. However, a major drawback of DANs is that it is difficult to find a domain-invariant space using a single feature extractor. In this article, we propose to split the feature extractor into two contrastive branches, with one branch responsible for class dependence in the latent space and the other focusing on domain invariance. The feature extractor achieves these contrastive goals by sharing the first and last hidden layers while keeping decoupled branches in the middle hidden layers. To encourage the feature extractor to produce class-discriminative embedded features, the label predictor is adversarially trained to produce equal posterior probabilities across all of its outputs instead of one-hot outputs. We refer to the resulting domain adaptation network as the contrastive adversarial domain adaptation network (CADAN). We evaluated the domain invariance of the embedded features via a series of speaker identification experiments under both clean and noisy conditions. Results demonstrate that the embedded features produced by CADAN lead to a 33% improvement in speaker identification accuracy over the conventional DAN.
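As a rough illustration of the architecture described in the abstract, the sketch below shows a feature extractor with shared first and last hidden layers and two decoupled middle branches, together with an adversarial objective that pushes the label predictor toward equal posterior probabilities. All layer sizes, the concatenation-based fusion at the shared last layer, and the KL-to-uniform loss are assumptions made for illustration only; they are not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveFeatureExtractor(nn.Module):
    # Sketch of the split feature extractor: the first and last hidden
    # layers are shared, while the middle layers form two decoupled
    # branches (class-dependent vs. domain-invariant).
    def __init__(self, input_dim=40, hidden_dim=256, embed_dim=128):
        super().__init__()
        self.shared_first = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.class_branch = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.domain_branch = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        # Fusing the two branches by concatenation before the shared last
        # hidden layer is an assumption made for this illustration.
        self.shared_last = nn.Linear(2 * hidden_dim, embed_dim)

    def forward(self, x):
        h = self.shared_first(x)
        z = torch.cat([self.class_branch(h), self.domain_branch(h)], dim=-1)
        return self.shared_last(z)


def equal_posterior_loss(label_logits):
    # Adversarial target for the label predictor: drive the speaker
    # posterior toward a uniform distribution (equal probability for
    # every class). A KL divergence to the uniform distribution is one
    # possible realisation, assumed here for illustration.
    log_probs = F.log_softmax(label_logits, dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / label_logits.size(-1))
    return F.kl_div(log_probs, uniform, reduction="batchmean")


if __name__ == "__main__":
    extractor = ContrastiveFeatureExtractor()
    label_predictor = nn.Linear(128, 10)   # 10 speaker classes, hypothetical
    feats = extractor(torch.randn(8, 40))  # batch of 8 acoustic feature vectors
    print(equal_posterior_loss(label_predictor(feats)))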

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Learning
  • Neural Networks, Computer*
  • Recognition, Psychology*