Improving domain generalization by hybrid domain attention and localized maximum sensitivity

Neural Netw. 2024 Mar:171:320-331. doi: 10.1016/j.neunet.2023.12.014. Epub 2023 Dec 14.

Abstract

Domain generalization has attracted much interest in recent years due to its practical application scenarios, in which a model is trained on data from multiple source domains but tested on data from an unseen target domain. Existing domain generalization methods treat all visual features, including irrelevant ones, with the same priority, which easily leads to poor generalization of the trained model. In contrast, human beings have a strong generalization capability and can distinguish images from different domains by focusing on the features relevant to the labels while suppressing irrelevant ones. Motivated by this observation, in this work we propose a channel-wise and spatial-wise hybrid domain attention mechanism that forces the model to focus on the features most strongly associated with the labels. In addition, models that are more robust to small input perturbations are expected to have a higher generalization capability, which is preferable in domain generalization. We therefore propose to reduce the network's localized maximum sensitivity to small input perturbations, improving its robustness and generalization capability. Extensive experiments on the PACS, VLCS, and Office-Home datasets validate the effectiveness of the proposed method.
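The abstract describes two components only at a high level, and the paper's exact formulations are not reproduced here. The following PyTorch sketch is therefore purely illustrative: it assumes a CBAM-style channel-then-spatial attention block for the hybrid domain attention, and it approximates a localized maximum sensitivity penalty by the worst-case output change over a few small random input perturbations. All names (HybridAttention, sensitivity_loss, epsilon, n_samples) are hypothetical, not from the paper.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel-wise attention: pool over spatial dims, reweight channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.fc(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.fc(x.amax(dim=(2, 3)))    # global max pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    """Spatial-wise attention: pool over channels, reweight locations."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class HybridAttention(nn.Module):
    """Channel-wise attention followed by spatial-wise attention."""
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

def sensitivity_loss(model, x, epsilon=1e-2, n_samples=4):
    """Penalize the largest output change under small random input
    perturbations -- a stand-in for a localized maximum sensitivity term."""
    with torch.no_grad():
        y_clean = model(x)                  # reference output, no gradient
    worst = torch.zeros(x.size(0), device=x.device)
    for _ in range(n_samples):
        delta = epsilon * torch.randn_like(x)
        y_pert = model(x + delta)
        diff = (y_pert - y_clean).flatten(1).norm(dim=1)
        worst = torch.maximum(worst, diff)  # track worst case per sample
    return worst.mean()

In training, such a penalty would presumably be added to the classification loss with a weighting hyperparameter, e.g. total = ce_loss + lam * sensitivity_loss(model, images), so that the network is optimized jointly for accuracy and robustness to input perturbations.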

Keywords: Domain attention; Domain generalization; Localized maximum sensitivity.

MeSH terms

  • Generalization, Psychological*
  • Humans
  • Motivation*