Self-Reinforcing Unsupervised Matching

IEEE Trans Pattern Anal Mach Intell. 2022 Aug;44(8):4404-4418. doi: 10.1109/TPAMI.2021.3061945. Epub 2022 Jul 1.

Abstract

Remarkable gains in deep learning usually benefit from large-scale supervised data. Ensuring intra-class modality diversity in the training set is critical for the generalization capability of cutting-edge deep models, but it burdens humans with heavy manual labor for data collection and annotation. In addition, some rare or unexpected modalities are new to the current model, causing reduced performance on such emerging modalities. Inspired by achievements in speech recognition, psychology, and behavioristics, we present a practical solution, self-reinforcing unsupervised matching (SUM), to annotate images with the 2D structure-preserving property in an emerging modality via cross-modality matching. Specifically, we propose a dynamic programming algorithm, dynamic position warping (DPW), to reveal the underlying element correspondences between two matrix-form data samples in an order-preserving fashion, and devise a local feature adapter (LoFA) to enable cross-modality similarity measurement. On this basis, we develop a two-tier self-reinforcing learning mechanism at both the feature level and the image level to optimize the LoFA. The proposed SUM framework requires no supervision in the emerging modality and only one template in the seen modality, providing a promising route towards incremental learning and continual learning. Extensive experimental evaluation on two challenging one-template visual matching tasks demonstrates its efficiency and superiority.
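
The abstract describes DPW only at a high level. As a rough illustration of what order-preserving correspondence recovery via dynamic programming looks like, the sketch below implements a classic DTW-style alignment over 1D sequences of feature vectors. The function name order_preserving_align, the Euclidean cost, and the toy inputs are assumptions made for illustration; the paper's DPW operates on 2D matrix-form data, and its actual recurrence is not given in this record.

    # Hypothetical sketch: order-preserving alignment via dynamic programming,
    # in the spirit of DTW. This is NOT the paper's DPW recurrence (which
    # handles matrix-form 2D data); it only illustrates the 1D analogue.
    import numpy as np

    def order_preserving_align(A, B):
        """Align two sequences of feature vectors A (n, d) and B (m, d).

        Returns the minimal cumulative cost and the element correspondences.
        The monotone DP recurrence is what enforces order preservation.
        """
        n, m = len(A), len(B)
        # Pairwise Euclidean distances between all elements of A and B.
        cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i, j] = cost[i - 1, j - 1] + min(
                    D[i - 1, j - 1],  # match i-1 with j-1
                    D[i - 1, j],      # advance in A
                    D[i, j - 1],      # advance in B
                )
        # Backtrack to recover the warping path (the correspondence relation).
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return D[n, m], path[::-1]

    # Toy usage: align two short sequences of 2-D features.
    A = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
    B = np.array([[0.1, 0.0], [0.9, 1.1], [2.1, 1.9], [2.0, 2.0]])
    total_cost, matches = order_preserving_align(A, B)
    print(total_cost, matches)

In the full SUM framework, a learned component in the role of the LoFA would replace the fixed Euclidean cost above, so that the similarity measure itself adapts across modalities.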

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Humans