Multi-level Semantic Feature Augmentation for One-shot Learning

IEEE Trans Image Process. 2019 Apr 9. doi: 10.1109/TIP.2019.2910052. Online ahead of print.

Abstract

The ability to quickly recognize and learn new visual concepts from limited samples enables humans to adapt rapidly to new tasks and environments. Humans achieve this by semantically associating novel concepts with those already learned and stored in memory. Computers can begin to acquire a similar ability by exploiting a semantic concept space: a high-dimensional space in which similar abstract concepts lie close together and dissimilar ones lie far apart. In this paper, we propose a novel approach to one-shot learning that builds on this core idea. Our approach learns to map a novel sample instance to a concept, relates that concept to the existing ones in the concept space, and, using these relationships, generates new instances by interpolating among the concepts to aid learning. Instead of synthesizing new image instances, we propose to directly synthesize instance features by leveraging semantics with a novel auto-encoder network we call dual TriNet. The encoder part of the TriNet learns to map multi-layer visual features from a CNN to a semantic vector. In the semantic space, we search for related concepts, which are then projected back into the image feature spaces by the decoder portion of the TriNet. Two augmentation strategies in the semantic space are explored: perturbing the semantic vector with Gaussian noise and substituting its semantic neighbors. Notably, this seemingly simple strategy yields complex augmented feature distributions in the image feature space, leading to substantially better performance. Code and models are released on GitHub: https://github.com/tankche1/Semantic-Feature-Augmentation-in-Few-shot-Learning.
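For illustration, the following is a minimal PyTorch sketch of the encoder-decoder idea described above: an encoder fuses multi-layer CNN features into a semantic vector, the semantic vector is perturbed to produce related semantic points, and per-layer decoders project those points back into the image feature spaces. This is not the authors' released implementation; the layer sizes, the noise scale, and the helper gaussian_augment are illustrative assumptions (see the repository linked above for the actual code).

```python
import torch
import torch.nn as nn

class DualTriNetSketch(nn.Module):
    """Toy encoder-decoder over multi-layer CNN features.

    Encoder: concatenated pooled features from several CNN layers -> semantic vector.
    Decoders: one head per feature layer, mapping a semantic vector back to that
    layer's feature space. All dimensions are illustrative.
    """
    def __init__(self, feat_dims=(256, 512), sem_dim=300):
        super().__init__()
        # Encoder: fuse multi-layer visual features into one semantic vector.
        self.encoder = nn.Sequential(
            nn.Linear(sum(feat_dims), 1024), nn.ReLU(),
            nn.Linear(1024, sem_dim),
        )
        # Decoders: project a semantic vector back to each visual feature layer.
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(sem_dim, 1024), nn.ReLU(), nn.Linear(1024, d))
            for d in feat_dims
        )

    def forward(self, feats):
        z = self.encoder(torch.cat(feats, dim=1))   # visual features -> semantic vector
        recon = [dec(z) for dec in self.decoders]   # semantic vector -> visual features
        return z, recon

def gaussian_augment(z, sigma=0.1, n_aug=4):
    """Perturb a semantic vector with Gaussian noise to obtain nearby
    semantic points for decoding back into feature space (one of the two
    augmentation strategies; sigma and n_aug are assumed values)."""
    return [z + sigma * torch.randn_like(z) for _ in range(n_aug)]

# Usage: augment features of a one-shot sample via the semantic space.
net = DualTriNetSketch()
feats = [torch.randn(1, 256), torch.randn(1, 512)]      # pooled multi-layer CNN features
z, _ = net(feats)
augmented = [[dec(za) for dec in net.decoders]          # decoded augmented features
             for za in gaussian_augment(z)]
```

Decoding a perturbed semantic point through the nonlinear decoders is what turns a simple Gaussian neighborhood in the semantic space into the more complex augmented feature distributions mentioned in the abstract.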