ProtoCLIP: Prototypical Contrastive Language Image Pretraining

IEEE Trans Neural Netw Learn Syst. 2023 Dec 4:PP. doi: 10.1109/TNNLS.2023.3335859. Online ahead of print.

Abstract

Contrastive language-image pretraining (CLIP) has received widespread attention since its learned representations transfer well to various downstream tasks. During CLIP training, the InfoNCE objective aligns positive image-text pairs and separates negative ones. We show that this process has an underlying representation grouping effect: the InfoNCE objective indirectly groups semantically similar representations together via randomly emerging within-modal anchors. Based on this understanding, in this article, prototypical contrastive language-image pretraining (ProtoCLIP) is introduced to enhance such grouping by boosting its efficiency and increasing its robustness against the modality gap. Specifically, ProtoCLIP sets up prototype-level discrimination between the image and text spaces, which efficiently transfers higher-level structural knowledge. Furthermore, prototypical back translation (PBT) is proposed to decouple representation grouping from representation alignment, resulting in effective learning of meaningful representations under a large modality gap. PBT also enables us to introduce additional external teachers with richer prior language knowledge. ProtoCLIP is trained with an online episodic training strategy, making it scalable to unlimited amounts of data. We trained ProtoCLIP on Conceptual Captions (CC) and achieved a +5.81% improvement in ImageNet linear probing and a +2.01% improvement in ImageNet zero-shot classification. On the larger YFCC-15M dataset, ProtoCLIP matches the performance of CLIP with 33% of the training time.
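
To make the two objectives mentioned above concrete, here is a minimal PyTorch sketch of (i) the standard CLIP-style InfoNCE loss and (ii) a simplified interpretation of prototype-level discrimination with a back-translation step. All function names (`infonce_loss`, `kmeans_prototypes`, `prototypical_loss`), the toy k-means routine, and the mode-based back translation are illustrative assumptions for exposition, not the authors' reference implementation of ProtoCLIP or PBT.

```python
# Illustrative sketch only; names and the PBT step are assumptions,
# not the ProtoCLIP reference implementation.
import torch
import torch.nn.functional as F


def infonce_loss(img_emb, txt_emb, temperature=0.07):
    """CLIP-style InfoNCE: align matching image-text pairs, separate the rest."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature            # (B, B) similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2


def kmeans_prototypes(emb, k, iters=10):
    """Toy k-means giving per-modality prototypes and hard assignments."""
    emb = F.normalize(emb, dim=-1)
    centers = emb[torch.randperm(emb.size(0))[:k]].clone()
    for _ in range(iters):
        assign = (emb @ centers.t()).argmax(dim=1)           # nearest prototype
        for c in range(k):
            members = emb[assign == c]
            if members.numel() > 0:
                centers[c] = F.normalize(members.mean(dim=0), dim=-1)
    return centers, assign


def prototypical_loss(img_emb, txt_emb, k=16, temperature=0.1):
    """Prototype-level discrimination with an (assumed) back-translation step:
    targets come from the other modality's cluster assignments, translated back
    into this modality's prototype space, so grouping does not require the two
    embedding spaces to be directly aligned."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    img_protos, img_assign = kmeans_prototypes(img_emb.detach(), k)
    _, txt_assign = kmeans_prototypes(txt_emb.detach(), k)

    # Back translation (illustrative): map each text cluster to the image
    # prototype its paired images most often fall into.
    translated = torch.zeros(k, dtype=torch.long, device=img_emb.device)
    for c in range(k):
        paired = img_assign[txt_assign == c]
        if paired.numel() > 0:
            translated[c] = paired.mode().values
    img_targets = translated[txt_assign]                     # text clusters -> image prototypes

    img_logits = img_emb @ img_protos.t() / temperature
    return F.cross_entropy(img_logits, img_targets)


if __name__ == "__main__":
    B, D = 256, 64
    img, txt = torch.randn(B, D), torch.randn(B, D)
    print(infonce_loss(img, txt).item(), prototypical_loss(img, txt).item())
```

In this sketch, the prototype-level targets are defined within a single modality after translation, which is one way to read the abstract's claim that PBT decouples representation grouping from cross-modal alignment and thus tolerates a large modality gap.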