Masked Transformer for Image Anomaly Localization

Int J Neural Syst. 2022 Jul;32(7):2250030. doi: 10.1142/S0129065722500307. Epub 2022 Jun 21.

Abstract

Image anomaly detection consists of detecting images or image portions that are visually different from the majority of the samples in a dataset. The task is of practical importance for various real-life applications such as biomedical image analysis, visual inspection in industrial production, banking, and traffic management. Most current deep learning approaches rely on image reconstruction: the input image is projected into some latent space and then reconstructed, under the assumption that the network (mostly trained on normal data) will not be able to reconstruct the anomalous portions. However, this assumption does not always hold. We thus propose a new model based on the Vision Transformer architecture with patch masking: the input image is split into several patches, and each patch is reconstructed only from the surrounding data, thus ignoring the potentially anomalous information contained in the patch itself. We then show that multi-resolution patches and their collective embeddings provide a large improvement in the model's performance compared to the exclusive use of the traditional square patches. The proposed model has been tested on popular anomaly detection datasets such as MVTec and head CT and achieved results competitive with other state-of-the-art approaches.
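The core idea of the abstract can be sketched as follows: each patch is scored by how well it can be predicted from the *other* patches only, so an anomalous patch, which its normal context cannot explain, receives a high reconstruction error. This is a minimal NumPy sketch in which the mean of the context patches stands in for the masked transformer's prediction; the function names and the toy image are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def split_patches(img, p):
    """Split a square grayscale image into non-overlapping p x p patches."""
    h, w = img.shape
    return [img[r:r + p, c:c + p] for r in range(0, h, p) for c in range(0, w, p)]

def masked_reconstruction_scores(img, p):
    """Score each patch by predicting it ONLY from the remaining patches.

    The mean of the context patches is a crude stand-in for the masked
    transformer, which would attend over the context embeddings instead.
    """
    patches = split_patches(img, p)
    scores = []
    for i, target in enumerate(patches):
        context = [q for j, q in enumerate(patches) if j != i]  # patch i is masked out
        pred = np.mean(context, axis=0)  # transformer prediction would go here
        scores.append(float(np.linalg.norm(target - pred)))
    return scores

# Toy example: a flat image containing one bright square (the "anomaly").
img = np.zeros((8, 8))
img[0:4, 0:4] = 5.0  # anomalous region covering the first 4x4 patch
scores = masked_reconstruction_scores(img, 4)
print(int(np.argmax(scores)))  # → 0: the anomalous patch is the worst-reconstructed
```

Because the anomalous patch is excluded from its own prediction, its content cannot leak into the reconstruction, which is exactly what the patch-masking scheme is designed to guarantee.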

Keywords: Anomaly detection; image inpainting; self-supervised learning; vision transformer.

MeSH terms

  • Image Processing, Computer-Assisted* / methods
  • Tomography, X-Ray Computed* / methods