Autoencoder-based training for multi-illuminant color constancy

J Opt Soc Am A Opt Image Sci Vis. 2022 Jun 1;39(6):1076-1084. doi: 10.1364/JOSAA.457751.

Abstract

Color constancy is an essential component of the human visual system. It enables us to perceive the colors of objects as invariant to the illumination that is present. This ability is difficult to reproduce in software because the underlying problem is ill-posed: for each pixel in the image, we know only the RGB values, which are the product of the spectral characteristics of the illumination, the reflectance of the objects, and the sensitivity of the sensor. To resolve this ambiguity, additional assumptions about the scene have to be made. These assumptions can be either handcrafted or learned with a deep learning technique; however, they mostly work only for single-illuminant images. In this work, we propose a method for learning these assumptions for multi-illuminant scenes using an autoencoder trained to reconstruct the original image by splitting it into its illumination and reflectance components. We then show that the resulting estimate can be used as is, or combined with a clustering method to create a segmentation map of the illuminations. Our method outperforms all tested methods on multi-illuminant scenes while being completely invariant to the number of illuminants.
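The ambiguity described above follows from the standard image formation model of the color constancy literature (a conventional formulation, not an excerpt from the paper):

    \rho_c(\mathbf{x}) = \int_{\omega} I(\mathbf{x}, \lambda)\, R(\mathbf{x}, \lambda)\, S_c(\lambda)\, d\lambda, \qquad c \in \{R, G, B\}

where I is the spectral power distribution of the illumination at pixel x (spatially varying in the multi-illuminant case), R is the surface reflectance, S_c is the sensitivity of sensor channel c, and ω is the visible spectrum. Only the three values ρ_c(x) are observed, so the illumination and reflectance cannot be recovered without further assumptions.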
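A minimal sketch of the reconstruction-based training idea, assuming a PyTorch implementation with an encoder and two decoder heads; the architecture, loss, and all names are illustrative assumptions rather than the authors' actual model:

    import torch
    import torch.nn as nn

    class DecompositionAutoencoder(nn.Module):
        """Encoder with two decoder heads: one predicts a per-pixel
        illumination map, the other a reflectance map; their product
        should reproduce the input image (hypothetical architecture)."""

        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            def head():
                return nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Softplus(),
                )
            self.illumination_head = head()
            self.reflectance_head = head()

        def forward(self, image):
            code = self.encoder(image)
            illumination = self.illumination_head(code)  # spatially varying illuminant
            reflectance = self.reflectance_head(code)    # illumination-free scene content
            return illumination, reflectance

    model = DecompositionAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    images = torch.rand(4, 3, 64, 64)  # stand-in batch of training images
    optimizer.zero_grad()
    illumination, reflectance = model(images)
    reconstruction = illumination * reflectance  # recombine the two components
    loss = nn.functional.mse_loss(reconstruction, images)
    loss.backward()
    optimizer.step()

On its own, such a factorization is ambiguous (any scaling can be traded between the two maps), so working systems typically add further constraints, for example smoothness of the illumination map; the sketch shows only the reconstruction objective named in the abstract.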
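The segmentation step could then look roughly as follows; since the abstract does not name the clustering method, the use of k-means on illuminant chromaticity and the fixed cluster count are assumptions for illustration only:

    import numpy as np
    from sklearn.cluster import KMeans

    def illumination_segmentation(illumination_map, n_clusters=2):
        """Cluster per-pixel illuminant estimates (H, W, 3) into regions,
        yielding one illuminant label per pixel."""
        h, w, _ = illumination_map.shape
        chroma = illumination_map.reshape(-1, 3).astype(np.float64)
        chroma /= chroma.sum(axis=1, keepdims=True) + 1e-8  # intensity-invariant chromaticity
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(chroma)
        return labels.reshape(h, w)

    segmentation = illumination_segmentation(np.random.rand(64, 64, 3))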

MeSH terms

  • Color
  • Color Perception*
  • Humans
  • Lighting*
  • Photic Stimulation / methods