Saliency detection for stereoscopic images

IEEE Trans Image Process. 2014 Jun;23(6):2625-36. doi: 10.1109/TIP.2014.2305100.

Abstract

Many saliency detection models for 2D images have been proposed for various multimedia processing applications over the past decades. The emerging applications of stereoscopic displays now require new saliency detection models for salient region extraction. Unlike saliency detection for 2D images, saliency detection for stereoscopic images must take the depth feature into account. In this paper, we propose a novel stereoscopic saliency detection framework based on the feature contrast of color, luminance, texture, and depth. These four types of features are extracted from discrete cosine transform (DCT) coefficients for feature contrast calculation. A Gaussian model of the spatial distance between image patches is adopted so that both local and global contrast are considered. A new fusion method then combines the feature maps into the final saliency map for stereoscopic images. In addition, we adopt the center bias factor and human visual acuity, two important characteristics of the human visual system, to enhance the final saliency map for stereoscopic images. Experimental results on eye-tracking databases show that the proposed model outperforms existing methods.
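The abstract describes a patch-based pipeline: features are computed from the DCT coefficients of image patches, and each patch's saliency is its feature contrast with all other patches, weighted by a Gaussian of their spatial distance. The following is a minimal Python sketch of that core contrast step only, under simplifying assumptions not taken from the paper: a single grayscale image, a single luminance feature (the DC coefficient of each patch's DCT), and a hypothetical sigma parameter for the Gaussian weighting. The paper itself additionally uses color, texture, and depth features, a fusion stage, and center-bias/visual-acuity enhancement.

    # Minimal sketch of Gaussian-weighted patch feature contrast (not the authors'
    # exact implementation). Assumes a 2D grayscale image in [0, 1].
    import numpy as np
    from scipy.fft import dctn

    def patch_contrast_saliency(image, patch=8, sigma=0.25):
        """Per-patch saliency as feature contrast weighted by spatial distance."""
        H, W = image.shape
        gh, gw = H // patch, W // patch
        feats, centers = [], []
        for r in range(gh):
            for c in range(gw):
                block = image[r*patch:(r+1)*patch, c*patch:(c+1)*patch]
                coef = dctn(block, norm='ortho')
                feats.append(coef[0, 0])                         # DC coefficient ~ patch luminance
                centers.append(((r + 0.5) / gh, (c + 0.5) / gw)) # normalized patch center
        feats = np.asarray(feats)
        centers = np.asarray(centers)
        # Pairwise spatial distances between patch centers, and feature differences.
        d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
        w = np.exp(-d**2 / (2 * sigma**2))                       # Gaussian spatial weighting
        contrast = np.abs(feats[:, None] - feats[None, :])       # luminance feature contrast
        sal = (w * contrast).sum(axis=1)
        sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)  # normalize to [0, 1]
        return sal.reshape(gh, gw)

    # Usage on a placeholder image:
    img = np.random.rand(128, 160)
    sal_map = patch_contrast_saliency(img)

A small sigma emphasizes local contrast while a large sigma approaches global contrast, which is the role the Gaussian spatial model plays in the described framework.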

MeSH terms

  • Algorithms*
  • Artificial Intelligence
  • Image Enhancement / methods
  • Image Interpretation, Computer-Assisted / methods*
  • Imaging, Three-Dimensional / methods*
  • Pattern Recognition, Automated / methods*
  • Photogrammetry / methods*
  • Reproducibility of Results
  • Sensitivity and Specificity
  • Subtraction Technique*