Dense and Sparse Reconstruction Error Based Saliency Descriptor

IEEE Trans Image Process. 2016 Apr;25(4):1592-603. doi: 10.1109/TIP.2016.2524198.

Abstract

In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction error. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. First, we compute dense and sparse reconstruction errors on the background templates for each image region. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, the pixel-level reconstruction error is computed by integrating multi-scale reconstruction errors. Both the pixel-level dense and sparse reconstruction errors are then weighted by image compactness, which yields more accurate saliency detection. In addition, we introduce a novel Bayesian integration method to combine saliency maps, which is applied to integrate the two saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against 24 state-of-the-art methods in terms of precision, recall, and F-measure on three public benchmark salient object detection databases.
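
The following is a minimal sketch (not the authors' released code) of the core idea stated in the abstract: each image region is reconstructed from background templates taken at the image boundary, and regions with high reconstruction error are treated as more salient. The use of PCA for the dense appearance model, orthogonal matching pursuit for the sparse model, and all names and parameters below are illustrative assumptions, not details given in the paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import OrthogonalMatchingPursuit

def dense_reconstruction_error(regions, background, n_components=8):
    # Fit a PCA subspace to the background templates and measure how poorly
    # each region is reconstructed by projection onto that subspace.
    # regions:    (n_regions, d) superpixel feature vectors
    # background: (n_bg, d) feature vectors of boundary superpixels
    k = min(n_components, background.shape[0], background.shape[1])
    pca = PCA(n_components=k)
    pca.fit(background)
    recon = pca.inverse_transform(pca.transform(regions))
    return np.sum((regions - recon) ** 2, axis=1)

def sparse_reconstruction_error(regions, background, n_nonzero=5):
    # Reconstruct each region as a sparse combination of background templates
    # (dictionary columns) and return the squared residual.
    omp = OrthogonalMatchingPursuit(
        n_nonzero_coefs=min(n_nonzero, background.shape[0]),
        fit_intercept=False,
    )
    errors = np.empty(regions.shape[0])
    for i, x in enumerate(regions):
        omp.fit(background.T, x)            # atoms are the background templates
        recon = background.T @ omp.coef_
        errors[i] = np.sum((x - recon) ** 2)
    return errors

# Toy usage with random features standing in for superpixel descriptors.
rng = np.random.default_rng(0)
background = rng.normal(size=(30, 7))       # boundary superpixels
regions = rng.normal(size=(200, 7))         # all superpixels in the image
dense_err = dense_reconstruction_error(regions, background)
sparse_err = sparse_reconstruction_error(regions, background)
# Normalizing each error to [0, 1] gives two provisional region-level saliency
# measures, which the paper then propagates via K-means contexts, assembles
# across scales, weights by compactness, and fuses with Bayesian integration.
dense_sal = (dense_err - dense_err.min()) / (dense_err.max() - dense_err.min() + 1e-12)
sparse_sal = (sparse_err - sparse_err.min()) / (sparse_err.max() - sparse_err.min() + 1e-12)
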

Publication types

  • Research Support, Non-U.S. Gov't