Saliency detection of textured 3D models based on multi-view information and texel descriptor

PeerJ Comput Sci. 2023 Oct 25;9:e1584. doi: 10.7717/peerj-cs.1584. eCollection 2023.

Abstract

Saliency-driven mesh simplification methods have shown promising results in preserving visual detail, but effective simplification requires accurate 3D saliency maps. Conventional mesh saliency detection methods may fail to capture salient regions in textured 3D models. To address this issue, we propose a novel saliency detection method that fuses saliency maps computed from multi-view projections of the textured model. Specifically, we introduce a texel descriptor that combines local convexity and chromatic aberration to capture texel saliency at multiple scales. Furthermore, we construct a novel dataset of human eye fixations on textured models, which serves as an objective benchmark for evaluation. Experimental results demonstrate that our saliency-driven method outperforms existing approaches on several evaluation metrics. The source code is available at https://github.com/bkballoon/mvsm-fusion and the dataset at DOI 10.5281/zenodo.8131602.
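To make the texel-descriptor idea concrete, the sketch below shows one plausible way to combine local convexity (a geometric cue from vertex normals) with chromatic aberration (color contrast against the neighborhood) at several neighborhood scales. This is an illustrative reconstruction only, not the authors' exact formulation; the function name, the neighborhood-growing scheme, and the equal weighting of the two cues are all assumptions.

```python
import numpy as np

def texel_descriptor(positions, normals, colors, neighbors, scales=(1, 2)):
    """Hypothetical per-vertex texel saliency: local convexity plus
    chromatic aberration, averaged over neighborhood scales.
    `neighbors` maps each vertex index to its 1-ring vertex indices."""
    n = len(positions)
    saliency = np.zeros(n)
    for s in scales:
        for v in range(n):
            ring = set(neighbors[v])
            for _ in range(s - 1):  # grow the neighborhood once per extra scale
                ring |= {w for u in ring for w in neighbors[u]}
            ring = [u for u in ring if u != v]
            if not ring:
                continue
            # local convexity: mean projection of edge vectors onto the normal
            offsets = positions[ring] - positions[v]
            convexity = np.mean(offsets @ normals[v])
            # chromatic aberration: color distance to the neighborhood mean
            chroma = np.linalg.norm(colors[v] - colors[ring].mean(axis=0))
            saliency[v] += abs(convexity) + chroma
    return saliency / len(scales)

# Toy flat quad mesh: one red vertex among black neighbors stands out.
positions = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
colors = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], float)
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
sal = texel_descriptor(positions, normals, colors, neighbors, scales=(1,))
```

On this flat toy mesh the convexity term vanishes, so the color-outlier vertex receives the highest score; on a real mesh both cues contribute.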

Keywords: Computer graphics; Computer vision; Dataset; Human eye fixation; Image saliency; Mesh saliency; Multi-view; Perception; Region of interest; Textured model saliency.

Grants and funding

This work was supported by the National Natural Science Foundation of China under Grant U19A2063 and by the Jilin Provincial Science & Technology Development Program of China under Grant 20230201080GX. There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.