Improved Panoramic Representation via Bidirectional Recurrent View Aggregation for Three-Dimensional Model Retrieval

IEEE Comput Graph Appl. 2019 Mar-Apr;39(2):65-76. doi: 10.1109/MCG.2018.2884861. Epub 2018 Dec 6.

Abstract

In a view-based three-dimensional (3-D) model retrieval task, extracting discriminative high-level features of models from projected images is considered an effective approach. The challenge of view-based 3-D shape retrieval is that the shape information in each view is limited because of the information lost during projection. Traditional methods in this direction mostly convert the model into a single panoramic view, which makes the original shape hard to recognize. To resolve this problem, we propose a novel deep neural network, the recurrent panorama network (RePanoNet), which learns to build a panoramic representation from view sequences. A view sequence is rendered along a circle around the model to provide sufficient panoramic information. For each view sequence, RePanoNet employs a bidirectional long short-term memory to capture the spatial correlations between adjacent views and construct a panoramic feature. In our experiments on ModelNet and ShapeNet Core55, RePanoNet outperforms state-of-the-art methods, which demonstrates its effectiveness.
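The aggregation step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: a plain tanh RNN stands in for the LSTM cell, the per-view inputs are assumed to be feature vectors already extracted from the rendered views, and the max-pooling over time steps is an assumed choice for collapsing the sequence into one panoramic descriptor.

```python
import numpy as np

def simple_rnn(views, W_x, W_h, b):
    """Run a plain tanh RNN (stand-in for an LSTM) over a view sequence.

    views: (T, d_in) array of per-view feature vectors.
    Returns the (T, d_h) array of hidden states.
    """
    h = np.zeros(b.shape[0])
    states = []
    for x in views:
        h = np.tanh(x @ W_x + h @ W_h + b)
        states.append(h)
    return np.stack(states)

def bidirectional_panoramic_feature(views, params_fwd, params_bwd):
    """Aggregate a circular view sequence into one panoramic descriptor.

    A forward pass and a backward pass over the sequence are concatenated
    per view, so each step sees context from both neighbors, then the
    sequence is max-pooled into a single fixed-length vector.
    """
    fwd = simple_rnn(views, *params_fwd)
    bwd = simple_rnn(views[::-1], *params_bwd)[::-1]  # reverse back to align
    per_view = np.concatenate([fwd, bwd], axis=1)      # (T, 2 * d_h)
    return per_view.max(axis=0)                        # (2 * d_h,)

# Toy usage with random weights (hypothetical sizes: 12 views, 64-d inputs).
rng = np.random.default_rng(0)
T, d_in, d_h = 12, 64, 32
views = rng.normal(size=(T, d_in))
make_params = lambda: (0.1 * rng.normal(size=(d_in, d_h)),
                       0.1 * rng.normal(size=(d_h, d_h)),
                       np.zeros(d_h))
feature = bidirectional_panoramic_feature(views, make_params(), make_params())
```

Because the views are rendered on a closed circle, the backward pass gives each view access to the neighbors the forward pass has not yet reached, which is what lets the concatenated states encode correlations between adjacent views.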

Publication types

  • Research Support, Non-U.S. Gov't