3D Question Answering

IEEE Trans Vis Comput Graph. 2024 Mar;30(3):1772-1786. doi: 10.1109/TVCG.2022.3225327. Epub 2024 Jan 30.

Abstract

Visual question answering (VQA) has made tremendous progress in recent years. However, most efforts have focused only on 2D image question-answering tasks. In this article, we extend VQA to its 3D counterpart, 3D question answering (3DQA), which facilitates a machine's perception of real-world 3D scenarios. Unlike 2D image VQA, 3DQA takes a colored point cloud as input and requires both appearance and 3D geometric comprehension to answer 3D-related questions. To this end, we propose a novel transformer-based 3DQA framework, "3DQA-TR", which consists of two encoders that exploit the appearance and geometry information, respectively. The multi-modal information about appearance, geometry, and the linguistic question then attends to each other via a 3D-Linguistic BERT to predict the target answers. To verify the effectiveness of the proposed 3DQA framework, we further develop the first 3DQA dataset, "ScanQA", which builds on the ScanNet dataset and contains over 10K question-answer pairs for 806 scenes. To the best of our knowledge, ScanQA is the first large-scale, fully human-annotated dataset with natural-language questions and free-form answers in 3D environments. We also use several visualizations and experiments to investigate the diversity of the collected questions and the significant differences between this task and both 2D VQA and 3D captioning. Extensive experiments on this dataset demonstrate the clear superiority of our proposed 3DQA framework over state-of-the-art VQA frameworks and the effectiveness of our major designs. Our code and dataset are publicly available at http://shuquanye.com/3DQA_website/ to facilitate research in this direction.
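
The abstract describes the architecture only at a high level: separate appearance and geometry encoders whose outputs are fused with the question through a 3D-linguistic BERT, followed by answer prediction. The following is a minimal, hypothetical PyTorch sketch of that flow; the module names, dimensions, pooling, and classification head are assumptions for illustration and do not reproduce the paper's actual 3DQA-TR implementation.

```python
# Illustrative sketch of a 3DQA-TR-style pipeline.
# Assumptions (not from the paper): per-point MLP encoders, a generic
# TransformerEncoder standing in for the 3D-Linguistic BERT, mean pooling,
# and answer prediction as classification over a fixed answer vocabulary.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Stand-in encoder mapping per-point features to token embeddings."""
    def __init__(self, in_dim, d_model=256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, d_model), nn.ReLU(),
                                  nn.Linear(d_model, d_model))

    def forward(self, x):              # x: (B, N, in_dim)
        return self.proj(x)            # (B, N, d_model)

class ThreeDQAModel(nn.Module):
    def __init__(self, vocab_size=30522, num_answers=1000, d_model=256):
        super().__init__()
        # Two encoders, one for appearance (RGB) and one for geometry (XYZ),
        # mirroring the two-encoder design named in the abstract.
        self.appearance_enc = PointEncoder(3, d_model)
        self.geometry_enc = PointEncoder(3, d_model)
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # Joint transformer lets scene tokens and question tokens attend to each other.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=4)
        self.classifier = nn.Linear(d_model, num_answers)

    def forward(self, colors, coords, question_ids):
        # colors, coords: (B, N, 3); question_ids: (B, L)
        scene_tokens = self.appearance_enc(colors) + self.geometry_enc(coords)
        text_tokens = self.word_emb(question_ids)
        tokens = torch.cat([scene_tokens, text_tokens], dim=1)
        fused = self.fusion(tokens)
        # Mean-pool fused tokens and score candidate answers.
        return self.classifier(fused.mean(dim=1))

# Toy usage on random data.
model = ThreeDQAModel()
logits = model(torch.rand(2, 1024, 3), torch.rand(2, 1024, 3),
               torch.randint(0, 30522, (2, 16)))
print(logits.shape)  # torch.Size([2, 1000])
```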