Articulated Motion-Aware NeRF for 3D Dynamic Appearance and Geometry Reconstruction by Implicit Motion States

IEEE Trans Vis Comput Graph. 2024 May 14:PP. doi: 10.1109/TVCG.2024.3400830. Online ahead of print.

Abstract

We propose a self-supervised approach for 3D dynamic reconstruction of articulated motions based on Generative Adversarial Networks and Neural Radiance Fields. Our method reconstructs articulated objects and recovers their continuous motions and attributes from an unordered, discontinuous image set. Notably, we treat motion states as time-independent, recognizing that articulated objects can exhibit identical motions at different times. The key insight of our approach is to use generative adversarial networks to create a continuous implicit motion state space. Initially, we employ a motion network to extract discrete motion states from images as anchors. These anchors are then expanded across the latent space by the generative adversarial networks. Subsequently, motion state latent codes are fed into motion-aware neural radiance fields for dynamic appearance and geometry reconstruction. To deduce motion attributes from the continuously generated motions, we adopt a cluster-based strategy. We thoroughly evaluate our method on both synthetic and real data, demonstrating superior fidelity in the appearances, geometries, and motion attributes of articulated objects compared to state-of-the-art methods.
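The abstract does not specify an implementation, but the pipeline it outlines can be sketched concretely. The following is a minimal PyTorch sketch under stated assumptions: the module names (MotionNetwork, MotionStateGenerator, MotionAwareNeRF) and all layer sizes are hypothetical choices for illustration, not the authors' architecture. It shows the three stages the abstract names: an encoder extracting discrete motion-state anchors from images, a GAN generator producing continuous motion-state latent codes, and a NeRF whose color and density are conditioned on that code.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the described pipeline; names and sizes are assumptions.

class MotionNetwork(nn.Module):
    """Stage 1: encode an image into a discrete motion-state anchor code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, image):
        return self.encoder(image)  # anchor motion-state code

class MotionStateGenerator(nn.Module):
    """Stage 2: GAN generator mapping noise to continuous motion-state codes,
    filling the latent space between the discrete anchors."""
    def __init__(self, noise_dim=16, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, z):
        return self.net(z)  # continuous motion-state latent code

class MotionAwareNeRF(nn.Module):
    """Stage 3: radiance field conditioned on a motion-state latent code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 4),  # RGB color + volume density
        )

    def forward(self, xyz, motion_code):
        code = motion_code.expand(xyz.shape[0], -1)  # broadcast code to all points
        return self.mlp(torch.cat([xyz, code], dim=-1))

# Toy usage: extract an anchor from an image, then render a generated
# (novel) motion state by querying the NeRF at sampled 3D points.
motion_net = MotionNetwork()
anchor = motion_net(torch.rand(1, 3, 64, 64))        # discrete anchor code

generator = MotionStateGenerator()
motion_code = generator(torch.randn(1, 16))          # continuous motion code

nerf = MotionAwareNeRF()
points = torch.rand(1024, 3)                         # points sampled along rays
rgb_sigma = nerf(points, motion_code)                # (1024, 4): color + density
```

In the actual method, the generator would be trained adversarially against a discriminator so that generated codes remain consistent with the image-derived anchors, and the cluster-based attribute deduction would operate on the resulting continuous motions; the sketch above omits training entirely.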