NeRF-In: Free-Form Inpainting for Pretrained NeRF With RGB-D Priors

IEEE Comput Graph Appl. 2024 Mar-Apr;44(2):100-109. doi: 10.1109/MCG.2023.3336224. Epub 2024 Mar 25.

Abstract

Neural radiance field (NeRF) has emerged as a versatile scene representation. However, it is still unintuitive to edit a pretrained NeRF because the network parameters and the scene appearance are often not explicitly associated. In this article, we introduce the first framework that enables users to retouch undesired regions in a pretrained NeRF scene without accessing the original training data or any category-specific data prior. The user first draws a free-form mask to specify a region containing the unwanted objects over an arbitrary rendered view from the pretrained NeRF. Our framework transfers the user-drawn mask to other rendered views and estimates guiding color and depth images within the transferred masked regions. Next, we formulate an optimization problem that jointly inpaints the image content in all masked regions by updating the NeRF's parameters. We demonstrate our framework on diverse scenes and show that it obtains visually plausible and structurally consistent results with less manual user effort.
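The joint optimization described above can be read as minimizing, over all rendered views, a color-plus-depth discrepancy restricted to the transferred masks. The following is a minimal sketch of such a masked RGB-D guided loss, not the authors' implementation; the names (`guide_rgb`, `guide_depth`, `lambda_d`) and the simple L2 form are illustrative assumptions.

```python
import numpy as np

def masked_rgbd_loss(render_rgb, render_depth, guide_rgb, guide_depth,
                     mask, lambda_d=0.1):
    """L2 color + depth loss restricted to the user-derived mask.

    render_rgb, render_depth: images rendered from the NeRF for one view
    guide_rgb, guide_depth:   inpainted guiding color/depth images for that view
    mask:                     boolean array, True inside the region to retouch
    lambda_d:                 depth-term weight (an assumed hyperparameter)
    """
    m = mask.astype(np.float32)
    n = m.sum() + 1e-8  # avoid division by zero for empty masks
    # Penalize disagreement with the guiding images only inside the mask.
    color_term = ((render_rgb - guide_rgb) ** 2 * m[..., None]).sum() / n
    depth_term = ((render_depth - guide_depth) ** 2 * m).sum() / n
    return color_term + lambda_d * depth_term
```

In a full pipeline, this loss would be accumulated over every view with a transferred mask and backpropagated into the NeRF parameters with an autograd framework, so that all masked regions are inpainted jointly rather than view by view.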