D3R-Net: Dynamic Routing Residue Recurrent Network for Video Rain Removal

IEEE Trans Image Process. 2019 Feb;28(2):699-712. doi: 10.1109/TIP.2018.2869722. Epub 2018 Sep 12.

Abstract

In this paper, we address the problem of video rain removal by considering rain occlusion regions, i.e., regions with very low light transmittance due to rain streaks. Unlike additive rain streaks, in such occlusion regions the background details are completely lost. We therefore propose a hybrid rain model that depicts both rain streaks and occlusions. Integrating the hybrid model with useful motion-segmentation context information, we present a Dynamic Routing Residue Recurrent Network (D3R-Net). D3R-Net first extracts spatial features with a residual network. The spatial features are then aggregated by recurrent units along the temporal axis. During this temporal fusion, context information is embedded into the network in a "dynamic routing" manner: a bank of recurrent units handles the temporal fusion for given contexts, e.g., rain or non-rain regions, and in a given forward or backward pass mainly one of these units is activated. A context selection gate then detects the context and selects one of the temporally fused features produced by these recurrent units as the final fused feature. Finally, this feature serves as a "residual feature": it is combined with the spatial feature and used to reconstruct the negative rain streaks. In D3R-Net, we incorporate two context variables: a motion segmentation map, denoting whether a pixel belongs to fast-moving edges, and a rain type indicator, denoting whether a pixel belongs to rain streaks, rain occlusions, or non-rain regions. Extensive experiments on a series of synthetic and real rain videos verify not only the superiority of the proposed method over the state of the art but also the effectiveness of our network design and each of its components.
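The "dynamic routing" idea above, several context-specific recurrent units running in parallel with a per-pixel gate picking one fused feature, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the toy elementwise recurrent cell, the tensor shapes, the hard (argmax-style) gate, and the three context labels are all placeholders for the learned modules in D3R-Net.

```python
import numpy as np

rng = np.random.default_rng(0)

def recurrent_step(h, x, w):
    """Toy elementwise recurrent update h' = tanh(w_x * x + w_h * h).
    A stand-in for the paper's (learned) recurrent units."""
    return np.tanh(w[0] * x + w[1] * h)

H, W, C = 4, 4, 3   # frame height, width, feature channels (assumed)
K = 3               # one recurrent unit per context type (assumed)
T = 5               # number of frames fused along the temporal axis

# One weight pair per context-specific recurrent unit, plus its hidden state.
weights = [rng.standard_normal(2) * 0.5 for _ in range(K)]
hidden = [np.zeros((H, W, C)) for _ in range(K)]

frames = rng.standard_normal((T, H, W, C))  # spatial features per frame
# Context map from the rain type indicator (assumed labels):
# 0 = non-rain, 1 = rain streak, 2 = rain occlusion.
context = rng.integers(0, K, size=(H, W))

for t in range(T):
    # Every unit processes the frame; routing happens at the gate.
    for k in range(K):
        hidden[k] = recurrent_step(hidden[k], frames[t], weights[k])

# Context selection gate: per pixel, keep the fused feature of the unit
# that matches that pixel's context (a hard selection for illustration;
# the network's gate is learned and detects the context itself).
fused = np.empty((H, W, C))
for k in range(K):
    mask = context == k
    fused[mask] = hidden[k][mask]

# "fused" is the residual feature that would be combined with the
# spatial feature to reconstruct the negative rain streaks.
assert fused.shape == (H, W, C)
```

The gate here is deliberately hard and oracle-driven by the context map; in the actual network the selection is produced by a learned context selection gate, and the backward pass trains mainly the unit that was routed to.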