Mutually Guided Image Filtering

IEEE Trans Pattern Anal Mach Intell. 2020 Mar;42(3):694-707. doi: 10.1109/TPAMI.2018.2883553. Epub 2018 Nov 28.

Abstract

Image filtering is required by numerous multimedia, computer vision, and graphics tasks. Despite the diverse goals of these tasks, designing effective filtering rules is key to performance. Linear translation-invariant filters with manually designed kernels have been widely used. However, their performance suffers from content-blindness. To mitigate this content-blindness, a family of filters, called joint/guided filters, has attracted a great amount of attention from the community. The main drawback of most joint/guided filters comes from ignoring the structural inconsistency between the reference and target signals, such as color, infrared, and depth images captured under different conditions. Simply adopting such guidance very likely leads to unsatisfactory results. To address the above issues, this paper designs a simple yet effective filter, named the mutually guided image filter (muGIF), which jointly preserves mutual structures, avoids being misled by inconsistent structures, and smooths flat regions. The proposed muGIF is highly flexible and can operate in various modes, including dynamic only (self-guided), static/dynamic (reference-guided), and dynamic/dynamic (mutually guided). Although the objective of muGIF is non-convex in nature, it can be solved effectively and efficiently by subtly decomposing the objective. The advantages of muGIF in effectiveness and flexibility over other state-of-the-art alternatives are demonstrated on a variety of applications. Our code is publicly available at https://sites.google.com/view/xjguo/mugif.
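For intuition about the three operating modes mentioned above, the following minimal Python sketch wires a single edge-aware smoothing step into self-guided, reference-guided, and mutually guided loops. The names (`guided_smooth_step`, `mugif_sketch`), the particular smoothing step, and all parameters are illustrative assumptions; this is not the paper's objective or solver, which instead decomposes a non-convex joint objective.

```python
import numpy as np

def guided_smooth_step(img, guide, sigma=0.1):
    """One edge-aware smoothing step: average each pixel with its 4-neighbours,
    down-weighting contributions that cross edges of the guide.
    Simple stand-in only; NOT the paper's actual update rule."""
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for axis in (0, 1):
        for shift in (1, -1):
            neigh = np.roll(img, shift, axis=axis)
            g_neigh = np.roll(guide, shift, axis=axis)
            w = np.exp(-((guide - g_neigh) ** 2) / (2.0 * sigma ** 2))
            num += w * neigh
            den += w
    return (img + num) / (1.0 + den)

def mugif_sketch(target, reference=None, mode="mutual", iterations=10):
    """Illustrative wrapper for the three guidance modes described in the abstract.

    mode = "self":      dynamic only    -- the target guides itself.
    mode = "reference": static/dynamic  -- a fixed reference guides the target.
    mode = "mutual":    dynamic/dynamic -- target and reference refine each other.
    """
    t = target.astype(np.float64)
    if reference is None or mode == "self":
        r = t.copy()
    else:
        r = reference.astype(np.float64)

    for _ in range(iterations):
        t = guided_smooth_step(t, guide=r)      # update target under current guidance
        if mode == "mutual":
            r = guided_smooth_step(r, guide=t)  # dynamic reference is refined in turn
        elif mode == "self":
            r = t                               # the smoothed target guides itself next round
        # reference-guided mode: r stays fixed (static guidance)
    return t
```

As a hypothetical usage example, `mugif_sketch(noisy_depth, gray_rgb, mode="reference")` would smooth a depth map under fixed color guidance, whereas `mode="mutual"` would let the two inputs refine each other while preserving their shared structures.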