KL-DNAS: Knowledge Distillation-Based Latency Aware-Differentiable Architecture Search for Video Motion Magnification

IEEE Trans Neural Netw Learn Syst. 2024 Jan 8:PP. doi: 10.1109/TNNLS.2023.3346169. Online ahead of print.

Abstract

Video motion magnification is the task of making subtle motions visible. Such motions often occur while remaining invisible to the naked eye, e.g., slight deformations in an athlete's muscles, small vibrations in objects, microexpressions, and chest movement during breathing. Magnifying these small motions has enabled various applications, such as posture-deformity detection, microexpression recognition, and the study of structural properties. State-of-the-art (SOTA) methods have fixed computational complexity, which makes them less suitable for applications with differing time constraints, e.g., real-time respiratory rate measurement and microexpression classification. To solve this problem, we propose a knowledge distillation-based latency-aware differentiable architecture search (KL-DNAS) method for video motion magnification. To reduce memory requirements and improve denoising characteristics, we use a teacher network to search the network by parts using knowledge distillation (KD). Furthermore, the search is performed over different receptive fields and multifeature connections for individual layers. Also, a novel latency loss is proposed to jointly optimize the target latency constraint and output quality. Our method finds a model 2.8× smaller than the SOTA method while producing better motion magnification with less distortion. https://github.com/jasdeep-singh-007/KL-DNAS.
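A minimal sketch of the core idea, assuming a PyTorch-style implementation (the names MixedOp and kl_dnas_loss, the candidate kernel sizes, the loss weights, and the per-operation latency values are all illustrative and not taken from the paper or its released code): a DARTS-style mixed operation searches over convolutions with different receptive fields, and a joint objective combines reconstruction quality, distillation from the teacher's magnified output, and a penalty on the expected latency relative to a target budget.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp(nn.Module):
    """DARTS-style mixed operation: a softmax over candidate convolutions
    with different receptive fields (kernel sizes), weighted by learnable
    architecture parameters alpha. Per-op latencies are placeholders that
    would normally be measured offline on the target device."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.ops = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.alpha = nn.Parameter(torch.zeros(len(kernel_sizes)))
        # Rough latency proxy per candidate op (illustrative values only).
        self.register_buffer(
            "op_latency", torch.tensor([float(k * k) for k in kernel_sizes])
        )

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def expected_latency(self):
        # Expected latency under the current architecture distribution.
        w = F.softmax(self.alpha, dim=0)
        return (w * self.op_latency).sum()


def kl_dnas_loss(student_out, teacher_out, target, expected_latency,
                 target_latency, kd_weight=0.5, latency_weight=0.1):
    """Joint objective (illustrative): reconstruction of the magnified frame,
    distillation toward the teacher's output, and a penalty that activates
    only when the expected latency exceeds the target budget."""
    recon = F.l1_loss(student_out, target)
    distill = F.l1_loss(student_out, teacher_out)
    latency_penalty = F.relu(expected_latency - target_latency)
    return recon + kd_weight * distill + latency_weight * latency_penalty
```

In a search loop under these assumptions, the architecture parameters alpha and the convolution weights would be updated against this loss, so that candidate operations which exceed the latency budget are gradually down-weighted while output quality is preserved through the teacher's supervision.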