Fast Filter Pruning via Coarse-to-Fine Neural Architecture Search and Contrastive Knowledge Transfer

IEEE Trans Neural Netw Learn Syst. 2023 Jan 17:PP. doi: 10.1109/TNNLS.2023.3236336. Online ahead of print.

Abstract

Filter pruning is the most representative technique for lightweighting convolutional neural networks (CNNs). In general, filter pruning consists of pruning and fine-tuning phases, both of which still incur considerable computational cost. Thus, to increase the usability of CNNs, filter pruning itself needs to be made lightweight. To this end, we propose a coarse-to-fine neural architecture search (NAS) algorithm and a fine-tuning structure based on contrastive knowledge transfer (CKT). First, subnetwork candidates are coarsely searched with a filter importance scoring (FIS) technique, and then the best subnetwork is obtained by a fine, NAS-based search. Because the proposed pruning algorithm does not require a supernet and adopts a computationally efficient search process, it can create a pruned network with higher performance at lower cost than existing NAS-based search algorithms. Next, a memory bank is configured to store information about interim subnetworks, i.e., by-products of the above-mentioned subnetwork search phase. Finally, the fine-tuning phase delivers the information in the memory bank through a CKT algorithm. Thanks to the proposed fine-tuning algorithm, the pruned network achieves high performance and fast convergence because it can take clear guidance from the memory bank. Experiments on various datasets and models show that the proposed method offers significant speed efficiency with only modest performance loss compared with state-of-the-art (SOTA) models. For example, the proposed method pruned ResNet-50 trained on ImageNet-2012 by up to 40.01% with no accuracy loss. Moreover, since its computational cost amounts to only 210 GPU hours, the proposed method is computationally more efficient than SOTA techniques. The source code is publicly available at https://github.com/sseung0703/FFP.
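To make the coarse-search stage concrete, the sketch below illustrates one common realization of filter importance scoring: ranking each filter of a convolutional layer by its L1 norm and keeping the top fraction as a pruning candidate. The L1-norm criterion, the `keep_ratio` parameter, and both function names are illustrative assumptions, not the paper's exact FIS formulation.

```python
import numpy as np

def filter_importance_scores(weights):
    """Score each output filter by its L1 norm.

    weights: conv kernel of shape (out_channels, in_channels, kH, kW).
    NOTE: L1-norm scoring is a common proxy; the paper's exact FIS
    criterion may differ (assumption).
    """
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def coarse_prune_mask(weights, keep_ratio):
    """Return a boolean mask keeping the top `keep_ratio` fraction of
    filters by importance score (hypothetical helper for the coarse
    candidate-selection step)."""
    scores = filter_importance_scores(weights)
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    keep = np.argsort(scores)[::-1][:n_keep]  # most important first
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask

# Toy example: 4 filters, two of which carry all the weight magnitude.
w = np.zeros((4, 3, 3, 3))
w[0] = 1.0
w[1] = 0.5
mask = coarse_prune_mask(w, keep_ratio=0.5)
print(mask)  # the two non-zero filters are kept
```

In the paper's pipeline, masks like this would only seed the coarse candidate pool; the final subnetwork is then selected by the fine NAS-based search.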