ECBC: Efficient Convolution via Blocked Columnizing

IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):433-445. doi: 10.1109/TNNLS.2021.3095276. Epub 2023 Jan 5.

Abstract

Direct convolution methods are drawing increasing attention because they eliminate the additional storage demanded by indirect convolution algorithms (e.g., the transformed matrix generated by the im2col convolution algorithm). Nevertheless, direct methods require special input/output tensor layouts, incurring extra time and memory to reformat the data. In this article, we show that indirect convolution, if implemented properly, can achieve high computational performance by leveraging highly optimized matrix multiplication subroutines, while avoiding substantial memory overhead. The proposed algorithm is called efficient convolution via blocked columnizing (ECBC). Inspired by the im2col convolution algorithm and the block algorithm of general matrix-matrix multiplication (GEMM), we propose to carry out the convolution computation block by block. As a result, the tensor-to-matrix transformation (e.g., the im2col operation) can also be performed blockwise, so that it requires only a small working buffer, no larger than a single data block. Extensive experiments on various platforms and networks validate the effectiveness of ECBC and its superiority over a set of widely used industrial-grade convolution algorithms.
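
To make the blockwise idea concrete, below is a minimal NumPy sketch of a blocked im2col convolution. It is not the authors' implementation: the function name conv2d_blocked_im2col and the block parameter are illustrative, and stride 1 with no padding is assumed for brevity. Instead of materializing the full im2col matrix of shape (C*KH*KW, OH*OW), it columnizes one block of output positions at a time into a small reusable buffer and runs a small GEMM per block.

    import numpy as np

    def conv2d_blocked_im2col(x, w, block=64):
        """Sketch of blockwise im2col + GEMM convolution.
        x: input tensor of shape (C, H, W)
        w: filters of shape (M, C, KH, KW)
        Assumes stride 1 and no padding."""
        C, H, W = x.shape
        M, _, KH, KW = w.shape
        OH, OW = H - KH + 1, W - KW + 1
        wmat = w.reshape(M, C * KH * KW)       # filters as one GEMM operand
        out = np.empty((M, OH * OW), dtype=x.dtype)
        # Blockwise im2col buffer: its footprint is fixed by the block size,
        # not by the full output size OH * OW.
        buf = np.empty((C * KH * KW, block), dtype=x.dtype)
        for start in range(0, OH * OW, block):
            stop = min(start + block, OH * OW)
            nb = stop - start
            for j, pos in enumerate(range(start, stop)):
                oh, ow = divmod(pos, OW)
                # Columnize one receptive field into the buffer.
                buf[:, j] = x[:, oh:oh + KH, ow:ow + KW].reshape(-1)
            # Small GEMM over the current block of output positions.
            out[:, start:stop] = wmat @ buf[:, :nb]
        return out.reshape(M, OH, OW)

Note that the buffer's size depends only on C*KH*KW and the chosen block size, independent of the output spatial size, which illustrates the memory saving the abstract describes; an optimized implementation would additionally tile the GEMM itself and fuse the columnizing step into its block loop.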