FASSVid: Fast and Accurate Semantic Segmentation for Video Sequences

Entropy (Basel). 2022 Jul 7;24(7):942. doi: 10.3390/e24070942.

Abstract

Most real-time semantic segmentation methods do not take temporal information into account when working with video sequences. This is counter-intuitive for real-world scenarios, where the main application of such methods is precisely to process frame sequences as quickly and accurately as possible. In this paper, we address this problem by exploiting the temporal information provided by previous frames of the video stream. Our method leverages a previous input frame, as well as the previous output of the network, to enhance the prediction accuracy of the current input frame. We develop a module that obtains feature maps rich in change information. Additionally, we incorporate the previous output of the network into all the decoder stages as a way of increasing the attention given to relevant features. Finally, to properly train and evaluate our methods, we introduce CityscapesVid, a dataset specifically designed to benchmark semantic video segmentation networks. Our proposed network, entitled FASSVid, improves mIoU accuracy over a standard non-sequential baseline model. Moreover, FASSVid obtains state-of-the-art inference speed and competitive mIoU results compared to other state-of-the-art lightweight networks, with a significantly lower number of computations. Specifically, we obtain 71% mIoU on our CityscapesVid dataset, running at 114.9 FPS on a single NVIDIA GTX 1080Ti and 31 FPS on the NVIDIA Jetson Nano embedded board, with images of size 1024×2048 and 512×1024, respectively.
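The abstract describes two temporal mechanisms: a module that derives change-rich feature maps from the current and previous frames, and decoder stages that reuse the previous network output as an attention signal. The following PyTorch sketch illustrates one plausible reading of these ideas under stated assumptions; the module names, layer choices, and fusion scheme are hypothetical and do not reproduce the authors' actual FASSVid implementation.

```python
# Illustrative sketch only: (1) a change-information module fed with features
# from the current and previous frames, and (2) a decoder stage that weights
# its features with the previous segmentation output. All names and layer
# sizes are assumptions, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalChangeModule(nn.Module):
    """Builds feature maps that emphasize inter-frame changes."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_t: torch.Tensor, feat_tm1: torch.Tensor) -> torch.Tensor:
        # Concatenate current features with the frame-to-frame difference so
        # the fused maps carry explicit change information.
        diff = feat_t - feat_tm1
        return self.fuse(torch.cat([feat_t, diff], dim=1))


class OutputGuidedDecoderStage(nn.Module):
    """Decoder stage attending to features via the previous prediction."""

    def __init__(self, in_ch: int, num_classes: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(num_classes, in_ch, kernel_size=1),
            nn.Sigmoid(),
        )
        self.conv = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, prev_logits: torch.Tensor) -> torch.Tensor:
        # Resize the previous output to this stage's resolution and use it
        # as an attention map over the decoder features.
        prev = F.interpolate(
            prev_logits, size=feat.shape[-2:], mode="bilinear", align_corners=False
        )
        return self.conv(feat * self.attn(prev))
```

In this reading, the change module injects motion cues into the encoder features, while each decoder stage is modulated by the prediction from the previous frame, which is consistent with the abstract's description of reusing both the previous input frame and the previous network output.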

Keywords: embedded systems; real-time processing; semantic segmentation; semantic video segmentation.

Grants and funding

The authors thank the National Science and Technology Council of Mexico (CONACyT) and the Instituto Politécnico Nacional for the financial support for this research. This research work was also supported by the HEROES project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101021801.