A Kalman Variational Autoencoder Model Assisted by Odometric Clustering for Video Frame Prediction and Anomaly Detection

IEEE Trans Image Process. 2022 Dec 16:PP. doi: 10.1109/TIP.2022.3229620. Online ahead of print.

Abstract

The combination of different sensory information to predict upcoming situations is an innate capability of intelligent beings. Consequently, various studies in the Artificial Intelligence field are currently being conducted to transfer this ability to artificial systems. Autonomous vehicles can particularly benefit from combining the multi-modal information provided by the agent's different sensors. This paper proposes a method for video-frame prediction that leverages odometric data and can then serve as a basis for anomaly detection. A Dynamic Bayesian Network framework is adopted, combined with Deep Learning methods to learn an appropriate latent space. First, a Markov Jump Particle Filter is built over the odometric data; this odometry model comprises a set of clusters. Second, the video model is learned: a Kalman Variational Autoencoder modified to leverage the odometry clusters so that learning focuses on features related to the dynamic tasks the vehicle is performing. We call the resulting overall model the Cluster-Guided Kalman Variational Autoencoder. Evaluation is conducted on data from a car moving in a closed environment [1] and on a part of the University of Alcalá DriveSet dataset [2], in which several drivers drive both normally and drowsily along a secondary road.
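
The sketch below is a minimal illustration of the cluster-guided idea summarized above, not the authors' implementation. It assumes the odometry clusters are obtained with k-means (the paper instead derives them from a Markov Jump Particle Filter), assigns each cluster its own linear transition matrix for propagating a latent state in a Kalman-like fashion, and uses the resulting prediction error as a crude anomaly score. All names (odometry, predict_latent, A, Q) are hypothetical.

```python
# Minimal sketch of cluster-guided latent dynamics (illustrative assumptions only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Step 1: cluster the odometric data (toy [x, y, vx, vy] samples) ----------
odometry = rng.normal(size=(500, 4))
n_clusters = 3
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(odometry)

# --- Step 2: one linear dynamics model per odometry cluster -------------------
latent_dim = 8
A = [np.eye(latent_dim) + 0.01 * rng.normal(size=(latent_dim, latent_dim))
     for _ in range(n_clusters)]            # per-cluster transition matrices
Q = 0.01 * np.eye(latent_dim)               # process-noise covariance

def predict_latent(z_t, odom_t):
    """Propagate the latent state with the dynamics of the active odometry cluster."""
    k = int(kmeans.predict(odom_t.reshape(1, -1))[0])
    return A[k] @ z_t, k                    # cluster-specific one-step prediction

# --- Step 3: a large prediction error flags a potential anomaly ---------------
z = rng.normal(size=latent_dim)
for t in range(5):
    z_pred, k = predict_latent(z, odometry[t])
    z_next = z_pred + rng.multivariate_normal(np.zeros(latent_dim), Q)
    error = np.linalg.norm(z_next - z_pred)  # innovation-style anomaly score
    print(f"t={t} cluster={k} prediction error={error:.3f}")
    z = z_next
```

In the full model, the latent state and its observation mapping are learned by the Kalman Variational Autoencoder from video frames; here the encoder/decoder is omitted so the role of the odometry clusters in selecting the latent dynamics stands out.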