Data2MV - A user behaviour dataset for multi-view scenarios

Data Brief. 2023 Oct 20:51:109702. doi: 10.1016/j.dib.2023.109702. eCollection 2023 Dec.

Abstract

The Data2MV dataset contains gaze fixation data obtained through experimental procedures from a total of 45 participants, using an Intel RealSense F200 camera module and seven different video playlists. Each playlist had an approximate duration of 20 minutes and was viewed at least 17 times, with raw tracking data recorded at 0.05-second intervals. The Data2MV dataset encompasses a total of 1,000,845 gaze fixations gathered across 128 experiments. It also comprises 68,393 image frames, extracted from each of the 6 videos selected for these experiments, and an equal number of saliency maps generated from aggregate fixation data. Software tools to obtain saliency maps and generate complementary plots are also provided as an open-source software package. The Data2MV dataset was publicly released to the research community on Mendeley Data and constitutes an important contribution toward reducing the current scarcity of such data, particularly in immersive, multi-view streaming scenarios.
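As a rough illustration of how saliency maps are commonly derived from aggregate fixation data, the sketch below accumulates a 2D Gaussian at each fixation coordinate and normalises the result. This is a generic technique, not the dataset's actual tooling; the fixation coordinates, frame size, and `sigma` value are hypothetical placeholders.

```python
import numpy as np

def saliency_map(fixations, width, height, sigma=20.0):
    """Accumulate a 2D Gaussian centred on each (x, y) fixation.

    fixations: iterable of (x, y) pixel coordinates (hypothetical format,
    not necessarily how Data2MV stores its raw tracking data).
    """
    ys, xs = np.mgrid[0:height, 0:width]
    sal = np.zeros((height, width), dtype=float)
    for fx, fy in fixations:
        sal += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2 * sigma ** 2))
    if sal.max() > 0:
        sal /= sal.max()  # normalise to [0, 1] for visualisation
    return sal

# Hypothetical fixation points in pixel coordinates.
fix = [(100, 60), (105, 62), (300, 200)]
m = saliency_map(fix, width=320, height=240)
```

In practice such maps are often aggregated per video frame across all viewers, which matches the one-saliency-map-per-frame structure described above.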

Keywords: Adaptive streaming; Deep learning; Head-tracking; Multi-view; Multimedia; View prediction.