Person-Specific Gaze Estimation from Low-Quality Webcam Images

Sensors (Basel). 2023 Apr 20;23(8):4138. doi: 10.3390/s23084138.

Abstract

Gaze estimation is an established research problem in computer vision. It has various real-life applications, from human-computer interaction to health care and virtual reality, which makes it an attractive topic for the research community. Owing to the significant success of deep learning techniques in other computer vision tasks, such as image classification, object detection, object segmentation, and object tracking, deep learning-based gaze estimation has also received increasing attention in recent years. This paper uses a convolutional neural network (CNN) for person-specific gaze estimation. Person-specific gaze estimation uses a single model trained for one individual user, in contrast to the commonly used generalized models trained on data from multiple people. We used only low-quality images captured directly by a standard desktop webcam, so our method can be applied to any computer equipped with such a camera, with no additional hardware requirements. First, we used the webcam to collect a dataset of face and eye images. Then, we tested different combinations of CNN hyperparameters, including the learning rate and dropout rate. Our findings show that, with a good choice of hyperparameters, a person-specific eye-tracking model outperforms universal models trained on multiple users' data. In particular, we achieved the best results with a mean absolute error (MAE) of 38.20 pixels for the left eye, 36.01 pixels for the right eye, 51.18 pixels for both eyes combined, and 30.09 pixels for the whole face, equivalent to approximately 1.45, 1.37, 1.98, and 1.14 degrees, respectively.
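
As a concrete illustration of the setup the abstract describes, the sketch below builds a small CNN that regresses a 2-D on-screen gaze point from an eye or face crop and sweeps the two hyperparameters the paper mentions (learning rate and dropout rate), training with an MAE loss in pixels. This is a minimal sketch, not the authors' published code: the architecture, input resolution, hyperparameter grid, and the pixel pitch and viewing distance used in pixels_to_degrees are illustrative assumptions.

    import math
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_gaze_cnn(input_shape=(64, 64, 3), dropout_rate=0.3):
        # CNN that regresses a 2-D on-screen gaze point (x, y) in pixels.
        # Layer widths and input size are assumptions, not the paper's values.
        return models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu"),
            layers.Flatten(),
            layers.Dense(256, activation="relu"),
            layers.Dropout(dropout_rate),   # tuned hyperparameter
            layers.Dense(2),                # (x, y) screen coordinates
        ])

    def pixels_to_degrees(mae_px, pixel_pitch_mm=0.25, viewing_distance_mm=600.0):
        # Convert a pixel-space MAE to visual angle. The pixel pitch and
        # viewing distance are placeholder assumptions, not from the paper.
        return math.degrees(math.atan(mae_px * pixel_pitch_mm / viewing_distance_mm))

    # Small grid over the two hyperparameters the abstract mentions.
    for lr in (1e-3, 1e-4):
        for dropout in (0.2, 0.3, 0.5):
            model = build_gaze_cnn(dropout_rate=dropout)
            model.compile(
                optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                loss="mae",  # MAE in pixels, as reported in the paper
            )
            # model.fit(train_images, train_gaze_points, validation_split=0.1, ...)

In the person-specific setting, one such model would be trained per user on that user's own webcam images; the resulting pixel errors can be mapped to visual angle once the actual screen geometry and viewing distance are known.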

Keywords: computer vision; convolutional neural network; deep learning; gaze estimation.

MeSH terms

  • Computer Systems
  • Eye
  • Humans
  • Neural Networks, Computer*
  • Virtual Reality*

Grants and funding

This work was supported by statutory research funds from the Department of Applied Informatics, Silesian University of Technology, Gliwice, Poland. Peter Peer is partially supported by the Slovenian Research Agency ARRS through the Research Programme P2–0214 (A) “Computer Vision”.