Using point cloud data to improve three dimensional gaze estimation

Annu Int Conf IEEE Eng Med Biol Soc. 2017 Jul:2017:795-798. doi: 10.1109/EMBC.2017.8036944.

Abstract

This paper addresses the problem of estimating gaze location in the 3D environment using a remote eye tracker. Instead of relying only on data provided by the eye tracker, we investigate how to integrate gaze direction with the point-cloud-based representation of the scene provided by a Kinect sensor. The algorithm first combines the gaze vectors for the two eyes provided by the eye tracker into a single gaze vector emanating from a point between the two eyes. The gaze target in the three-dimensional environment is then identified by finding the point in the 3D point cloud that is closest to this gaze vector. Our experimental results demonstrate that the estimate of the gaze target location provided by this method is significantly better than that obtained from gaze information alone. It also outperforms two other methods for integrating point cloud information: (1) finding the 3D point closest to the gaze location estimated by triangulating the gaze vectors from the two eyes, and (2) finding the 3D point with the smallest average distance to the two gaze vectors considered individually. The proposed method has an average error of 1.7 cm in a workspace of 25 × 23 × 24 cm located at a distance of 60 cm from the user.
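The abstract describes the core step as combining the two eye rays into a single gaze ray and then selecting the point-cloud point nearest to that ray. The sketch below illustrates one way this could be computed; it is not the authors' implementation. All function and variable names are hypothetical, NumPy arrays in a common coordinate frame are assumed, and details such as how the single gaze vector is formed (here, the normalized mean of the two unit gaze directions from the midpoint between the eyes) are assumptions.

```python
import numpy as np

def estimate_gaze_target(points, eye_left, eye_right, dir_left, dir_right):
    """Sketch: return the point-cloud point closest to a combined gaze ray.

    points:   (N, 3) array of 3D scene points (e.g., from a Kinect point cloud)
    eye_*:    (3,) positions of the left/right eye centers
    dir_*:    (3,) gaze direction vectors for the left/right eye
    """
    # Combine the two eye rays into a single ray: origin midway between the
    # eyes, direction = normalized mean of the two unit gaze directions.
    origin = 0.5 * (eye_left + eye_right)
    direction = (dir_left / np.linalg.norm(dir_left)
                 + dir_right / np.linalg.norm(dir_right))
    direction /= np.linalg.norm(direction)

    # Perpendicular distance of every cloud point to the ray.
    rel = points - origin                    # (N, 3) vectors from the ray origin
    t = rel @ direction                      # projection length along the ray
    foot = np.outer(t, direction)            # foot of the perpendicular on the ray
    dist = np.linalg.norm(rel - foot, axis=1)

    # Ignore points behind the viewer, then pick the nearest remaining point.
    dist[t <= 0] = np.inf
    idx = int(np.argmin(dist))
    return points[idx], dist[idx]
```

The same distance computation could be adapted to the two comparison methods mentioned in the abstract, e.g., by averaging the distances of each cloud point to the two individual eye rays instead of using a single combined ray.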

MeSH terms

  • Algorithms
  • Eye
  • Fixation, Ocular*