MYFix: Automated Fixation Annotation of Eye-Tracking Videos

Sensors (Basel). 2024 Apr 23;24(9):2666. doi: 10.3390/s24092666.

Abstract

In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes, where both the observer and the objects in the scene are in constant motion. This paper presents a novel approach that integrates two foundation models, YOLOv8 and Mask2Former, into a pipeline that automatically annotates fixation points without requiring additional training or fine-tuning. The pipeline leverages YOLO's extensive training on the MS COCO dataset for object detection and Mask2Former's training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, yielding reliable annotations even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments demonstrates its efficiency, achieving 89.05% accuracy in a controlled data-collection experiment and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime of 1.61 ± 0.35 s per frame, our approach stands as a robust solution for automatic fixation annotation.
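The core labeling step described above (combining per-frame object detections with a dense semantic segmentation map) can be sketched as follows. This is a hypothetical illustration, not the authors' published code: the function name `annotate_fixation`, the tuple layout of the detections, and the tie-breaking rule (prefer the smallest box containing the fixation, then fall back to the segmentation class at that pixel) are assumptions made for the sketch.

```python
# Hypothetical sketch of fixation labeling from precomputed model outputs.
# `detections` would come from a YOLOv8 pass over the frame (MS COCO classes);
# `seg_map` from Mask2Former trained on Cityscapes. Neither model is run here.

def annotate_fixation(x, y, detections, seg_map):
    """Return a semantic label for the fixation point (x, y).

    detections: list of (label, x1, y1, x2, y2, confidence) boxes.
    seg_map:    2D grid of class names, indexed as seg_map[row][col].

    Assumed rule: object detections take priority; among overlapping
    boxes the smallest (likely nearest/most specific) wins; otherwise
    fall back to the segmentation class at the fixated pixel.
    """
    hits = [d for d in detections
            if d[1] <= x <= d[3] and d[2] <= y <= d[4]]
    if hits:
        # Prefer the smallest box containing the fixation point.
        best = min(hits, key=lambda d: (d[3] - d[1]) * (d[4] - d[2]))
        return best[0]
    return seg_map[int(y)][int(x)]
```

For example, a fixation landing inside both a "car" box and a smaller "person" box would be labeled "person", while a fixation outside all boxes would receive the Cityscapes class (e.g. "building") from the segmentation map.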

Keywords: automatic fixation annotation; object detection; outdoor mobile eye-tracking; semantic segmentation.

MeSH terms

  • Algorithms
  • Eye Movements / physiology
  • Eye-Tracking Technology*
  • Fixation, Ocular* / physiology
  • Humans
  • Video Recording / methods

Grants and funding

This research received no external funding.