Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation

Front Radiol. 2023 Jun 22;3:1088068. doi: 10.3389/fradi.2023.1088068. eCollection 2023.

Abstract

Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve a CNN's interpretability by helping it localize abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting such annotations is very costly. Conveniently, eye-tracking (ET) data can be collected during a radiologist's routine clinical workflow. We use ET data recorded while radiologists dictate CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords, and we use these snippets to supervise the localization of specific abnormalities. We show that this method can improve a model's interpretability without degrading its image-level classification performance.
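To make the described supervision concrete, the following is a minimal sketch, assuming a PyTorch setup, of one way keyword-aligned gaze snippets could supervise localization alongside image-level classification: the CNN's per-class spatial activation maps are pushed toward gaze-derived heatmaps, but only for classes whose keyword was dictated. All names (GazeSupervisedCNN, gaze_supervision_loss, the 0.5 weight) and the architecture are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch: gaze-supervised localization plus image-level
# classification. Not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazeSupervisedCNN(nn.Module):
    """Tiny CNN that outputs per-class logits and per-class spatial maps."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # A 1x1 conv yields one spatial activation map per abnormality class.
        self.class_maps = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        maps = self.class_maps(self.features(x))   # (B, C, H, W)
        logits = maps.mean(dim=(2, 3))              # global average pooling
        return logits, maps

def gaze_supervision_loss(maps, gaze_heatmaps, present_mask):
    """Match predicted maps to gaze heatmaps, only for classes whose
    keyword was actually dictated (present_mask == 1)."""
    pred = torch.sigmoid(maps)
    per_pixel = F.binary_cross_entropy(pred, gaze_heatmaps, reduction="none")
    per_class = per_pixel.mean(dim=(2, 3))          # (B, C)
    denom = present_mask.sum().clamp(min=1.0)
    return (per_class * present_mask).sum() / denom

# One hypothetical training step with stand-in data.
model = GazeSupervisedCNN(n_classes=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(2, 1, 64, 64)                  # stand-in CXR batch
labels = torch.randint(0, 2, (2, 5)).float()        # image-level labels
gaze = torch.rand(2, 5, 64, 64)                     # per-class gaze heatmaps
present = labels.clone()                            # supervise dictated classes only

logits, maps = model(images)
cls_loss = F.binary_cross_entropy_with_logits(logits, labels)
loc_loss = gaze_supervision_loss(maps, gaze, present)
loss = cls_loss + 0.5 * loc_loss                    # loc weight is an assumption
opt.zero_grad(); loss.backward(); opt.step()
```

Masking the localization term by the dictated classes reflects the abstract's key idea: a gaze snippet is label-specific evidence, so it should only constrain the activation map of the abnormality being dictated, leaving the classification loss to handle the image-level labels.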

Keywords: annotation; chest x-ray (CXR); eye tracking; gaze; interpretability; localization.

Grants and funding

This research was funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under Award Number R21EB028367.