Using Highlighting to Train Attentional Expertise

PLoS One. 2016 Jan 8;11(1):e0146266. doi: 10.1371/journal.pone.0146266. eCollection 2016.

Abstract

Acquiring expertise in complex visual tasks is time consuming. To facilitate the efficient training of novices on where to look in these tasks, we propose an attentional highlighting paradigm. Highlighting involves dynamically modulating the saliency of a visual image to guide attention along the fixation path of a domain expert who had previously viewed the same image. In Experiment 1, we trained naive subjects via attentional highlighting on a fingerprint-matching task. Before and after training, we asked subjects to freely inspect images containing pairs of prints and determine whether the prints matched. Fixation sequences were automatically scored for the degree of expertise exhibited using a Bayesian discriminative model of novice and expert gaze behavior. Highlighting training caused gaze behavior to become more expert-like not only on the trained images but also on transfer images, indicating generalization of learning. In Experiment 2, to control for the possibility that the increase in expertise was due to mere exposure, we trained subjects via highlighting of fixation sequences from novices, not experts, and observed no transition toward expertise. In Experiment 3, to determine the specificity of the training effect, we trained subjects with expert fixation sequences drawn from images other than the one being viewed, which preserves the coarse-scale statistics of expert gaze but provides no information about fine-grained features. Because we observed at least a partial transition toward expertise even with these mismatched sequences, we obtained only weak evidence that the highlighting procedure facilitates the learning of critical local features. We discuss possible improvements to the highlighting procedure.
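The abstract does not specify the form of the Bayesian discriminative model used to score fixation sequences, but the general idea of classifying a gaze sequence as expert-like versus novice-like can be illustrated with a minimal sketch. The Python below is a hypothetical version assuming independent fixations drawn from smoothed spatial histograms fit separately to expert and novice gaze; the grid resolution, smoothing constant, uniform prior, and all names (fit_fixation_model, expertise_score) are illustrative assumptions, not the authors' actual model.

import numpy as np

# Sketch: score a fixation sequence by the posterior probability that it was
# generated by an "expert" spatial model rather than a "novice" one.
# All modeling choices here are assumptions for illustration only.

GRID = 16           # fixation positions binned into a GRID x GRID histogram
ALPHA = 1.0         # Laplace smoothing pseudo-count
PRIOR_EXPERT = 0.5  # uniform prior over {expert, novice}

def fit_fixation_model(fixations, image_size):
    """Estimate a smoothed spatial distribution from (x, y) fixations."""
    h, w = image_size
    counts = np.full((GRID, GRID), ALPHA)
    for x, y in fixations:
        i = int(np.clip(y / h * GRID, 0, GRID - 1))
        j = int(np.clip(x / w * GRID, 0, GRID - 1))
        counts[i, j] += 1
    return counts / counts.sum()

def log_likelihood(model, fixations, image_size):
    """Log-probability of a fixation sequence under a spatial model."""
    h, w = image_size
    ll = 0.0
    for x, y in fixations:
        i = int(np.clip(y / h * GRID, 0, GRID - 1))
        j = int(np.clip(x / w * GRID, 0, GRID - 1))
        ll += np.log(model[i, j])
    return ll

def expertise_score(expert_model, novice_model, fixations, image_size):
    """Posterior probability that a fixation sequence came from an expert."""
    ll_e = log_likelihood(expert_model, fixations, image_size) + np.log(PRIOR_EXPERT)
    ll_n = log_likelihood(novice_model, fixations, image_size) + np.log(1 - PRIOR_EXPERT)
    m = max(ll_e, ll_n)  # log-sum-exp for numerical stability
    return np.exp(ll_e - m) / (np.exp(ll_e - m) + np.exp(ll_n - m))

# Illustrative usage with synthetic fixations (no real gaze data is implied):
if __name__ == "__main__":
    size = (600, 800)  # image height, width in pixels
    rng = np.random.default_rng(0)
    expert_fix = rng.normal([400, 300], 40, size=(200, 2))       # concentrated gaze
    novice_fix = rng.uniform([0, 0], [800, 600], size=(200, 2))  # diffuse gaze
    expert_model = fit_fixation_model(expert_fix, size)
    novice_model = fit_fixation_model(novice_fix, size)
    test_seq = rng.normal([400, 300], 40, size=(30, 2))
    print(f"P(expert | sequence) = {expertise_score(expert_model, novice_model, test_seq, size):.3f}")

A classifier of this kind yields a graded expertise score per trial, which is how a shift toward expert-like gaze before versus after highlighting training could be quantified; the paper's own model may differ in its features and structure.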

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Association Learning / physiology*
  • Attention / physiology*
  • Bayes Theorem
  • Dermatoglyphics
  • Discrimination Learning / physiology*
  • Humans
  • Pattern Recognition, Visual / physiology*
  • Saccades / physiology*
  • Task Performance and Analysis

Grants and funding

This work was supported by the National Science Foundation, Directorate of Social, Behavioral and Economic Sciences (SBE-0542013 to MCM); National Science Foundation, Office of Multidisciplinary Activities (SMA-1041755 to MCM); National Science Foundation, Division of Social and Economic Sciences (SES-1461535 to MCM); and the National Institute of Justice (Grant #2009-DN-BX-K226 to TAB). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.