Spatial and temporal dynamics of attentional guidance during inefficient visual search

PLoS One. 2008 May 21;3(5):e2219. doi: 10.1371/journal.pone.0002219.

Abstract

Spotting prey or a predator is crucial in the natural environment and relies on the ability to quickly extract pertinent visual information. The experimental counterpart of this behavior is visual search (VS), in which subjects have to identify a target amongst several distractors. In difficult VS tasks, reaction time (RT) is influenced by salience factors, such as target-distractor similarity, a finding usually regarded as evidence that attention is guided by preattentive mechanisms. However, RT is a measure that depends on multiple factors and allows only very indirect inferences about the underlying attentional mechanisms. The purpose of the present study was to determine the influence of salience factors on attentional guidance during VS by measuring attentional allocation directly. We studied attentional allocation using a dual covert VS task in which subjects had to 1) detect a target amongst several items and 2) report letters briefly flashed inside those items at different delays. As predicted, we show that parallel processes guide attention towards the most relevant item by virtue of both goal-directed and stimulus-driven factors, and we demonstrate that this attentional selection is a prerequisite for target detection. In addition, we show that when the target is characterized by two features (conjunction VS), the goal-directed effects of both features are initially combined into a single salience value, but at a later stage grouping phenomena interact with the salience computation and lead to the selection of a whole group of items. These results, in line with Guided Search Theory, show that efficient and rapid preattentive processes guide attention towards the most salient item, reducing the number of attentional shifts needed to find the target.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Attention*
  • Humans
  • Photic Stimulation
  • Task Performance and Analysis
  • Vision, Ocular*