Cell Localization and Counting Using Direction Field Map

IEEE J Biomed Health Inform. 2022 Jan;26(1):359-368. doi: 10.1109/JBHI.2021.3105545. Epub 2022 Jan 17.

Abstract

Automatic cell counting in pathology images is challenging due to blurred boundaries, low contrast, and overlap between cells. In this paper, we train a convolutional neural network (CNN) to predict a two-dimensional direction field map and then use it to localize individual cells for counting. Specifically, we define the direction field at each pixel in the cell regions (obtained by dilating the original cell-center annotations) as a two-dimensional unit vector pointing from the pixel to its corresponding cell center. Direction fields of adjacent pixels belonging to different cells have opposite directions departing from each other, while those within the same cell region point to the same center. This unique property is used to partition overlapping cells for localization and counting. To handle blurred boundaries and low-contrast cells, we set the direction field of background pixels to zero when generating the ground truth, so that adjacent pixels belonging to cells and background exhibit an obvious difference in the predicted direction field. To further address varying cell density and overlap, we adopt a geometry-adaptive (varying) radius for cells of different densities when generating the ground-truth direction field map, which guides the CNN model to separate cells of different densities as well as overlapping cells. Extensive experimental results on three widely used datasets (i.e., VGG Cell, CRCHistoPhenotype2016, and MBM datasets) demonstrate the effectiveness of the proposed approach.
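The ground-truth construction described above can be sketched as follows. This is a minimal, hypothetical NumPy implementation (not the authors' code): each pixel within a radius of its nearest annotated center stores the unit vector pointing toward that center, and background pixels stay zero. The function name `direction_field` and the fixed `radius` parameter are assumptions for illustration; the paper uses a geometry-adaptive radius that varies with local cell density.

```python
import numpy as np

def direction_field(shape, centers, radius=5.0):
    """Build a ground-truth direction field map from cell-center annotations.

    Pixels within `radius` of their nearest center (the dilated cell region)
    store the 2D unit vector pointing from the pixel to that center;
    background pixels are left as zeros, as described in the paper.
    Note: a fixed radius is used here for simplicity; the paper adopts a
    geometry-adaptive radius per cell.
    """
    h, w = shape
    field = np.zeros((h, w, 2), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys, xs], axis=-1).astype(np.float32)   # (h, w, 2)
    centers = np.asarray(centers, dtype=np.float32)           # (n, 2)
    # Distance from every pixel to every annotated center.
    d = np.linalg.norm(coords[:, :, None, :] - centers[None, None, :, :], axis=-1)
    nearest = d.argmin(axis=-1)       # index of the closest center per pixel
    dist = d.min(axis=-1)             # distance to that center
    vec = centers[nearest] - coords   # vector from pixel toward its center
    norm = np.maximum(dist[..., None], 1e-6)  # avoid division by zero at centers
    mask = dist <= radius             # dilated cell region; elsewhere stays zero
    field[mask] = (vec / norm)[mask]
    return field
```

Note how the key property in the abstract emerges directly: two adjacent pixels assigned to different nearest centers get unit vectors pointing away from each other, while pixels on a cell/background border differ sharply because the background side is exactly zero.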

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Humans
  • Neural Networks, Computer*