A comparison of point-tracking algorithms in ultrasound videos from the upper limb

Biomed Eng Online. 2023 May 24;22(1):52. doi: 10.1186/s12938-023-01105-y.

Abstract

Tracking points in ultrasound (US) videos can be especially useful for characterizing tissues in motion. Tracking algorithms that analyze successive video frames, such as variations of Optical Flow and Lucas-Kanade (LK), exploit frame-to-frame temporal information to track regions of interest. In contrast, convolutional neural-network (CNN) models process each video frame independently of its neighbors. In this paper, we show that frame-to-frame trackers accumulate error over time. We propose three interpolation-like methods to combat error accumulation and show that all three reduce tracking errors in frame-to-frame trackers. On the neural-network side, we show that a CNN-based tracker, DeepLabCut (DLC), outperforms all four frame-to-frame trackers when tracking tissues in motion: DLC is more accurate and less sensitive to variations in the type of tissue movement. The only caveat we found with DLC stems from its non-temporal tracking strategy, which introduces jitter between consecutive frames. Overall, when tracking points in videos of moving tissue, we recommend DLC when accuracy and robustness across movements are the priority, and LK with the proposed error-correction methods for small movements when tracking jitter is unacceptable.
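To make the frame-to-frame idea concrete, the core of an LK-style tracker can be sketched as a single-level least-squares solve over a small window around the tracked point. This is an illustrative NumPy sketch under the standard brightness-constancy linearization, not the paper's implementation (which uses established OpenCV trackers); the function name `lk_step` and the window size are arbitrary choices for illustration.

```python
import numpy as np

def lk_step(prev, curr, pt, win=7):
    """Estimate where point `pt` (x, y) in frame `prev` moved to in frame
    `curr`, via a single-level Lucas-Kanade least-squares solve over a
    (2*win+1) x (2*win+1) window. Illustrative sketch only."""
    x, y = int(pt[0]), int(pt[1])
    P = prev.astype(float)
    C = curr.astype(float)
    # Spatial gradients of the previous frame (np.gradient returns
    # the derivative along axis 0 (rows, i.e., y) first, then axis 1 (x)).
    Iy, Ix = np.gradient(P)
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    # Brightness constancy, linearized: Ix*dx + Iy*dy = -(C - P).
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -(C[sl] - P[sl]).ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pt + d  # updated point estimate (x + dx, y + dy)
```

In a real tracker this step is applied with image pyramids and iterative refinement (as in OpenCV's pyramidal LK), and the output of one frame pair becomes the input point for the next, which is exactly why per-step errors can compound.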
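The error-accumulation argument can be illustrated with a toy simulation (synthetic numbers, not data from the paper): if each frame-to-frame match contributes a small independent error, the tracker's position error performs a random walk whose spread grows roughly with the square root of the number of frames, whereas a per-frame detector such as a CNN makes an independent error on each frame, so its error stays bounded. The error magnitude `sigma` and frame count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_trials, sigma = 200, 2000, 0.5  # arbitrary illustrative values

# Frame-to-frame tracking: each frame's small matching error is added to
# the previous estimate, so errors compound into a random walk over time.
step_err = rng.normal(0.0, sigma, size=(n_trials, n_frames))
f2f_err = np.cumsum(step_err, axis=1)
f2f_std = f2f_err[:, -1].std()  # grows like sigma * sqrt(n_frames)

# Per-frame (CNN-style) tracking: each frame is localized independently,
# so the final-frame error spread stays near sigma instead of drifting.
indep_err = rng.normal(0.0, sigma, size=(n_trials, n_frames))
indep_std = indep_err[:, -1].std()

print(f2f_std, indep_std)
```

The same simulation also hints at the trade-off in the abstract: the independent per-frame errors that keep the CNN from drifting are precisely what appear as frame-to-frame jitter.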

Keywords: Neural network tracking; OpenCV; Point tracking; Tracking correction; Ultrasound.

MeSH terms

  • Algorithms*
  • Motion
  • Neural Networks, Computer*
  • Ultrasonography
  • Upper Extremity / diagnostic imaging