Nerve Segmentation with Deep Learning from Label-Free Endoscopic Images Obtained Using Coherent Anti-Stokes Raman Scattering

Biomolecules. 2020 Jul 8;10(7):1012. doi: 10.3390/biom10071012.

Abstract

We describe semantic segmentation with deep learning to extract nerves from label-free endoscopic images obtained using coherent anti-Stokes Raman scattering (CARS) for nerve-sparing surgery. We developed a CARS rigid endoscope to identify the exact location of peripheral nerves during surgery. Myelinated nerves are visualized with a CARS lipid signal in a label-free manner. Because the lipid distribution includes other tissues as well as nerves, nerve segmentation is required to achieve nerve-sparing surgery. We propose using U-Net with a VGG16 encoder as the deep learning model and pre-training it with fluorescence images, which visualize a lipid distribution similar to that in CARS images, before fine-tuning with a small dataset of CARS endoscopy images. For nerve segmentation, we used 24 CARS and 1,818 fluorescence nerve images of three rabbit prostates. We achieved label-free nerve segmentation with a mean accuracy of 0.962 and an F1 value of 0.860. Pre-training on fluorescence images significantly improved the performance of nerve segmentation in terms of the mean accuracy and F1 value (p < 0.05). Nerve segmentation of label-free endoscopic images will allow for safer endoscopic surgery, reducing dysfunction and improving prognosis after surgery.
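The sketch below illustrates the two-stage strategy described in the abstract: a U-Net with a VGG16 encoder is first pre-trained on fluorescence images and then fine-tuned on the small CARS endoscopy dataset, after which pixel-wise accuracy and F1 are computed. The paper does not state its implementation framework; this example assumes PyTorch with the segmentation_models_pytorch package, and the loader names, epoch counts, and learning rates are illustrative placeholders rather than the authors' configuration.

    # Minimal sketch (assumed framework: PyTorch + segmentation_models_pytorch).
    import torch
    import segmentation_models_pytorch as smp

    def build_model() -> torch.nn.Module:
        # U-Net decoder on a VGG16 encoder; single-channel lipid-contrast input,
        # single foreground class (nerve vs. background).
        return smp.Unet(
            encoder_name="vgg16",
            encoder_weights="imagenet",  # ImageNet initialization for the encoder
            in_channels=1,
            classes=1,
        )

    def train(model, loader, epochs, lr, device="cuda"):
        model.to(device).train()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = torch.nn.BCEWithLogitsLoss()  # binary nerve / background mask
        for _ in range(epochs):
            for images, masks in loader:
                images, masks = images.to(device), masks.float().to(device)
                optimizer.zero_grad()
                loss = loss_fn(model(images), masks)
                loss.backward()
                optimizer.step()
        return model

    @torch.no_grad()
    def evaluate(model, loader, device="cuda", threshold=0.5):
        # Pixel-wise accuracy and F1 accumulated over the whole test set.
        model.to(device).eval()
        tp = fp = fn = tn = 0
        for images, masks in loader:
            pred = (torch.sigmoid(model(images.to(device))) > threshold).cpu()
            truth = masks.bool()
            tp += (pred & truth).sum().item()
            fp += (pred & ~truth).sum().item()
            fn += (~pred & truth).sum().item()
            tn += (~pred & ~truth).sum().item()
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        f1 = 2 * tp / (2 * tp + fp + fn)
        return accuracy, f1

    # Illustrative two-stage schedule: pre-train on fluorescence images, then
    # fine-tune on the CARS endoscopy images with a smaller learning rate.
    # fluorescence_loader / cars_train_loader / cars_test_loader are assumed,
    # user-provided torch.utils.data.DataLoader objects (hypothetical names).
    # model = build_model()
    # model = train(model, fluorescence_loader, epochs=50, lr=1e-4)
    # model = train(model, cars_train_loader, epochs=50, lr=1e-5)
    # print(evaluate(model, cars_test_loader))

Keeping the ImageNet-initialized VGG16 encoder and adapting it first on the plentiful fluorescence images is the key transfer step the abstract reports; only the second, fine-tuning stage touches the 24 CARS images.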

Keywords: coherent anti-Stokes Raman scattering endoscopy; deep learning; nerve imaging; semantic segmentation.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Animals
  • Deep Learning
  • Endoscopy / instrumentation*
  • Image Interpretation, Computer-Assisted / methods*
  • Optical Imaging
  • Organ Sparing Treatments / instrumentation
  • Peripheral Nerves / diagnostic imaging*
  • Rabbits
  • Spectrum Analysis, Raman