Semantic and structural image segmentation for prosthetic vision

PLoS One. 2020 Jan 29;15(1):e0227677. doi: 10.1371/journal.pone.0227677. eCollection 2020.

Abstract

Prosthetic vision is being applied to partially restore sight in visually impaired people by stimulating the retina. However, the phosphene images produced by current implants carry very limited information due to their poor resolution and lack of color and contrast. The ability to recognize objects and understand scenes in real environments is therefore severely restricted for prosthetic users. Computer vision can play a key role in overcoming these limitations by optimizing the visual information presented through the prosthesis. We present a new approach for building a schematic representation of indoor environments for simulated phosphene images. The proposed method combines several convolutional neural networks to extract and convey relevant information about the scene, such as the structural edges of the environment and the silhouettes of segmented objects. Experiments were conducted with normally sighted subjects using a Simulated Prosthetic Vision system. The results show good accuracy for object recognition and room identification tasks in indoor scenes with the proposed approach, compared to other image processing methods.
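To give a concrete sense of what Simulated Prosthetic Vision looks like computationally, the sketch below renders an input image as a coarse grid of Gaussian phosphenes. This is only an illustrative model of the general SPV technique, not the authors' pipeline; the grid size, Gaussian spread, and per-cell averaging are assumptions chosen for the example.

```python
import numpy as np

def simulate_phosphenes(image, grid=(8, 8), spread=0.35):
    """Render a grayscale image as a coarse grid of Gaussian phosphenes.

    image : 2-D float array with values in [0, 1]
    grid  : number of phosphenes as (rows, cols)
    spread: Gaussian sigma as a fraction of the cell size
    Returns a 2-D float array of the same shape as `image`.
    """
    h, w = image.shape
    gr, gc = grid
    ch, cw = h / gr, w / gc                       # phosphene cell size in pixels

    # Cell boundaries: the mean brightness of each cell drives its phosphene.
    ys = (np.arange(gr + 1) * ch).astype(int)
    xs = (np.arange(gc + 1) * cw).astype(int)

    out = np.zeros_like(image, dtype=float)
    yy, xx = np.mgrid[0:h, 0:w]
    for i in range(gr):
        for j in range(gc):
            cell = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            level = float(cell.mean())            # phosphene intensity
            cy = (ys[i] + ys[i + 1]) / 2          # phosphene center (row)
            cx = (xs[j] + xs[j + 1]) / 2          # phosphene center (col)
            sigma = spread * min(ch, cw)
            # Add one round, brightness-modulated Gaussian dot per cell.
            out += level * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                  / (2 * sigma ** 2))
    return np.clip(out, 0.0, 1.0)
```

In an SPV experiment, a schematic image (e.g., structural edges plus object silhouettes, as in the paper) would be fed through a renderer like this before being shown to normally sighted subjects, so that recognition performance can be measured under the bandwidth limits of a real implant.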

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Artificial Intelligence* / statistics & numerical data
  • Computer Simulation
  • Female
  • Healthy Volunteers
  • Humans
  • Image Processing, Computer-Assisted
  • Male
  • Middle Aged
  • Phosphenes / physiology
  • Photic Stimulation / methods
  • Psychophysics
  • Semantics
  • Vision Disorders / physiopathology
  • Vision Disorders / psychology
  • Vision Disorders / therapy
  • Visual Perception
  • Visual Prosthesis* / statistics & numerical data
  • Young Adult

Associated data

  • figshare/10.6084/m9.figshare.11493249.v4

Grants and funding

This work was supported by projects DPI2015-65962-R, RTI2018-096903-B-I00 (MINECO/FEDER, UE) and BES-2016-078426 (MINECO). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.