Fully automatic image colorization based on semantic segmentation technology

PLoS One. 2021 Nov 30;16(11):e0259953. doi: 10.1371/journal.pone.0259953. eCollection 2021.

Abstract

Image colorization algorithms based on deep learning suffer from problems such as color bleeding and insufficient color. To address these problems, this paper recasts image colorization as an optimization of image semantic segmentation and proposes a fully automatic image colorization model based on semantic segmentation technology. First, we use an encoder as the local feature extraction network and VGG-16 as the global feature extraction network; the two branches operate independently but share the same low-level features. Then, a first fusion module merges the local and global features, and the fused result is fed into the semantic segmentation network and the color prediction network, respectively. Finally, the color prediction network obtains the semantic segmentation information of the image through a second fusion module and predicts the chrominance of the image based on it. Several sets of experiments show that the model's performance improves steadily as the amount of training data grows. Even in complex scenes, the model predicts reasonable colors and colorizes images correctly, and the output looks realistic and natural.
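To make the described two-branch architecture concrete, the following is a minimal PyTorch sketch of the pipeline outlined in the abstract: a shared low-level feature extractor, a local encoder branch, a VGG-16 global branch, a first fusion module feeding a segmentation head, and a second fusion module feeding a chrominance prediction head. All layer sizes, module names, the number of segmentation classes, and the Lab (ab-channel) output assumption are illustrative guesses, not the authors' published implementation.

```python
# Hedged sketch of the abstract's architecture; hyperparameters are assumptions.
import torch
import torch.nn as nn
import torchvision


class ColorizationNet(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        # Shared low-level features computed once from the grayscale input.
        self.low_level = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Local feature branch: a small encoder on top of the shared features.
        self.local_encoder = nn.Sequential(
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Global feature branch: VGG-16 backbone pooled to one descriptor.
        vgg = torchvision.models.vgg16(weights=None)
        self.global_encoder = nn.Sequential(
            nn.Conv2d(128, 3, 1),   # adapt shared features to VGG's 3-channel input
            vgg.features,
            nn.AdaptiveAvgPool2d(1),
        )
        self.global_fc = nn.Linear(512, 256)
        # First fusion module: merge local map with the broadcast global vector.
        self.fuse1 = nn.Conv2d(256 + 256, 256, 1)
        # Semantic segmentation head.
        self.seg_head = nn.Sequential(
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, 1),
        )
        # Second fusion module: inject segmentation information into the color branch.
        self.fuse2 = nn.Conv2d(256 + num_classes, 256, 1)
        # Color prediction head: outputs the two chrominance (ab) channels.
        self.color_head = nn.Sequential(
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 2, 1), nn.Tanh(),
        )

    def forward(self, gray):
        shared = self.low_level(gray)                # shared low-level features
        local = self.local_encoder(shared)           # local branch
        g = self.global_encoder(shared).flatten(1)   # global branch (VGG-16)
        g = self.global_fc(g)[:, :, None, None].expand(-1, -1, *local.shape[2:])
        fused = self.fuse1(torch.cat([local, g], dim=1))   # first fusion
        seg_logits = self.seg_head(fused)                  # segmentation prediction
        fused2 = self.fuse2(torch.cat([fused, seg_logits], dim=1))  # second fusion
        ab = self.color_head(fused2)                       # chrominance prediction
        return ab, seg_logits


if __name__ == "__main__":
    model = ColorizationNet()
    ab, seg = model(torch.randn(1, 1, 224, 224))
    print(ab.shape, seg.shape)  # (1, 2, 56, 56) and (1, 21, 56, 56)
```

In this reading, the segmentation logits are concatenated with the fused features before color prediction, so the chrominance output can be conditioned on object-level semantics, which is one plausible way to realize the second fusion module the abstract describes.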

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Color
  • Colorimetry / methods*
  • Coloring Agents / analysis
  • Coloring Agents / chemistry
  • Deep Learning
  • Image Processing, Computer-Assisted / methods*
  • Neural Networks, Computer
  • Photography / methods*
  • Semantic Web / trends
  • Semantics
  • Technology

Substances

  • Coloring Agents

Grants and funding

This project was funded by the National Natural Science Foundation of China under grants 61303093 and 61402278. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. None of the authors received salaries from any of the funders.