Using deep reinforcement learning to speed up collective cell migration

BMC Bioinformatics. 2019 Nov 25;20(Suppl 18):571. doi: 10.1186/s12859-019-3126-5.

Abstract

Background: Collective cell migration is a significant and complex phenomenon that underlies many basic biological processes. The coordination between leader cells and follower cells affects the rate of collective cell migration. However, few studies have examined the impact of the stimulus signal released by the leader on the followers. Tracking cell movement in 3D time-lapse microscopy images provides an unprecedented opportunity to systematically study and analyze collective cell migration.

Results: Deep reinforcement learning algorithms have recently become very popular. In this paper, we use this method to train agents under varying numbers of cells and control signals. Through experiments with a single follower cell and with multiple follower cells, we conclude that the number of stimulation signals is proportional to the rate of collective cell movement. Such research provides a more diverse set of approaches for studying biological problems.
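As a purely illustrative sketch (the paper's own code, reward function, and network architecture are not given here), the following REINFORCE-style loop shows how a leader agent could learn when to emit a stimulus signal to its followers; the toy environment, reward, and layer sizes below are assumptions.

```python
# Hypothetical policy-gradient (REINFORCE) sketch of the leader-follower idea,
# NOT the authors' actual implementation: at each step the leader either emits
# a stimulus signal (action 1) or stays silent (action 0); stimulated followers
# move faster toward the leader, and the reward is the group's progress along x.
import torch
import torch.nn as nn

N_FOLLOWERS, STEPS, EPISODES = 3, 50, 200
policy = nn.Sequential(nn.Linear(2 + N_FOLLOWERS, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def rollout():
    leader = torch.zeros(2)
    followers = torch.randn(N_FOLLOWERS, 2)
    log_probs, rewards = [], []
    for _ in range(STEPS):
        # state: leader position plus each follower's distance to the leader
        dists = (followers - leader).norm(dim=1)
        state = torch.cat([leader, dists])
        action_dist = torch.distributions.Categorical(logits=policy(state))
        action = action_dist.sample()              # 1 = emit stimulus signal
        log_probs.append(action_dist.log_prob(action))
        speed = 0.3 if action.item() == 1 else 0.05
        followers = followers + speed * (leader - followers) / (dists.unsqueeze(1) + 1e-6)
        leader = leader + torch.tensor([0.1, 0.0])  # leader migrates along +x
        rewards.append(followers[:, 0].mean())      # collective advance along x
    return torch.stack(log_probs), torch.stack(rewards)

for ep in range(EPISODES):
    log_probs, rewards = rollout()
    # reward-to-go, normalized as a simple baseline
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])
    returns = (returns - returns.mean()) / (returns.std() + 1e-6)
    loss = -(log_probs * returns.detach()).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```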

Conclusion: Traditional research methods rely on real-life experiments, but as the number of cells grows the research process becomes prohibitively time consuming. Agent-based modeling is a robust framework that approximates cells as isotropic, elastic, and sticky objects. In this paper, an agent-based modeling framework is used to build a platform for simulating collective cell migration. The goal of the platform is to provide a biomimetic environment that demonstrates the importance of stimuli between the leading and following cells.
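To make the agent-based picture concrete, here is a minimal sketch of how cells might be modeled as isotropic discs with elastic repulsion on overlap and short-range adhesion ("stickiness"); all constants, force laws, and the overdamped update rule are illustrative assumptions rather than the paper's actual platform.

```python
# Hypothetical agent-based sketch, not the authors' simulation platform:
# cells are isotropic discs that repel elastically when overlapping and
# adhere weakly at short range; the leader actively migrates toward a
# target and drags the followers along through these pairwise forces.
import numpy as np

N, RADIUS, DT, STEPS = 10, 1.0, 0.05, 200
K_ELASTIC, K_ADHESION, ADHESION_RANGE = 5.0, 0.5, 1.5
LEADER_SPEED, TARGET = 0.8, np.array([20.0, 0.0])

rng = np.random.default_rng(0)
pos = rng.normal(scale=2.0, size=(N, 2))   # initial cell positions
leader = 0                                 # index of the leader cell

def pairwise_forces(pos):
    """Elastic repulsion inside 2*RADIUS, weak adhesion just outside it."""
    forces = np.zeros_like(pos)
    for i in range(N):
        for j in range(i + 1, N):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-9
            n = d / dist                   # unit vector from cell i to cell j
            overlap = 2 * RADIUS - dist
            if overlap > 0:                # elastic repulsion pushes i away from j
                f_i = -K_ELASTIC * overlap * n
            elif dist < 2 * RADIUS * ADHESION_RANGE:   # short-range adhesion
                f_i = K_ADHESION * (dist - 2 * RADIUS) * n
            else:
                continue
            forces[i] += f_i
            forces[j] -= f_i
    return forces

for _ in range(STEPS):
    # overdamped dynamics: displacement proportional to net force
    pos += DT * pairwise_forces(pos)
    # leader actively migrates toward the target
    to_target = TARGET - pos[leader]
    pos[leader] += DT * LEADER_SPEED * to_target / (np.linalg.norm(to_target) + 1e-9)

print("leader position:", pos[leader], "group centroid:", pos.mean(axis=0))
```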

Keywords: Collective migration; Deep reinforcement learning; Leader-follower mechanism.

Publication types

  • Evaluation Study

MeSH terms

  • Algorithms
  • Animals
  • Cell Movement*
  • Cells / cytology*
  • Computer Simulation
  • Humans
  • Time-Lapse Imaging / methods*