A Reinforcement Learning-Based Framework for Crowdsourcing in Massive Health Care Internet of Things

Big Data. 2022 Apr;10(2):161-170. doi: 10.1089/big.2021.0058. Epub 2021 Jul 28.

Abstract

Rapid advancements in the internet of things (IoT) are driving massive transformations of health care, one of the largest and most critical global industries. Recent pandemics, such as coronavirus disease 2019 (COVID-19), have intensified the demand for ubiquitous, preventive, and personalized health care that can be delivered to the public rapidly, at reduced risk and cost. Mobile crowdsourcing could potentially meet future massive health care IoT (mH-IoT) demands by enabling anytime, anywhere sensing and analysis of health-related data to tackle such pandemic situations. However, data reliability and availability are among the many challenges to realizing next-generation mH-IoT, especially during epidemics such as COVID-19. More intelligent and robust health care frameworks are therefore required to tackle such pandemics. Recently, reinforcement learning (RL) has proven its strength in providing intelligent data reliability and availability. The action-state learning procedure of RL-based frameworks enables the learning system to make increasingly effective use of the available information as time passes and data accumulate. In this article, we propose an RL-based crowd-to-machine (RLC2M) framework for mH-IoT, which leverages crowdsourcing and an RL model (Q-learning) to address health care information processing challenges. The simulation results show that the proposed framework converges rapidly, with accumulated rewards revealing the state of the sensing environment.
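The abstract names Q-learning as the RL model but gives no implementation details. The sketch below is only a minimal tabular Q-learning illustration on a toy crowdsensing scenario; the states ("low_coverage", etc.), actions ("recruit_workers", etc.), reward values, and hyperparameters are all hypothetical and are not taken from the RLC2M framework itself.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for a toy crowdsensing scenario.
# All states, actions, and rewards are hypothetical illustrations,
# not the RLC2M framework's actual model (not specified in the abstract).

STATES = ["low_coverage", "medium_coverage", "high_coverage"]   # hypothetical sensing-coverage states
ACTIONS = ["recruit_workers", "wait", "aggregate_reports"]      # hypothetical crowd-to-machine actions

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration probability

def step(state, action):
    """Toy environment: returns (next_state, reward). Purely illustrative."""
    if action == "recruit_workers":
        next_state = "medium_coverage" if state == "low_coverage" else "high_coverage"
        reward = 1.0
    elif action == "aggregate_reports":
        next_state = state
        reward = 5.0 if state == "high_coverage" else -1.0
    else:  # wait
        next_state = state
        reward = 0.0
    return next_state, reward

q_table = defaultdict(float)  # maps (state, action) -> estimated action value

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

for episode in range(500):
    state = "low_coverage"
    for _ in range(20):  # fixed-length episodes
        action = choose_action(state)
        next_state, reward = step(state, action)
        # Standard Q-learning update rule.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = next_state

for (s, a), v in sorted(q_table.items()):
    print(f"Q({s}, {a}) = {v:.2f}")
```

Running the sketch shows the learned Q-values stabilizing after a few hundred episodes, mirroring the kind of convergence of accumulated rewards reported in the article's simulations.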

Keywords: big data analytics; crowdsourcing; health care IoT; internet of medical things; massive data; reinforcement learning.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • COVID-19* / epidemiology
  • Crowdsourcing*
  • Delivery of Health Care
  • Humans
  • Internet of Things*
  • Reproducibility of Results