Prediction of COVID-19 epidemic situation via fine-tuned IndRNN

PeerJ Comput Sci. 2021 Nov 12:7:e770. doi: 10.7717/peerj-cs.770. eCollection 2021.

Abstract

The COVID-19 pandemic is the most serious catastrophe since the Second World War. To predict the epidemic more accurately under the influence of policies, a framework based on the Independently Recurrent Neural Network (IndRNN) with fine-tuning is proposed to predict the development trends of confirmed cases and deaths in the United States, India, Brazil, France, Russia, China, and the world through late May 2021. The proposed framework consists of four main steps: data pre-processing, model pre-training and weight saving, weight fine-tuning, and trend prediction and validation. It is concluded that the proposed framework, based on IndRNN and fine-tuning, offers high speed, low complexity, and strong fitting and prediction performance. The applied fine-tuning strategy effectively reduces both the error (by up to 20.94%) and the time cost. For most countries, the MAPEs of the fine-tuned IndRNN model were less than 1.2% during the testing phase; the minimum MAPE and RMSE, obtained on Chinese deaths, were 0.05% and 1.17, respectively. According to the prediction and validation results, the MAPEs of the proposed framework were less than 6.2% in most cases, and it produced its lowest MAPE and RMSE values, 0.05% and 2.14, respectively, for deaths in China. Moreover, policies that play an important role in the development of COVID-19 have been summarized. Timely and appropriate measures can greatly reduce the spread of COVID-19; untimely and inappropriate government policies, lax regulations, and insufficient public cooperation are the reasons for the aggravation of epidemic situations. The code is available at https://github.com/zhhongsh/COVID19-Precdiction, and the predictions by the fine-tuned IndRNN model are available online (http://47.117.160.245:8088/IndRNNPredict).
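The defining property of the IndRNN underlying this framework is that each hidden unit carries only a scalar recurrent weight, so the update is h_t = relu(W x_t + u ⊙ h_(t-1) + b) with an element-wise (not matrix) recurrence. A minimal pure-Python sketch of this recurrence is given below; the dimensions, weights, and function names are illustrative and not taken from the paper's implementation.

```python
def indrnn_step(x_t, h_prev, W, u, b):
    """One IndRNN update: h_t = relu(W @ x_t + u * h_prev + b).

    Unlike a vanilla RNN's full recurrent matrix, each hidden unit i
    keeps a single scalar recurrent weight u[i], so hidden units
    evolve independently of one another across time steps.
    """
    h_t = []
    for i in range(len(u)):
        pre = sum(W[i][j] * x_t[j] for j in range(len(x_t)))
        pre += u[i] * h_prev[i] + b[i]
        h_t.append(max(0.0, pre))  # ReLU keeps the recurrence stable
    return h_t


def run_indrnn(xs, W, u, b):
    """Unroll the IndRNN over a sequence of input vectors."""
    h = [0.0] * len(u)
    for x_t in xs:
        h = indrnn_step(x_t, h, W, u, b)
    return h


# Toy example (illustrative only): 2 input features, 3 hidden units.
W = [[0.1, -0.2], [0.3, 0.1], [-0.1, 0.2]]
u = [0.9, 0.5, 0.7]          # per-unit scalar recurrent weights
b = [0.0, 0.0, 0.0]
xs = [[1.0, 0.5], [0.8, 0.4], [1.2, 0.6]]
h_final = run_indrnn(xs, W, u, b)
```

In the paper's fine-tuning strategy, the weights learned during pre-training (here, `W`, `u`, and `b`) would be saved and then further updated on a country's most recent case data before prediction.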

Keywords: COVID-19; Deep Learning; Fine-tuning; Gated-Recurrent-Unit; Independently Recurrent Neural Network; Long-Short-Term-Memory; Prediction Model.

Grants and funding

This work was supported by the National Key R&D Program of China under Grant 2018YFB0505400 and the National Natural Science Foundation of China under Grant 41871325. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.