Deep Multitask Learning by Stacked Long Short-Term Memory for Predicting Personalized Blood Glucose Concentration

IEEE J Biomed Health Inform. 2023 Jan 2:PP. doi: 10.1109/JBHI.2022.3233486. Online ahead of print.

Abstract

Adverse glycemic events triggered by inaccurate insulin infusion in Type 1 diabetes (T1D) can lead to fatal complications. Predicting blood glucose concentration (BGC) from clinical health records is therefore critical for control algorithms in the artificial pancreas (AP) and for medical decision support. This paper presents a novel deep learning (DL) model incorporating multitask learning (MTL) for personalized blood glucose prediction. The network architecture consists of shared and clustered hidden layers. Two stacked long short-term memory (LSTM) layers form the shared hidden layers, which learn generalized features from all subjects. The clustered hidden layers comprise two dense layers that adapt to gender-specific variability in the data. Finally, subject-specific dense layers provide additional fine-tuning to personalized glucose dynamics, yielding accurate BGC predictions at the output. The OhioT1DM clinical dataset is used for training and performance evaluation of the proposed model. A detailed analytical assessment has been performed using root mean square error (RMSE) and mean absolute error (MAE), and a clinical assessment using Clarke error grid analysis (EGA), demonstrating the robustness and reliability of the proposed method. Consistently leading performance is achieved for the 30- (RMSE = 16.06 ±2.74, MAE = 10.64 ±1.35), 60- (RMSE = 30.89 ±4.31, MAE = 22.07 ±2.96), 90- (RMSE = 40.51 ±5.16, MAE = 30.16 ±4.10), and 120-minute (RMSE = 47.39 ±5.62, MAE = 36.36 ±4.54) prediction horizons (PH). In addition, the EGA confirms clinical feasibility by keeping more than 94% of BGC predictions in the clinically safe zones for PHs up to 120 minutes. Moreover, the improvement is established by benchmarking against state-of-the-art statistical, machine learning (ML), and DL methods.
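
To make the shared/clustered/personalized layering concrete, the following is a minimal sketch of how such a multitask network could be wired using the Keras functional API. It is not the authors' implementation: the layer widths, input window length, example subject IDs, and gender grouping are illustrative assumptions.

```python
# Sketch (assumed configuration, not the paper's code): a multitask network with
# a shared stacked-LSTM trunk, gender-cluster dense layers, and one output head
# per subject, as described in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW = 12          # number of past CGM samples per input window (assumed)
N_FEATURES = 1       # glucose only; insulin/meal channels could be appended
SUBJECT_GROUPS = {   # illustrative mapping of subject IDs to gender clusters
    "559": "female",
    "563": "male",
}

# Shared trunk: two stacked LSTM layers learn features common to all subjects.
inputs = layers.Input(shape=(WINDOW, N_FEATURES), name="cgm_window")
x = layers.LSTM(64, return_sequences=True, name="shared_lstm_1")(inputs)
x = layers.LSTM(64, name="shared_lstm_2")(x)

# Clustered layers: two dense layers per gender cluster, shared by all
# subjects assigned to that cluster.
cluster_layers = {
    g: (layers.Dense(32, activation="relu", name=f"cluster_{g}_dense1"),
        layers.Dense(32, activation="relu", name=f"cluster_{g}_dense2"))
    for g in set(SUBJECT_GROUPS.values())
}

# Personalized heads: a small dense layer plus a linear BGC output per subject.
outputs = []
for sid, group in SUBJECT_GROUPS.items():
    d1, d2 = cluster_layers[group]
    h = d2(d1(x))
    h = layers.Dense(16, activation="relu", name=f"subject_{sid}_dense")(h)
    outputs.append(layers.Dense(1, name=f"subject_{sid}_bgc")(h))

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="mse")  # MSE per subject-specific output
model.summary()
```

In this sketch the shared LSTM trunk and the cluster-level dense layers are reused across subjects, while only the final heads are subject-specific, which mirrors the shared/clustered/personalized split described above.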