Predicting glucose level with an adapted branch predictor

Comput Biol Med. 2022 Jun:145:105388. doi: 10.1016/j.compbiomed.2022.105388. Epub 2022 Mar 19.

Abstract

Background and objective: Diabetes mellitus manifests as prolonged elevated blood glucose levels resulting from impaired insulin production. Such high glucose levels sustained over a long period of time damage multiple internal organs. To mitigate this condition, researchers and engineers have developed the closed-loop artificial pancreas, consisting of a continuous glucose monitor and an insulin pump connected via a microcontroller or smartphone. A key problem, however, is how to accurately predict short-term future glucose levels in order to exert effective glucose-level control. Much work in the literature treats minimal prediction error as the key metric and therefore pursues complex prediction methods such as deep learning. Such an approach neglects other important design considerations: method complexity (which affects interpretability and safety), hardware requirements for low-power devices such as the insulin pump, the amount of input data required for training (potentially rendering a method infeasible for new patients), and the fact that very small improvements in accuracy may carry no significant clinical benefit.

Methods: We propose a novel low-complexity, explainable blood glucose prediction method derived from the Intel P6 branch predictor algorithm. Meta-Differential Evolution is used to determine the predictor parameters on the training splits of the benchmark datasets. We compare the new algorithm with a state-of-the-art deep-learning method for blood glucose level prediction.
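
The sketch below is not the authors' implementation; it is a minimal illustration of the family of technique the method is derived from: a two-level adaptive branch predictor of the kind used in the Intel P6, repurposed so that a shift register of recent outcomes (here, whether the glucose signal rose or fell) indexes a table of 2-bit saturating counters. The class name `P6StylePredictor`, the history length, and the toy trace are illustrative assumptions, not values or details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a P6-style two-level
# adaptive predictor repurposed to forecast the sign of the next glucose
# change. A history register of recent rise/fall outcomes indexes a table
# of 2-bit saturating counters. History length is an illustrative assumption.

class P6StylePredictor:
    def __init__(self, history_bits: int = 4):
        self.history_bits = history_bits
        self.history = 0                        # shift register of recent outcomes
        # One 2-bit saturating counter per history pattern,
        # initialised to "weakly rising" (2).
        self.counters = [2] * (1 << history_bits)

    def predict(self) -> bool:
        """Predict True if the next glucose sample is expected to rise."""
        return self.counters[self.history] >= 2

    def update(self, rose: bool) -> None:
        """Train on the observed outcome (True = glucose rose)."""
        c = self.counters[self.history]
        self.counters[self.history] = min(3, c + 1) if rose else max(0, c - 1)
        # Shift the observed outcome into the history register.
        mask = (1 << self.history_bits) - 1
        self.history = ((self.history << 1) | int(rose)) & mask


if __name__ == "__main__":
    # Toy usage on a synthetic glucose trace (mg/dL); a real pipeline would
    # feed CGM samples and map the predicted direction back to a level estimate.
    trace = [110, 112, 115, 119, 124, 122, 118, 115, 113, 114, 117, 121]
    predictor = P6StylePredictor()
    correct = 0
    for prev, cur in zip(trace, trace[1:]):
        rose = cur > prev
        correct += predictor.predict() == rose
        predictor.update(rose)
    print(f"directional accuracy on toy trace: {correct}/{len(trace) - 1}")
```

In the paper, free parameters of the adapted predictor are tuned with Meta-Differential Evolution on the training splits; the saturating-counter update above is the standard branch-predictor mechanism that keeps per-pattern state small and cheap to evaluate on low-power hardware.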

Results: The Blood Glucose Level Prediction Challenge benchmark dataset was utilised to evaluate the new method. On the official test split after training, the state-of-the-art deep-learning method predicted glucose levels 30 min ahead of the current time, with 96.3% of predicted glucose levels having a relative error below 30% (equivalent to the safe zone of the Surveillance Error Grid). Our simpler, interpretable approach extended the prediction horizon by a further 5 min, with 95.8% of predicted glucose levels across all patients having a relative error below 30%.
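
For clarity, the snippet below shows how a "fraction of predictions within 30% relative error" figure of the kind reported above can be computed. The relative-error definition |y_pred − y_ref| / y_ref and the sample numbers are assumptions of this sketch, not data or formulas taken from the paper.

```python
import numpy as np

def fraction_within_relative_error(y_ref, y_pred, tol: float = 0.30) -> float:
    """Fraction of predictions whose relative error w.r.t. the reference is below tol."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rel_err = np.abs(y_pred - y_ref) / y_ref
    return float(np.mean(rel_err < tol))

# Example: reference glucose values vs. 30-min-ahead predictions (mg/dL).
print(fraction_within_relative_error([100, 150, 200, 90], [110, 160, 130, 95]))
# -> 0.75 (three of the four predictions fall within 30% relative error)
```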

Conclusions: When predictive performance was assessed on the Blood Glucose Level Prediction Challenge benchmark dataset using Surveillance Error Grid metrics, we found that the new algorithm delivered comparable predictive accuracy while operating on the glucose-level signal alone and with considerably lower computational complexity.

Keywords: Blood glucose level; Computational costs; Deep learning; Interpretable models; Pattern learning; Time series forecasting.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Blood Glucose
  • Blood Glucose Self-Monitoring*
  • Diabetes Mellitus, Type 1*
  • Humans
  • Insulin

Substances

  • Blood Glucose
  • Insulin