Toward Learning Joint Inference Tasks for IASS-MTS Using Dual Attention Memory With Stochastic Generative Imputation

IEEE Trans Neural Netw Learn Syst. 2023 Aug 31:PP. doi: 10.1109/TNNLS.2023.3305542. Online ahead of print.

Abstract

Irregularly, asynchronously, and sparsely sampled multivariate time series (IASS-MTS) are characterized by sparse and uneven time intervals and nonsynchronous sampling rates, posing significant challenges for machine learning models that must learn complex relationships within and across IASS-MTS to support various inference tasks. Existing methods typically either focus solely on single-task forecasting or chain imputation and classification through a separate preprocessing step, in which missing values are filled before the downstream classifier is applied. These methods often ignore valuable annotated labels or fail to discover meaningful patterns in unlabeled data. Moreover, separate prefilling may introduce errors due to noise in the raw records and thus degrade downstream prediction performance. To overcome these challenges, we propose the time-aware dual attention and memory-augmented network (DAMA) with stochastic generative imputation (SGI). Our model constructs a joint task learning architecture that unifies the imputation and classification tasks collaboratively. First, we design a new time-aware DAMA that accounts for irregular sampling rates, inherent data nonalignment, and sparse values in IASS-MTS data. The proposed network integrates both attention and memory to effectively analyze complex interactions within and across IASS-MTS for the classification task. Second, we develop the SGI network, which uses auxiliary information from sequence data to infer the missing observations in a time series. By balancing the joint tasks, our model facilitates interaction between them, leading to improved performance on both classification and imputation. Third, we evaluate our model on real-world datasets and demonstrate its superior performance in terms of imputation accuracy and classification results, outperforming the baselines.
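The joint-task balance described above can be sketched as a weighted sum of a masked imputation loss and a classification loss. This is only a minimal illustrative sketch: the helper names and the balancing weight `lam` below are our assumptions, not the paper's actual formulation.

```python
import numpy as np

def imputation_loss(x_true, x_pred, mask):
    """Mean squared error computed over observed entries only (mask == 1)."""
    diff = (x_true - x_pred) * mask
    return float(np.sum(diff ** 2) / max(np.sum(mask), 1))

def classification_loss(y_true, y_prob, eps=1e-12):
    """Cross-entropy between one-hot labels and predicted class probabilities."""
    return float(-np.mean(np.sum(y_true * np.log(y_prob + eps), axis=1)))

def joint_loss(x_true, x_pred, mask, y_true, y_prob, lam=0.5):
    """Hypothetical joint objective: lam weighs imputation vs. classification."""
    return lam * imputation_loss(x_true, x_pred, mask) + \
           (1.0 - lam) * classification_loss(y_true, y_prob)

# Toy example: a perfectly imputed, perfectly classified batch has near-zero loss.
x = np.array([[1.0, 2.0], [3.0, 4.0]])
m = np.ones_like(x)                      # all entries observed
y = np.array([[1.0, 0.0]])               # one-hot label
p = np.array([[1.0, 0.0]])               # confident correct prediction
total = joint_loss(x, x, m, y, p)
```

Tuning `lam` trades off how strongly the shared representation is shaped by reconstruction versus label supervision, which is the interaction the abstract attributes to balancing the joint tasks.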