Dual-Channel Adaptive Scale Hypergraph Encoders With Cross-View Contrastive Learning for Knowledge Tracing

IEEE Trans Neural Netw Learn Syst. 2024 Apr 23:PP. doi: 10.1109/TNNLS.2024.3386810. Online ahead of print.

Abstract

Knowledge tracing (KT) predicts learners' future performance from their historical responses and has become an essential task in intelligent tutoring systems. Most deep-learning-based methods model learners' knowledge states via recurrent neural networks (RNNs) or attention mechanisms. Recently emerging graph neural networks (GNNs) help KT models capture pairwise relationships such as question-skill and question-learner, but they ignore the non-pairwise, complex higher-order information among responses. In addition, a hidden vector encoded by a single channel struggles to represent multigranularity knowledge states. To tackle these problems, we propose a novel KT model named dual-channel adaptive scale hypergraph encoders with cross-view contrastive learning (HyperKT). Specifically, we design an adaptive scale hyperedge distillation component that generates knowledge-aware and pattern-aware hyperedges reflecting non-pairwise higher-order features among responses. We then propose dual-channel hypergraph encoders, consisting of a simplified hypergraph convolution network and a collaborative hypergraph convolution network, to capture multigranularity knowledge states from global and local state hypergraphs. To strengthen the supervisory signal in the state hypergraphs, we introduce a cross-view contrastive learning mechanism that operates between the state hypergraph views and their transformed line-graph views. Extensive experiments on three real-world datasets demonstrate the superior performance of HyperKT over state-of-the-art (SOTA) methods.
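To make the hypergraph-encoder idea concrete, the sketch below implements one layer of the standard degree-normalized hypergraph convolution (X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Θ with uniform hyperedge weights). The abstract does not specify the exact form of HyperKT's simplified or collaborative convolutions, so this is an illustrative stand-in, not the paper's method; the function name, shapes, and the toy incidence matrix are all assumptions for demonstration.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One degree-normalized hypergraph convolution layer (illustrative;
    not the paper's exact 'simplified' or 'collaborative' variant).
    X:     (n_nodes, d_in)  node features, e.g. response embeddings
    H:     (n_nodes, n_edges) incidence matrix, H[v, e] = 1 if node v in hyperedge e
    Theta: (d_in, d_out) learnable projection
    """
    Dv = H.sum(axis=1)  # node degrees
    De = H.sum(axis=0)  # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    # propagate node features through hyperedges, then back to nodes
    A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)  # ReLU nonlinearity

# toy example: 4 responses, 2 hyperedges (e.g. one knowledge-aware,
# one pattern-aware grouping, purely hypothetical)
rng = np.random.default_rng(0)
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 0]], dtype=float)
X = rng.standard_normal((4, 8))
Theta = rng.standard_normal((8, 4))
out = hypergraph_conv(X, H, Theta)
print(out.shape)  # (4, 4)
```

A dual-channel design in this spirit would run two such layers (with different normalizations or aggregation rules) over the global and local state hypergraphs and combine their outputs into one multigranularity state representation.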