Embedding cognitive framework with self-attention for interpretable knowledge tracing

Sci Rep. 2022 Oct 20;12(1):17536. doi: 10.1038/s41598-022-22539-9.

Abstract

Recently, deep neural network-based cognitive models such as deep knowledge tracing have been introduced into the fields of learning analytics and educational data mining. Despite the accurate predictive performance of such models, it is challenging to interpret their behavior and obtain intuitive insight into latent student learning states. To address these challenges, this paper proposes a new learner modeling framework named EAKT, which embeds a structured cognitive model into a transformer. In this way, EAKT not only achieves excellent prediction of learning outcomes but also depicts students' knowledge states at a multi-dimensional knowledge component (KC) level. By performing fine-grained analysis of the student learning process, the proposed framework provides more explanatory learner models for designing and implementing intelligent tutoring systems. EAKT is verified experimentally. The performance experiments show that EAKT better predicts future student performance (more than 2.6% higher than the baseline method on two of three real-world datasets). The interpretability experiments demonstrate that the student knowledge states obtained by EAKT are closer to ground truth than those of other models, meaning that EAKT more accurately traces changes in students' knowledge state.
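The core mechanism the abstract refers to, self-attention over a student's interaction history with a causal mask so that each prediction uses only past interactions, can be illustrated with a minimal NumPy sketch. This is not the EAKT architecture itself (which additionally embeds a structured cognitive model); the embedding dimension, sequence length, and random weights below are hypothetical placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention with a causal mask:
    the representation at step t attends only to interactions 1..t,
    as required when tracing a student's evolving knowledge state."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    T = X.shape[0]
    future = np.triu(np.ones((T, T), dtype=bool), k=1)  # positions after t
    scores[future] = -1e9                               # mask out the future
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
T, d = 5, 8                              # 5 interactions, embedding dim 8 (toy sizes)
X = rng.normal(size=(T, d))              # hypothetical interaction embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
H = causal_self_attention(X, Wq, Wk, Wv)
print(H.shape)  # (5, 8): one contextualized state per interaction
```

A downstream head (in knowledge-tracing models, typically a sigmoid over a linear projection of `H`) would then score the probability of answering the next question correctly; the causal mask guarantees that perturbing a later interaction cannot change earlier states.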

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Attention
  • Cognition
  • Humans
  • Knowledge*
  • Learning
  • Neural Networks, Computer*