
Towards an Appropriate Query, Key, and Value Computation for Knowledge Tracing

"Predict with the highest accuracy the probability of a user making a correct/incorrect answer to an unsolved question"



We apply the Transformer, a deep-learning architecture used mainly in natural language processing, to knowledge tracing, one of the most fundamental tasks in AI education. Riiid's Separated Self-Attentive Neural Knowledge Tracing (SAINT) model adapts the Transformer to the education domain: question (exercise) information feeds the encoder, while the user's response information feeds the decoder, which is why the architecture is called "separated".
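A minimal sketch of what this separated encoder/decoder layout could look like in PyTorch is shown below. The class name SAINTSketch and all hyperparameters (n_questions, d_model, number of heads and layers, sequence length) are illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn

class SAINTSketch(nn.Module):
    """Toy SAINT-style model: questions feed the encoder, responses feed the decoder."""

    def __init__(self, n_questions=10000, d_model=128, n_heads=8, n_layers=2, max_len=100):
        super().__init__()
        self.exercise_emb = nn.Embedding(n_questions, d_model)  # question IDs -> encoder input
        self.response_emb = nn.Embedding(3, d_model)             # 0/1 correctness plus a start token -> decoder input
        self.pos_emb = nn.Embedding(max_len, d_model)             # learned position embeddings
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, 1)  # per-position score for a correct answer

    def forward(self, questions, responses):
        # questions, responses: (batch, seq_len) integer tensors; responses are
        # shifted right by one step, with index 2 used as the start token.
        seq_len = questions.size(1)
        pos = torch.arange(seq_len, device=questions.device)
        enc_in = self.exercise_emb(questions) + self.pos_emb(pos)
        dec_in = self.response_emb(responses) + self.pos_emb(pos)
        # Upper-triangular (causal) mask so no position attends to future interactions.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(questions.device)
        h = self.transformer(enc_in, dec_in, src_mask=mask, tgt_mask=mask, memory_mask=mask)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # (batch, seq_len) correctness probabilities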


Given the history of a user's responses to mock test questions, SAINT predicts, for each of the more than 10,000 questions the user has not yet solved, the probability of a correct or incorrect answer, drawing on more than 200 million existing learning-behavior data points.
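As an illustration of that interface, the hypothetical snippet below reuses the SAINTSketch model defined above: the user's attempted questions and shifted correctness labels form the input sequences, an unseen question is appended, and the output at the last position is read as the predicted probability of a correct answer. The question IDs and responses are made up.

# Hypothetical history: three attempted questions and whether each was answered correctly.
history_q = torch.tensor([[12, 57, 903]])
history_r = torch.tensor([[1, 0, 1]])

# Score one unseen question by appending it to the sequence.
next_q = torch.tensor([[4321]])
questions = torch.cat([history_q, next_q], dim=1)   # (1, 4) question IDs
start = torch.tensor([[2]])                          # start token for the decoder
responses = torch.cat([start, history_r], dim=1)     # (1, 4) shifted correctness labels

model = SAINTSketch()
model.eval()
with torch.no_grad():
    p_correct = model(questions, responses)[0, -1]   # prediction for the unseen question
print(f"Predicted probability of a correct answer: {p_correct.item():.3f}")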

Our results show that SAINT outperforms all competing models and achieves state-of-the-art performance in knowledge tracing as measured by AUC, a commonly used statistic for model comparison, with a 1.8% improvement over the previous state-of-the-art model, Self-Attentive Knowledge Tracing (SAKT).