A.I. could spot students struggling in educational games


A new artificial intelligence model can better predict how much students are learning in educational games, researchers report.

The improved model makes use of an artificial intelligence (AI) training concept called multitask learning, an approach in which one model is asked to perform multiple tasks. The new model could help to improve both instruction and learning outcomes.

"In our case, we wanted the model to be able to predict whether a student would answer each question on a test correctly, based on the student's behavior while playing an educational game called Crystal Island," says coauthor Jonathan Rowe, a research scientist in North Carolina State University's Center for Educational Informatics (CEI).

"The standard approach for solving this problem looks only at overall test score, viewing the test as one task," Rowe says. "In the context of our multitask learning framework, the model has 17 tasks, because the test has 17 questions."

The researchers had gameplay and testing data from 181 students. The AI looks at each student's gameplay and at how each student answered Question 1 on the test. By identifying common behaviors of students who answered Question 1 correctly and common behaviors of students who got it wrong, the model learns to predict how a new student will answer Question 1. The AI performs this function for every question at the same time: the gameplay reviewed for a given student stays the same, but the AI looks at that behavior in the context of Question 2, Question 3, and so on.

This multitask approach made a difference. The researchers found that the multitask model was about 10% more accurate than other models that relied on conventional AI training methods.

"We envision this type of model being used in a couple of ways that can benefit students," says first author Michael Geden, a postdoctoral researcher. "It could be used to notify teachers when a student's gameplay suggests the student may need additional instruction. It could also be used to facilitate adaptive gameplay features in the game itself, for example by altering a storyline to revisit the concepts that a student is struggling with."

"Psychology has long recognized that different questions have different values," Geden says. "Our work here takes an interdisciplinary approach that marries this aspect of psychology with deep learning and machine learning approaches to AI."

"This also opens the door to incorporating more complex modeling techniques into educational software, particularly educational software that adapts to the needs of the student," says Andrew Emerson, coauthor of the paper and a PhD student.

The researchers will present their paper at the 34th AAAI Conference on Artificial Intelligence. Additional coauthors are from NC State and the University of Central Florida. Support for the work came from the National Science Foundation and from the Social Sciences and Humanities Research Council of Canada.

Source: NC State
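The multitask framing described above, one shared model of gameplay behavior feeding a separate prediction "head" per test question, can be sketched in code. Everything below (the feature count, network sizes, and the synthetic stand-in data) is an illustrative assumption, not the authors' actual model:

```python
# Minimal sketch of the multitask idea: a shared representation of
# gameplay features feeds 17 per-question heads, each predicting
# whether a student answers that question correctly.
# Feature counts, layer sizes, and the synthetic data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_STUDENTS = 181      # students in the study
N_FEATURES = 20       # hypothetical gameplay features per student
N_QUESTIONS = 17      # one task per test question
N_HIDDEN = 16

# Synthetic stand-in data: gameplay features and 0/1 answer correctness.
X = rng.normal(size=(N_STUDENTS, N_FEATURES))
Y = (rng.random(size=(N_STUDENTS, N_QUESTIONS)) < 0.5).astype(float)

# Shared trunk (W1) plus one sigmoid output head per question (W2 columns).
W1 = rng.normal(scale=0.1, size=(N_FEATURES, N_HIDDEN))
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_QUESTIONS))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = np.tanh(X @ W1)           # shared gameplay representation
    return H, sigmoid(H @ W2)     # per-question correctness probabilities

# Joint training: the loss sums binary cross-entropy over all 17 tasks,
# so gradients from every question shape the shared representation.
lr = 0.5
for step in range(200):
    H, P = forward(X)
    G = (P - Y) / N_STUDENTS                    # dL/dlogits for sigmoid + BCE
    grad_W2 = H.T @ G
    grad_W1 = X.T @ ((G @ W2.T) * (1 - H**2))   # backprop through tanh
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

_, P = forward(X)
print(f"per-question prediction shape: {P.shape}")  # one probability per question
```

The key design choice, as opposed to the single-task baseline the article mentions, is that all 17 heads share the trunk `W1`: behavior patterns useful for predicting one question also inform the representation used for the others.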