Deep Learning to Predict Student Outcomes

arXiv.org Machine Learning

The increasingly fast development cycle for online course content, along with the diverse student demographics in each online classroom, makes real-time prediction of student outcomes a topic of interest for both industrial research and practical needs. In this paper, we tackle the problem of real-time student performance prediction in an ongoing course using a domain adaptation framework: a system trained on labeled student outcome data from previous coursework that is meant to be deployed on another course. In particular, we introduce the GritNet architecture and develop an unsupervised domain adaptation method to transfer a GritNet trained on a past course to a new course without any student outcome labels. Our results on real Udacity student graduation predictions show that GritNet not only generalizes well from one course to another across different Nanodegree programs, but also improves real-time predictions, especially in the first few weeks when accurate predictions are most challenging.
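The abstract does not spell out how the unlabeled new course is used. As a rough illustration of one common unsupervised adaptation recipe (pseudo-labeling: the source-trained classifier labels the unlabeled target-course data and is then fine-tuned on its own confident predictions), here is a minimal PyTorch sketch. The names `source_model` and `target_loader`, the confidence threshold, and the optimizer settings are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of unsupervised domain adaptation by pseudo-labeling.
# This is NOT the paper's method; it illustrates one standard recipe.
import copy
import torch
import torch.nn.functional as F

def adapt_by_pseudo_labeling(source_model, target_loader, epochs=3,
                             threshold=0.9, lr=1e-4, device="cpu"):
    """Fine-tune a source-trained classifier on unlabeled target-course
    sequences, using its own confident predictions as pseudo-labels."""
    model = copy.deepcopy(source_model).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for _ in range(epochs):
        for batch in target_loader:              # batch: padded event sequences
            batch = batch.to(device)

            # 1) Predict on the unlabeled target batch with the current model.
            model.eval()
            with torch.no_grad():
                probs = torch.sigmoid(model(batch))            # (B,) graduation prob.
                confident = (probs > threshold) | (probs < 1 - threshold)
                pseudo_labels = (probs > 0.5).float()

            if confident.sum() == 0:
                continue                          # no confident examples in this batch

            # 2) Train on the confident subset as if the pseudo-labels were true.
            model.train()
            logits = model(batch[confident])
            loss = F.binary_cross_entropy_with_logits(
                logits, pseudo_labels[confident])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    return model
```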


GritNet: Student Performance Prediction with Deep Learning

arXiv.org Machine Learning

Student performance prediction - where a machine forecasts the future performance of students as they interact with online coursework - is a challenging problem. Reliable early-stage predictions of a student's future performance could be critical for timely educational interventions during a course. However, very few prior studies have explored this problem from a deep learning perspective. In this paper, we recast student performance prediction as a sequential event prediction problem and propose a new deep learning based algorithm, termed GritNet, which builds upon the bidirectional long short-term memory (BLSTM). Our results on real Udacity students' graduation predictions show that GritNet not only consistently outperforms the standard logistic-regression based method, but that the improvements are especially pronounced in the first few weeks when accurate predictions are most challenging.
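To make the "sequential event prediction with a BLSTM" framing concrete, the sketch below shows a minimal BLSTM classifier over integer-coded clickstream events in PyTorch. The embedding and hidden sizes, the treatment of padding, and the max-pooling readout are assumptions for illustration; they are not claimed to match GritNet's exact architecture.

```python
# Minimal sketch of a BLSTM-based sequential event classifier in the spirit of
# GritNet. Layer sizes and the pooling readout are illustrative assumptions.
import torch
import torch.nn as nn

class BLSTMEventClassifier(nn.Module):
    def __init__(self, num_event_types, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(num_event_types, embed_dim, padding_idx=0)
        self.blstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                             bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 1)    # graduation logit

    def forward(self, event_ids):
        # event_ids: (batch, seq_len) integer-coded clickstream events, 0 = padding
        x = self.embed(event_ids)                   # (batch, seq_len, embed_dim)
        h, _ = self.blstm(x)                        # (batch, seq_len, 2*hidden_dim)
        # Mask padding positions before pooling over the time dimension.
        mask = (event_ids != 0).unsqueeze(-1)
        h = h.masked_fill(~mask, float("-inf"))
        pooled, _ = h.max(dim=1)                    # global max pooling over time
        return self.head(pooled).squeeze(-1)        # (batch,) logits

# Example: score a batch of 4 students with event sequences of length 50.
model = BLSTMEventClassifier(num_event_types=500)
events = torch.randint(1, 500, (4, 50))
logits = model(events)                              # graduation prob. = sigmoid(logits)
```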


What's happened in MOOC Posts Analysis, Knowledge Tracing and Peer Feedbacks? A Review

arXiv.org Artificial Intelligence

Learning Management Systems (LMS) and Educational Data Mining (EDM) are two important parts of the online educational environment: the former is a centralised web-based information system where learning content is managed and learning activities are organised (Stone and Zheng, 2014), while the latter focuses on applying data mining techniques to the data so generated. As part of this work, we present a literature review of three major tasks of EDM (see Section 2), identifying shortcomings and existing open problems, and a Blumenfield chart (see Section 3). The consolidated set of papers and resources used is released at https://github.com/manikandan-ravikiran/cs6460-Survey. The coverage statistics and review matrix of the survey are shown in Figure 1 and Table 1, respectively. Acronym expansions are provided in Appendix Section 4.1.


Transfer Learning using Representation Learning in Massive Open Online Courses

arXiv.org Machine Learning

In a Massive Open Online Course (MOOC), predictive models of student behavior can support multiple aspects of learning, including instructor feedback and timely intervention. For ongoing courses, where student outcomes are not yet known, predictions must rely on models trained on historical data from previously offered courses. It is possible to transfer such models, but they often have poor prediction performance. One reason is that the features inadequately represent predictive attributes common to both courses. We present an automated transductive transfer learning approach that addresses this issue. It relies on a problem-agnostic, temporal organization of the MOOC clickstream data in which, for each student and each course, a set of specific MOOC event types is recorded for every time unit. The approach consists of two alternative transfer methods based on representation learning with auto-encoders: a passive approach using transductive principal component analysis and an active approach that uses a correlation alignment loss term. With these methods, we investigate the transferability of dropout prediction across similar and dissimilar MOOCs and compare against known methods. Results show improved model transferability and suggest that the methods are capable of automatically learning a feature representation that captures the common predictive characteristics of MOOCs.
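The "correlation alignment loss term" mentioned above is commonly implemented as a CORAL-style penalty that matches the second-order statistics of source and target feature batches. The sketch below shows that standard formulation; attaching it to an auto-encoder's code layer and the relative weighting against the reconstruction loss are assumptions for illustration, not details from the paper.

```python
# Sketch of a correlation alignment (CORAL) penalty: pushes the covariance of
# source and target feature batches together. How it is combined with the
# auto-encoder's reconstruction loss is an illustrative assumption.
import torch

def coral_loss(source_features, target_features):
    """Squared Frobenius distance between source and target feature covariances.
    Both inputs are (batch, feature_dim) activations, e.g. auto-encoder codes."""
    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)
    d = source_features.size(1)
    diff = covariance(source_features) - covariance(target_features)
    return (diff * diff).sum() / (4 * d * d)

# Hypothetical usage inside a training step (weight 1.0 is illustrative):
# loss = reconstruction_loss + 1.0 * coral_loss(encode(source_x), encode(target_x))
```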


Academic Performance Estimation with Attention-based Graph Convolutional Networks

arXiv.org Artificial Intelligence

Predicting students' academic performance powers educational technologies such as academic trajectory and degree planning, course recommender systems, and early warning and advising systems. Given a student's past data (such as grades in prior courses), the task is to predict the student's grades in future courses. Academic programs are structured so that prior courses lay the foundation for future courses: the knowledge a course requires is acquired across multiple prior courses, exhibiting complex relationships that are naturally modeled by graph structures. Traditional methods for student performance prediction usually neglect the underlying relationships between courses and how students acquire knowledge across them. In addition, traditional methods do not provide interpretations of their predictions, which are needed for decision making. In this work, we propose a novel attention-based graph convolutional network model for student performance prediction. We conduct extensive experiments on a real-world dataset obtained from a large public university. The experimental results show that our proposed model outperforms state-of-the-art approaches in grade prediction. The proposed model also shows strong accuracy in identifying students who are at risk of failing or dropping out, so that timely intervention and feedback can be provided.
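To illustrate the attention idea in this setting (weighting a student's prior courses when predicting a target course, so the weights themselves offer some interpretation), here is a small PyTorch sketch. The course embeddings, the concatenation-based scoring function, and the linear grade head are assumptions; this is not the paper's exact attention-based GCN.

```python
# Illustrative attention over a student's prior-course embeddings for grade
# prediction. Embeddings, scoring function, and head are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorCourseAttention(nn.Module):
    def __init__(self, course_dim, hidden_dim=64):
        super().__init__()
        self.proj = nn.Linear(course_dim, hidden_dim)
        self.score = nn.Linear(2 * hidden_dim, 1)    # attention score per prior course
        self.head = nn.Linear(hidden_dim, 1)         # predicted grade

    def forward(self, prior_courses, target_course):
        # prior_courses: (num_prior, course_dim) embeddings of courses already taken
        # target_course: (course_dim,) embedding of the course to be predicted
        h_prior = torch.tanh(self.proj(prior_courses))            # (num_prior, hidden)
        h_target = torch.tanh(self.proj(target_course))           # (hidden,)
        pairs = torch.cat([h_prior,
                           h_target.expand_as(h_prior)], dim=-1)  # (num_prior, 2*hidden)
        attn = F.softmax(self.score(pairs).squeeze(-1), dim=0)    # weight per prior course
        context = (attn.unsqueeze(-1) * h_prior).sum(dim=0)       # weighted knowledge state
        return self.head(context).squeeze(-1), attn               # grade estimate + weights

# Example: 5 prior courses with 32-dimensional course embeddings.
layer = PriorCourseAttention(course_dim=32)
grade, weights = layer(torch.randn(5, 32), torch.randn(32))
```

The returned attention weights indicate which prior courses most influenced the estimate, which is one simple way such a model can support the interpretability goal described in the abstract.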