The days of gauging learning success with simple metrics like enrollment, attendance, or course completion are long gone! Today, what matters is whether your institution successfully delivers its curriculum, and whether students get what they expected from a course. In other words, the metric that matters most is student academic outcomes. How can you ensure that your courses meet expectations? The answer is Big Data.
High schools around the country are increasingly turning to external, for-profit providers for "online credit recovery." These courses, taken on a computer, offer students who have failed a course a second chance to earn credits they need for graduation, whether after school, in the summer, or during the school year. In some districts, it's an important part of efforts to raise graduation rates, as we wrote about in our Graduation Rates project last year. Today, the first large-scale, randomized controlled trial of student performance in these courses is out from the American Institutes for Research, and the news is not great. AIR followed 1,224 freshmen in the Chicago public schools, randomly assigned in the summers of 2011 and 2012 to retake second-semester algebra either face-to-face or on a computer.
Ramesh, Arti (University of Maryland, College Park) | Goldwasser, Dan (University of Maryland, College Park) | Huang, Bert (University of Maryland, College Park) | Daumé III, Hal (University of Maryland, College Park) | Getoor, Lise (University of California, Santa Cruz)
Maintaining and cultivating student engagement is critical for learning. Understanding factors affecting student engagement will help in designing better courses and improving student retention. The large number of participants in massive open online courses (MOOCs), and the data collected from their interaction with the MOOC, open up avenues for studying student engagement at scale. In this work, we develop a framework for modeling and understanding student engagement in online courses based on student behavioral cues. Our first contribution is the abstraction of student engagement types as latent representations, which we use in a probabilistic model to connect student behavior with course completion. We demonstrate that the latent formulation for engagement helps in predicting student survival across three MOOCs. Next, in order to initiate better instructor interventions, we need to be able to predict student survival early in the course. We demonstrate that we can predict student survival early in the course reliably using the latent model. Finally, we perform a closer quantitative analysis of user interaction with the MOOC and identify student activities that are good indicators for survival at different points in the course.
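The abstract's two-step idea, abstracting behavioral cues into latent engagement types and then connecting those types to course completion, can be illustrated with a minimal stand-in. The authors' actual probabilistic model is not reproduced here; this sketch substitutes simple k-means clustering for the latent representation and plain logistic regression for the survival link, on entirely synthetic behavioral features (all names and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly behavioral cues per student:
# columns = [lectures_watched, quizzes_attempted, forum_posts]
n = 300
behavior = rng.poisson(lam=[4.0, 2.0, 1.0], size=(n, 3)).astype(float)

# --- Step 1: abstract behavior into latent engagement types ---
# (k-means stand-in for the paper's latent probabilistic formulation)
def kmeans(X, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

types = kmeans(behavior, k=3)

# --- Step 2: connect latent type to course completion ("survival") ---
# Synthetic label, loosely correlated with total activity.
survive = (behavior.sum(1) + rng.normal(0, 2, n) > 7).astype(float)

onehot = np.eye(3)[types]                 # latent engagement type as features
X = np.hstack([onehot, np.ones((n, 1))])  # plus a bias column
w = np.zeros(X.shape[1])
for _ in range(500):                      # plain gradient descent on log loss
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - survive) / n

pred = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(float)
accuracy = (pred == survive).mean()
```

Note that predicting from the latent type alone (rather than raw counts) is what makes the engagement abstraction interpretable: each learned type can be inspected and named after the behaviors of its cluster.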
A novel machine learning model could help predict mortality and neurological outcomes post-cardiac arrest, according to a new Johns Hopkins study. Presented at the Society of Critical Care Medicine's 49th Annual Critical Care Congress in Orlando, FL, study results indicate the new model demonstrated significantly improved prediction capabilities compared to the reference APACHE model. "The objectives of our study were to first predict the neurological outcome and mortality at discharge using data only from the first 24 hours of ICU admission, and the second objective was to determine whether utilizing physiologic time series (PTS) data, specifically just features from the bedside monitoring data, is useful in terms of model performance," said lead investigator Hanbiehn Kim, MBE, of Johns Hopkins University, during his presentation. Using the Philips eICU database, which includes over 200,000 patients from 208 hospitals, Kim and colleagues from Johns Hopkins Hospital extracted data on cardiac arrest patients who were mechanically ventilated. Of note, this database includes PTS data from patient bedside bio-monitors that recorded heart rate, oxygen saturation, blood pressure, and respiratory rate at 5-minute intervals.
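The study's feature pipeline is not public in this excerpt, but the general shape of the task, turning 24 hours of 5-minute-interval vitals into per-patient features for a classifier, can be sketched. Everything below (the summary statistics chosen, the vital-sign names, the synthetic values) is an assumption for illustration, not the study's actual method:

```python
import numpy as np

# First 24 h of bedside monitoring at 5-min intervals -> 288 samples per vital.
SAMPLES = 24 * 60 // 5

rng = np.random.default_rng(2)
# Hypothetical vitals for one patient (names and distributions are invented).
vitals = {
    "heart_rate": rng.normal(85, 10, SAMPLES),
    "spo2":       rng.normal(96, 2, SAMPLES),
    "map":        rng.normal(75, 8, SAMPLES),
    "resp_rate":  rng.normal(18, 3, SAMPLES),
}

def pts_features(series):
    """Common summary statistics used to flatten a physiologic time series."""
    s = np.asarray(series, dtype=float)
    return {
        "mean": s.mean(),
        "std": s.std(),
        "min": s.min(),
        "max": s.max(),
        "last": s[-1],
        "slope": np.polyfit(np.arange(len(s)), s, 1)[0],  # linear trend
    }

features = {name: pts_features(v) for name, v in vitals.items()}
# One flat vector per patient, ready for any downstream classifier.
feature_vector = np.array([val for f in features.values() for val in f.values()])
```

With four vitals and six statistics each, every patient is reduced to a 24-dimensional vector; stacking these across patients yields the design matrix that a mortality or neurological-outcome model would train on.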
Massive Open Online Courses (MOOCs) have received widespread attention for their potential to scale higher education, with multiple platforms such as Coursera, edX, and Udacity recently appearing. Despite their successes, a major problem faced by MOOCs is low completion rates. In this paper, we explore the accurate early identification of students who are at risk of not completing courses. We build predictive models weekly, over multiple offerings of a course. Furthermore, we envision student interventions that present meaningful probabilities of failure, enacted only for marginal students. To be effective, predicted probabilities must be both well-calibrated and smoothed across weeks. Based on logistic regression, we propose two transfer learning algorithms that trade off smoothness and accuracy by adding a regularization term to minimize the difference of failure probabilities between consecutive weeks. Experimental results on two offerings of a Coursera MOOC establish the effectiveness of our algorithms.
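The core mechanism the abstract describes, weekly logistic regressions coupled by a penalty on the change in predicted failure probability between consecutive weeks, can be sketched for two adjacent weeks. This is not the authors' exact objective or optimizer; it is a minimal illustration on synthetic data where `lam` controls the smoothness/accuracy trade-off:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 4

# Hypothetical weekly feature matrices (e.g. clickstream counts) and labels.
X1 = rng.normal(size=(n, d))                  # week w-1 features
X2 = X1 + 0.3 * rng.normal(size=(n, d))      # week w features (correlated)
y = (X1[:, 0] + X2[:, 1] + rng.normal(0, 0.5, n) > 0).astype(float)  # 1 = at risk

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w1, w2 = np.zeros(d), np.zeros(d)
lam = 1.0   # weight on the smoothness penalty sum_i (p2_i - p1_i)^2
lr = 0.1

for _ in range(1000):
    p1, p2 = sigmoid(X1 @ w1), sigmoid(X2 @ w2)
    diff = p2 - p1
    # gradient of: per-week logistic loss + lam * mean squared probability jump
    g1 = X1.T @ (p1 - y) / n - lam * X1.T @ (diff * p1 * (1 - p1)) / n
    g2 = X2.T @ (p2 - y) / n + lam * X2.T @ (diff * p2 * (1 - p2)) / n
    w1 -= lr * g1
    w2 -= lr * g2

p1, p2 = sigmoid(X1 @ w1), sigmoid(X2 @ w2)
mean_jump = np.abs(p2 - p1).mean()  # smoothed probabilities change little weekly
```

Raising `lam` shrinks `mean_jump` at some cost in per-week accuracy, which is exactly the trade-off the paper's regularization term exposes; setting `lam = 0` recovers two independent weekly classifiers.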