Sankaranarayanan

AAAI Conferences

Despite studies showing collaboration to be beneficial both for student satisfaction and for learning, isolation is the norm in MOOCs. Two problems limiting the success of collaboration in MOOCs are the lack of support for team formation and the lack of structured collaboration support. Without support and strategies for team formation, teams are not set up for success from the beginning. Synchronous collaboration without structured support has been demonstrated to produce significantly less learning than supported collaboration. This paper describes a deliberation-based team formation approach and a scripted collaboration framework for MOOCs aimed at addressing these problems, under the umbrella of Discussion Affordances for Natural Collaborative Exchange (DANCE), whose overarching focus is the enhancement of team-based MOOCs. These two examples of current work serve as illustrations of insights informing interventions in MOOCs.
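
To make the team formation idea concrete, here is a minimal sketch (not the DANCE implementation; student names, reply counts, and team size are invented for illustration) of deliberation-based grouping: students who interacted most with one another during a deliberation activity are placed on the same team.

```python
# Hypothetical sketch: greedily build teams that maximize within-team
# interaction observed during a prior deliberation activity.
def form_teams(students, reply_counts, team_size=3):
    """Group students so teammates exchanged many replies with each other."""
    def pair_score(a, b):
        return reply_counts.get((a, b), 0) + reply_counts.get((b, a), 0)

    unassigned = set(students)
    teams = []
    while unassigned:
        # Seed each team with the most interactive remaining student.
        seed = max(unassigned,
                   key=lambda s: sum(pair_score(s, o) for o in unassigned if o != s))
        team = [seed]
        unassigned.remove(seed)
        # Add the students who interacted most with the current team members.
        while len(team) < team_size and unassigned:
            nxt = max(unassigned, key=lambda s: sum(pair_score(s, m) for m in team))
            team.append(nxt)
            unassigned.remove(nxt)
        teams.append(team)
    return teams

students = ["ana", "ben", "cho", "dev", "eli", "fay"]
replies = {("ana", "ben"): 4, ("ben", "ana"): 3, ("cho", "dev"): 5,
           ("eli", "fay"): 2, ("ana", "cho"): 1}
print(form_teams(students, replies))
```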


Intelligent Conversational Agents as Facilitators and Coordinators for Group Work in Distributed Learning Environments (MOOCs)

AAAI Conferences

Artificially intelligent conversational agents have been demonstrated to positively impact team-based learning in classrooms and hold even greater potential for impact in the now widespread Massive Open Online Courses (MOOCs) if certain challenges can be overcome. These challenges include team formation and the coordination and management of group processes in teams whose members are distributed in both time and space. Our work begins with Bazaar, an architecture for orchestrating conversational-agent-based support for group learning, which has facilitated numerous successful studies of learning in the past, including some early investigations in MOOC contexts. In this paper, we briefly describe our experience designing, developing, and deploying agent-supported collaborative learning activities in three different MOOCs over three iterations. Findings from this iterative design process provide an empirical foundation for a reusable framework for facilitating similar activities in future MOOCs.
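
As an illustration of the kind of agent-orchestrated activity described above, a time-driven facilitation script can post prompts into a group chat as the session progresses. The sketch below is a toy example, not the Bazaar API; the prompts, delays, and post_message hook are assumptions made for demonstration.

```python
# Toy sketch of a time-driven facilitation script for a group chat.
# NOT the Bazaar API; all names and values here are illustrative.
import time

SCRIPT = [
    # (seconds_into_session, prompt) pairs; short delays for demonstration
    (0, "Welcome! Please introduce yourselves to your teammates."),
    (3, "Share your initial answer to the warm-up question."),
    (6, "Compare answers: where do you agree, and where do you differ?"),
]

def run_facilitator(post_message, clock=time.monotonic):
    """Post each scripted prompt once its offset into the session has elapsed."""
    start = clock()
    for delay, prompt in SCRIPT:
        while clock() - start < delay:
            time.sleep(0.5)
        post_message(prompt)

# Example: print prompts to stdout instead of sending them to a chat room.
run_facilitator(lambda msg: print("[agent]", msg))
```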


Coordinating Collaborative Chat in Massive Open Online Courses

arXiv.org Artificial Intelligence

An earlier study of a collaborative chat intervention in a Massive Open Online Course (MOOC) identified negative effects on attrition stemming from a requirement that students be matched with exactly one partner before beginning the activity. That study raised questions about how to orchestrate a collaborative chat intervention in a MOOC context so as to provide the benefit of synchronous social engagement without the coordination difficulties. In this paper we present a careful analysis of an intervention designed to overcome these coordination difficulties by welcoming students into the chat on a rolling basis as they arrive rather than requiring them to be matched with a partner before beginning. The results suggest that the impact is most positive when a student experiences a chat with exactly one partner rather than with more or fewer. A qualitative analysis of the chat data reveals differential experiences across these configurations, suggesting a potential explanation for the effect and raising questions for future research.
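
The coordination contrast at issue can be sketched as two lobby policies (a hypothetical toy model, not the study's actual system): strict pairing blocks each student until exactly one partner arrives, while rolling admission lets each arriving student join the currently open chat room immediately.

```python
# Toy sketch contrasting strict pairing with rolling admission to a chat.
# Class names, room capacity, and the student stream are illustrative only.
from collections import deque

class StrictPairingLobby:
    def __init__(self):
        self.waiting = deque()

    def arrive(self, student):
        if self.waiting:
            partner = self.waiting.popleft()
            return f"room({partner}, {student})"   # both enter together
        self.waiting.append(student)
        return None                                # student is left waiting

class RollingLobby:
    def __init__(self, room_capacity=4):
        self.capacity = room_capacity
        self.room = []

    def arrive(self, student):
        if len(self.room) >= self.capacity:
            self.room = []                         # open a fresh room
        self.room.append(student)
        return f"room{tuple(self.room)}"           # join immediately

for lobby in (StrictPairingLobby(), RollingLobby()):
    print(type(lobby).__name__, [lobby.arrive(s) for s in "ABCDE"])
```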


Linguistic Reflections of Student Engagement in Massive Open Online Courses

AAAI Conferences

While data from Massive Open Online Courses (MOOCs) offers the potential to gain new insights into the ways in which online communities can contribute to student learning, much of the richness of the data trace has yet to be mined. In particular, very little work has attempted fine-grained content analyses of student interactions in MOOCs. Survey research indicates the importance of student goals and intentions in keeping students involved in a MOOC over time. Automated fine-grained content analyses offer the potential to detect and monitor evidence of student engagement and how it relates to other aspects of their behavior; ultimately, these indicators reflect students' commitment to remaining in the course. As a methodological contribution, in this paper we investigate using computational linguistic models to measure learner motivation and cognitive engagement from the text of forum posts. We validate our techniques using survival models that evaluate the predictive validity of these variables in connection with attrition over time. We conduct this evaluation in three MOOCs focusing on very different types of learning materials. Prior work demonstrates that any participation in the discussion forums is a strong indicator of student commitment. Our methodology allows us to differentiate better among these students and to identify danger signs that a struggling student is in need of support, within a population whose interaction with the course offers the opportunity for effective support to be administered. Theoretical and practical implications are discussed.
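
The validation step described above can be illustrated with a small survival-analysis sketch: a Cox proportional-hazards model relating text-derived engagement variables to time until attrition. The feature names and data below are invented for illustration and are not the paper's variables; the sketch assumes pandas and the lifelines package are installed.

```python
# Hedged sketch of survival modeling for attrition with linguistic features.
# Feature names and values are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

# One row per learner: hypothetical per-post indicators averaged over their
# forum activity, plus observed duration and a dropout event indicator.
df = pd.DataFrame({
    "motivation_score":      [0.8, 0.2, 0.5, 0.9, 0.3, 0.4],
    "cognitive_engagement":  [0.7, 0.3, 0.6, 0.8, 0.5, 0.2],
    "weeks_active":          [8, 2, 5, 7, 6, 3],   # duration observed
    "dropped_out":           [0, 1, 1, 0, 0, 1],   # event indicator
})

# A small penalizer keeps the fit stable on this tiny illustrative sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="weeks_active", event_col="dropped_out")
cph.print_summary()   # hazard ratios per unit increase in each feature
```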


How Widely Can Prediction Models Be Generalized? An Analysis of Performance Prediction in Blended Courses

arXiv.org Machine Learning

Blended courses that mix in-person instruction with online platforms are increasingly popular in secondary education. These tools record a rich amount of data on students' study habits and social interactions. Prior research has shown that these metrics are correlated with students' performance in face-to-face classes. However, predictive models for blended courses are still limited and have not yet succeeded at early prediction or cross-class prediction, even for repeated offerings of the same course. In this work, we use data from two offerings of two different undergraduate courses to train and evaluate predictive models of student performance based on persistent student characteristics, including study habits and social interactions. We analyze the performance of these models on the same offering, on different offerings of the same course, and across courses to see how well they generalize. We also evaluate the models on different segments of the courses to determine how early reliable predictions can be made. This work tells us, in part, how much data is required to make robust predictions and whether cross-class data can be used to boost model performance. The results of this study will help us better understand how similar study habits, social activities, and teamwork styles are across semesters for students in each performance category. These trained models also provide an avenue to improve our existing support platforms so that struggling students can be supported early in the semester, with the goal of providing timely intervention.
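
The cross-offering evaluation idea can be sketched as training a performance classifier on one course offering and testing it on another. The sketch below uses synthetic placeholder features (study hours, forum posts, team messages) and scikit-learn; it is an illustration of the evaluation setup, not the paper's models or data.

```python
# Minimal sketch: same-offering vs. cross-offering evaluation of a
# performance classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_offering(n=200, shift=0.0):
    """Synthetic offering: [study_hours, forum_posts, team_messages] per student."""
    X = rng.normal(loc=[5 + shift, 10, 20], scale=[2, 4, 8], size=(n, 3))
    # Pass/fail label loosely driven by study habits and forum activity.
    y = (X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 2, n) > 7).astype(int)
    return X, y

X_a, y_a = make_offering()            # offering A of a course
X_b, y_b = make_offering(shift=1.0)   # offering B, slightly different cohort

model = LogisticRegression().fit(X_a, y_a)
print("same-offering accuracy: ", accuracy_score(y_a, model.predict(X_a)))
print("cross-offering accuracy:", accuracy_score(y_b, model.predict(X_b)))
```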