Handling Students Dropouts in an LLM-driven Interactive Online Course Using Language Models

Wang, Yuanchun, Fu, Yiyang, Yu, Jifan, Zhang-Li, Daniel, Zhang, Zheyuan, Yin, Joy Lim Jia, Wang, Yucheng, Zhou, Peng, Zhang, Jing, Liu, Huiqin

arXiv.org Artificial Intelligence

Interactive online learning environments, represented by Massive AI-empowered Courses (MAIC), leverage LLM-driven multi-agent systems to transform passive MOOCs into dynamic, text-based platforms with enhanced interactivity. This paper conducts an empirical study on a specific MAIC course to explore three research questions about dropouts in these interactive online courses: (1) What factors might lead to dropouts? (2) Can we predict dropouts? (3) Can we reduce dropouts? We analyze interaction logs to define dropouts and identify contributing factors. Our findings reveal strong links between dropout behaviors and textual interaction patterns. We then propose a course-progress-adaptive dropout prediction framework (CPADP) to predict dropouts with at most 95.4% accuracy. Based on this, we design a personalized email recall agent to re-engage at-risk students. Deployed in the live MAIC system with over 3,000 students, the approach has proved feasible and effective for students with diverse backgrounds.
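The CPADP framework itself is not reproduced here, but the core step it describes, scoring dropout risk from interaction-log features, can be sketched with a generic classifier. The feature names, synthetic data, and 0.5 threshold below are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of dropout-risk scoring from interaction-log features.
# Features and labels are synthetic; CPADP's actual inputs are not public here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Illustrative per-student features:
# [messages_sent, avg_message_len, sessions, days_since_last_login] (standardized)
X = rng.normal(size=(n, 4))
# Synthetic label: few messages plus a long absence tends to mean dropout
y = ((X[:, 0] < 0) & (X[:, 3] > 0)).astype(int)

clf = LogisticRegression().fit(X, y)
probs = clf.predict_proba(X)[:, 1]      # dropout-risk scores in [0, 1]
at_risk = np.where(probs > 0.5)[0]      # candidates for the recall email
```

In a course-progress-adaptive setting, a model like this would be refit (or reweighted) as the course advances, since the informative features early in a course differ from those near the end.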


Failure Risk Prediction in a MOOC: A Multivariate Time Series Analysis Approach

Ayady, Anass El, Devanne, Maxime, Forestier, Germain, Mawas, Nour El

arXiv.org Artificial Intelligence

MOOCs offer free and open access to a wide audience, but completion rates remain low, often due to a lack of personalized content. To address this issue, it is essential to predict learner performance in order to provide tailored feedback. Behavioral traces, such as clicks and events, can be analyzed as time series to anticipate learners' outcomes. This work compares multivariate time series classification methods to identify at-risk learners at different stages of the course (after 5, 10 weeks, etc.). The experimental evaluation, conducted on the Open University Learning Analytics Dataset (OULAD), focuses on three courses: two in STEM and one in the humanities and social sciences (SHS). Preliminary results show that the evaluated approaches are promising for predicting learner failure in MOOCs. The analysis also suggests that prediction accuracy is influenced by the amount of recorded interactions, highlighting the importance of rich and diverse behavioral data.
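The simplest multivariate-time-series baseline in this family is 1-nearest-neighbour with Euclidean distance over per-week activity channels. The sketch below uses synthetic data; the channel names are illustrative (OULAD records per-activity-type click counts, not these exact channels):

```python
# Minimal MTSC baseline: 1-NN with Euclidean distance over (weeks x channels)
# behavioural tensors. Data is synthetic, for illustration only.
import numpy as np

def nn1_predict(train_X, train_y, test_X):
    """train_X: (n, weeks, channels) arrays of activity counts."""
    preds = []
    for x in test_X:
        dists = np.sqrt(((train_X - x) ** 2).sum(axis=(1, 2)))
        preds.append(train_y[dists.argmin()])
    return np.array(preds)

rng = np.random.default_rng(1)
# 40 learners, 5 weeks, 3 channels (e.g. forum, content, quiz clicks)
active_ts = rng.poisson(8, size=(20, 5, 3))    # engaged learners
lapsed_ts = rng.poisson(2, size=(20, 5, 3))    # disengaged learners
X = np.concatenate([active_ts, lapsed_ts]).astype(float)
y = np.array([0] * 20 + [1] * 20)              # 1 = at-risk

preds = nn1_predict(X, y, X)  # sanity check on the training set itself
```

Evaluating "after 5, 10 weeks, etc." amounts to truncating the week axis of `X` at each checkpoint and refitting, which is how early-warning accuracy curves are typically produced.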


Bridging MOOCs, Smart Teaching, and AI: A Decade of Evolution Toward a Unified Pedagogy

Yuan, Bo, Hu, Jiazi

arXiv.org Artificial Intelligence

Over the past decade, higher education has evolved through three distinct paradigms: the emergence of Massive Open Online Courses (MOOCs), the integration of Smart Teaching technologies into classrooms, and the rise of AI-enhanced learning. Each paradigm is intended to address specific challenges in traditional education: MOOCs enable ubiquitous access to learning resources; Smart Teaching supports real-time interaction with data-driven insights; and generative AI offers personalized feedback and on-demand content generation. However, these paradigms are often implemented in isolation due to their disparate technological origins and policy-driven adoption. This paper examines the origins, strengths, and limitations of each paradigm, and advocates a unified pedagogical perspective that synthesizes their complementary affordances. We propose a three-layer instructional framework that combines the scalability of MOOCs, the responsiveness of Smart Teaching, and the adaptivity of AI. To demonstrate its feasibility, we present a curriculum design for a project-based course. The findings highlight the framework's potential to enhance learner engagement, support instructors, and enable personalized yet scalable learning.


Leveraging Graph Retrieval-Augmented Generation to Support Learners' Understanding of Knowledge Concepts in MOOCs

Abdelmagied, Mohamed, Chatti, Mohamed Amine, Joarder, Shoeb, Ain, Qurat Ul, Alatrash, Rawaa

arXiv.org Artificial Intelligence

Massive Open Online Courses (MOOCs) lack direct interaction between learners and instructors, making it challenging for learners to understand new knowledge concepts. Recently, learners have increasingly used Large Language Models (LLMs) to support them in acquiring new knowledge. However, LLMs are prone to hallucinations, which limits their reliability. Retrieval-Augmented Generation (RAG) addresses this issue by retrieving relevant documents before generating a response. However, the application of RAG across different MOOCs is limited by unstructured learning material. Furthermore, current RAG systems do not actively guide learners toward their learning needs. To address these challenges, we propose a Graph RAG pipeline that leverages Educational Knowledge Graphs (EduKGs) and Personal Knowledge Graphs (PKGs) to guide learners to understand knowledge concepts in the MOOC platform CourseMapper. Specifically, we implement (1) a PKG-based Question Generation method to recommend personalized questions for learners in context, and (2) an EduKG-based Question Answering method that leverages the relationships between knowledge concepts in the EduKG to answer learner-selected questions. To evaluate both methods, we conducted a study with three expert instructors on three different MOOCs in the MOOC platform CourseMapper. The results of the evaluation show the potential of Graph RAG to empower learners to understand new knowledge concepts in a personalized learning experience.
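The retrieval step of an EduKG-based QA method can be sketched as a relation lookup around the learner-selected concept, whose results then ground the LLM's answer. The concepts and relation names below are invented for illustration; CourseMapper's actual graph schema may differ:

```python
# Illustrative EduKG lookup: collect concepts related to a selected concept
# to use as retrieval context. Edge list and relation names are hypothetical.
EDUKG = [
    # (source concept, relation, target concept)
    ("backpropagation", "uses", "gradient descent"),
    ("backpropagation", "based_on", "chain rule"),
    ("neural network", "trained_by", "backpropagation"),
]

def related_concepts(edges, concept):
    """Return (other concept, relation) pairs touching `concept`."""
    related = []
    for src, rel, dst in edges:
        if src == concept:
            related.append((dst, rel))
        elif dst == concept:
            related.append((src, rel))
    return related

context = related_concepts(EDUKG, "backpropagation")
```

Feeding `context` into the generation prompt is what distinguishes Graph RAG from document-chunk RAG: the retrieved unit is a typed relationship, not a free-text passage, which constrains the answer to the course's own concept structure.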


Enhancing Collaborative Filtering-Based Course Recommendations by Exploiting Time-to-Event Information with Survival Analysis

Gharahighehi, Alireza, Ghinis, Achilleas, Venturini, Michela, Cornillie, Frederik, Vens, Celine

arXiv.org Artificial Intelligence

Massive Open Online Courses (MOOCs) are emerging as a popular alternative to traditional education, offering learners the flexibility to access a wide range of courses from various disciplines, anytime and anywhere. To enhance learner engagement, it is crucial to recommend courses that align with their preferences and needs. Course Recommender Systems (RSs) can play an important role in this by modeling learners' preferences based on their previous interactions within the MOOC platform. Time-to-dropout and time-to-completion in MOOCs, like other time-to-event prediction tasks, can be effectively modeled using survival analysis (SA) methods. In this study, we apply SA methods to improve collaborative filtering recommendation performance by considering time-to-event in the context of MOOCs. The findings underscore the potential of integrating SA methods with RSs to enhance personalization in MOOCs.

Keywords: recommendation systems, survival analysis, massive open online course, personalized learning, dropout
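The basic survival-analysis ingredient here, handling learners whose dropout was never observed (censoring), can be illustrated with a hand-rolled Kaplan-Meier estimator on synthetic data. How the paper combines SA outputs with collaborative filtering is not reproduced in this sketch:

```python
# Minimal Kaplan-Meier estimator for time-to-dropout, written from the
# textbook definition S(t) = prod over event times (1 - d_t / n_t).
# Durations and censoring flags below are synthetic.

def kaplan_meier(durations, observed):
    """Return {event time: S(t)}. `observed[i]` is 1 if dropout was seen,
    0 if the learner was censored (still enrolled or completed)."""
    survival, s = {}, 1.0
    for t in sorted(set(d for d, o in zip(durations, observed) if o)):
        at_risk = sum(1 for d in durations if d >= t)           # n_t
        events = sum(1 for d, o in zip(durations, observed)
                     if d == t and o)                            # d_t
        s *= 1 - events / at_risk
        survival[t] = s
    return survival

durations = [2, 3, 3, 5, 8, 8, 10, 12, 12, 12]   # weeks per learner
observed  = [1, 1, 0, 1, 1, 0, 1,  0,  0,  0]
surv = kaplan_meier(durations, observed)
```

Censoring is the reason ordinary regression on "weeks until dropout" is biased: learners who completed the course would otherwise look like very late dropouts.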


Perspective Chapter: MOOCs in India: Evolution, Innovation, Impact, and Roadmap

Das, Partha Pratim

arXiv.org Artificial Intelligence

With the largest population in the world and one of the highest enrolments in higher education, India needs efficient and effective means to educate its learners. India began focusing on open and digital education in the 1980s, and its efforts were escalated in 2009 through the NMEICT program of the Government of India. A study by the Government and FICCI in 2014 noted that India cannot meet its educational needs through capacity building in brick-and-mortar institutions alone. It was decided that ongoing MOOC projects under the umbrella of NMEICT would be further strengthened over its second (2017-21) and third (2021-26) phases. NMEICT now steers NPTEL and SWAYAM (India's MOOCs) and several digital learning projects including Virtual Labs, e-Yantra, Spoken Tutorial, FOSSEE, and the National Digital Library of India, the largest digital education library in the world. Further, India embraced its new National Education Policy in 2020 to strongly foster online education. In this chapter, we take a deep look into the evolution of MOOCs in India, its innovations, its current status and impact, and the roadmap for the next decade to address its challenges and grow. AI-powered MOOCs are an emerging opportunity for India to lead MOOCs worldwide.


Using Large Language Models for Automated Grading of Student Writing about Science

Impey, Chris, Wenger, Matthew, Garuda, Nikhil, Golchin, Shahriar, Stamer, Sarah

arXiv.org Artificial Intelligence

Assessing writing in large classes for formal or informal learners presents a significant challenge. Consequently, most large classes, particularly in science, rely on objective assessment tools such as multiple-choice quizzes, which have a single correct answer. The rapid development of AI has introduced the possibility of using large language models (LLMs) to evaluate student writing. An experiment was conducted using GPT-4 to determine if machine learning methods based on LLMs can match or exceed the reliability of instructor grading in evaluating short writing assignments on topics in astronomy. The audience consisted of adult learners in three massive open online courses (MOOCs) offered through Coursera. One course was on astronomy, the second was on astrobiology, and the third was on the history and philosophy of astronomy. The results should also be applicable to non-science majors in university settings, where the content and modes of evaluation are similar. The data comprised answers from 120 students to 12 questions across the three courses. GPT-4 was provided with total grades, model answers, and rubrics from an instructor for all three courses. In addition to evaluating how reliably the LLM reproduced instructor grades, the LLM was also tasked with generating its own rubrics. Overall, the LLM was more reliable than peer grading, both in aggregate and by individual student, and approximately matched instructor grades for all three online courses. The implication is that LLMs may soon be used for automated, reliable, and scalable grading of student science writing.


From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents

Yu, Jifan, Zhang, Zheyuan, Zhang-Li, Daniel, Tu, Shangqing, Hao, Zhanxin, Li, Rui Miao, Li, Haoxuan, Wang, Yuanchun, Li, Hanming, Gong, Linlu, Cao, Jie, Lin, Jiayin, Zhou, Jinchang, Qin, Fei, Wang, Haohua, Jiang, Jianxiao, Deng, Lijun, Zhan, Yisi, Xiao, Chaojun, Dai, Xusheng, Yan, Xuan, Lin, Nianyi, Zhang, Nan, Ni, Ruixin, Dang, Yang, Hou, Lei, Zhang, Yu, Han, Xu, Li, Manli, Li, Juanzi, Liu, Zhiyuan, Liu, Huiqin, Sun, Maosong

arXiv.org Artificial Intelligence

Since the first instances of online education, where courses were uploaded to accessible and shared online platforms, this form of scaling the dissemination of human knowledge to reach a broader audience has sparked extensive discussion and widespread adoption. Recognizing that personalized learning still holds significant potential for improvement, new AI technologies have been continuously integrated into this learning format, resulting in a variety of educational AI applications such as educational recommendation and intelligent tutoring. The emergence of intelligence in large language models (LLMs) has allowed for these educational enhancements to be built upon a unified foundational model, enabling deeper integration. In this context, we propose MAIC (Massive AI-empowered Course), a new form of online education that leverages LLM-driven multi-agent systems to construct an AI-augmented classroom, balancing scalability with adaptivity. Beyond exploring the conceptual framework and technical innovations, we conduct preliminary experiments at Tsinghua University, one of China's leading universities. Drawing from over 100,000 learning records of more than 500 students, we obtain a series of valuable observations and initial analyses. This project will continue to evolve, ultimately aiming to establish a comprehensive open platform that supports and unifies research, technology, and applications in exploring the possibilities of online education in the era of large model AI. We envision this platform as a collaborative hub, bringing together educators, researchers, and innovators to collectively explore the future of AI-driven online education.


Grading Massive Open Online Courses Using Large Language Models

Golchin, Shahriar, Garuda, Nikhil, Impey, Christopher, Wenger, Matthew

arXiv.org Artificial Intelligence

Massive open online courses (MOOCs) offer free education globally to anyone with a computer and internet access. Despite this democratization of learning, the massive enrollment in these courses makes it impractical for one instructor to assess every student's writing assignment. As a result, peer grading, often guided by a straightforward rubric, is the method of choice. While convenient, peer grading often falls short in terms of reliability and validity. In this study, we explore the feasibility of using large language models (LLMs) to replace peer grading in MOOCs. Specifically, we use two LLMs, GPT-4 and GPT-3.5, across three MOOCs: Introductory Astronomy, Astrobiology, and the History and Philosophy of Astronomy. To instruct LLMs, we use three different prompts based on the zero-shot chain-of-thought (ZCoT) prompting technique: (1) ZCoT with instructor-provided correct answers, (2) ZCoT with both instructor-provided correct answers and rubrics, and (3) ZCoT with instructor-provided correct answers and LLM-generated rubrics. Tested on 18 settings, our results show that ZCoT, when augmented with instructor-provided correct answers and rubrics, produces grades that are more aligned with those assigned by instructors compared to peer grading. Finally, our findings indicate a promising potential for automated grading systems in MOOCs, especially in subjects with well-defined rubrics, to improve the learning experience for millions of online learners worldwide.
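The second prompt variant described above (ZCoT with instructor-provided answers and rubrics) amounts to assembling the question, reference answer, rubric, and student response into a single prompt ending with a step-by-step cue. The wording and placeholders below are hypothetical, not the study's actual prompt text:

```python
# Illustrative assembly of a ZCoT grading prompt with an instructor answer
# and rubric. All strings here are invented examples, not the paper's prompts.

def build_zcot_prompt(question, correct_answer, rubric, student_answer):
    return (
        f"Question: {question}\n"
        f"Instructor's correct answer: {correct_answer}\n"
        f"Grading rubric: {rubric}\n"
        f"Student's answer: {student_answer}\n"
        "Let's think step by step, then assign a grade."
    )

prompt = build_zcot_prompt(
    "Why do stars twinkle?",
    "Atmospheric turbulence refracts starlight along a shifting path.",
    "2 pts: mentions the atmosphere; 1 pt: mentions refraction/turbulence.",
    "Because the air bends their light as it moves.",
)
```

The third variant in the study differs only in where the rubric comes from: the LLM is first asked to draft a rubric from the correct answer, and that generated rubric is substituted into the same slot.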


Visual Attention Analysis in Online Learning

Navarro, Miriam, Becerra, Álvaro, Daza, Roberto, Cobos, Ruth, Morales, Aythami, Fierrez, Julian

arXiv.org Artificial Intelligence

In this paper, we present an approach in the Multimodal Learning Analytics field. Within this approach, we have developed a tool to visualize and analyze eye movement data collected during learning sessions in online courses. The tool is named VAAD (an acronym for Visual Attention Analysis Dashboard). These eye movement data have been gathered using an eye-tracker and subsequently processed and visualized for interpretation. The purpose of the tool is to conduct a descriptive analysis of the data by facilitating its visualization, enabling the identification of differences and learning patterns among various learner populations. Additionally, it integrates a predictive module capable of anticipating learner activities during a learning session. Consequently, VAAD holds the potential to offer valuable insights into online learning behaviors from both descriptive and predictive perspectives.
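A core descriptive statistic behind dashboards like VAAD is dwell time per area of interest (AOI): mapping each eye-tracker sample to a screen region and accumulating time. The AOI layout, sampling rate, and data below are illustrative assumptions, not VAAD's actual configuration:

```python
# Simplified gaze aggregation: total dwell time (ms) per area of interest.
# AOI rectangles and the 50 Hz sampling rate are assumed for illustration.

SAMPLE_MS = 20  # one gaze sample every 20 ms (50 Hz tracker, assumed)
AOIS = {
    "slides": (0, 0, 800, 600),      # (x0, y0, x1, y1) in pixels
    "chat":   (800, 0, 1024, 600),
}

def dwell_times(gaze_points):
    """gaze_points: (x, y) screen coordinates; returns ms spent per AOI."""
    totals = {name: 0 for name in AOIS}
    for x, y in gaze_points:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                totals[name] += SAMPLE_MS
    return totals

gaze = [(100, 200)] * 40 + [(900, 300)] * 10  # mostly on the slides
times = dwell_times(gaze)
```

Comparing such per-AOI distributions across learner populations is exactly the kind of descriptive contrast the dashboard visualizes, while the predictive module would consume the same aggregates as features.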