Generative AI in Education: From Foundational Insights to the Socratic Playground for Learning
Hu, Xiangen, Xu, Sheng, Tong, Richard, Graesser, Art
This paper explores the synergy between human cognition and Large Language Models (LLMs), highlighting how generative AI can drive personalized learning at scale. We discuss parallels between LLMs and human cognition, emphasizing both the promise of and new perspectives on integrating AI systems into education. After examining challenges in aligning technology with pedagogy, we review AutoTutor, one of the earliest Intelligent Tutoring Systems (ITSs), and detail its successes, limitations, and unfulfilled aspirations. We then introduce the Socratic Playground, a next-generation ITS that uses advanced transformer-based models to overcome AutoTutor's constraints and provide personalized, adaptive tutoring. To illustrate its evolving capabilities, we present a JSON-based tutoring prompt that systematically guides learner reflection while tracking misconceptions. Throughout, we underscore the importance of placing pedagogy at the forefront, ensuring that technology's power is harnessed to enhance teaching and learning rather than overshadow it.
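The abstract describes the JSON-based tutoring prompt only at a high level. As an illustration of what such a prompt might look like, here is a minimal sketch; all field names and values are hypothetical, not taken from the Socratic Playground itself:

```python
import json

# Hypothetical structure for a JSON-based Socratic tutoring prompt.
# Field names are illustrative only; the system's actual schema may differ.
tutoring_prompt = {
    "role": "socratic_tutor",
    "topic": "Newton's third law",
    "strategy": {
        "questioning": "open-ended",       # guide reflection rather than lecture
        "feedback": "hint-before-answer",  # escalate hints before revealing answers
    },
    "learner_state": {
        "misconceptions": [],   # populated and tracked across dialogue turns
        "mastered_concepts": [],
    },
}

# Serialize for inclusion in an LLM request.
prompt_text = json.dumps(tutoring_prompt, indent=2)
print(prompt_text)
```

Keeping learner state (misconceptions, mastered concepts) inside the prompt object is one plausible way to let a stateless LLM track misconceptions across turns.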
A Conversational Intelligent Agent for Career Guidance and Counseling
Hampton, Andrew (University of Memphis) | Rus, Vasile (University of Memphis) | Andrasik, Frank (University of Memphis) | Nye, Benjamin (University of Southern California) | Graesser, Art (University of Memphis)
Navigating a career constitutes one of life’s most enduring challenges, particularly within a unique organization like the US Navy. While the Navy has numerous resources for guidance, accessing and identifying key information sources across the many existing platforms can be challenging for sailors (e.g., determining the appropriate program or point of contact, developing an accurate understanding of the process, and even recognizing the need for planning itself). Focusing on intermediate goals, evaluations, education, certifications, and training is quite demanding, even before considering their cumulative long-term implications. These challenges come on top of more general personal issues, such as financial difficulties and homesickness during prolonged periods at sea. We present the preliminary construction of a conversational intelligent agent designed to provide a user-friendly, adaptive environment that recognizes user input pertinent to these issues and provides guidance to appropriate resources within the Navy. User input from “counseling sessions” is linked, using advanced natural language processing techniques, to our framework of Navy training and education standards, promotion protocols, and organizational structure, producing feedback on resources and recommendations sensitive to user history and stated career goals. The proposed innovative technology monitors sailors’ career progress, proactively triggering sessions before major career milestones or when performance drops below Navy expectations, by using a mixed-initiative design. System-triggered sessions involve positive feedback and informative dialogues (using existing Navy career guidance protocols). The intelligent agent also offers counseling for personal problems, triggering targeted dialogues designed to gather more information, offer tailored suggestions, and provide referrals to appropriate resources or to a human counselor when in-depth counseling is warranted.
This software, currently in alpha testing, has the potential to serve as a centralized information hub, engaging and encouraging sailors to take ownership of their career paths in the most efficient way possible, benefiting both individuals and the Navy as a whole.
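The mixed-initiative triggering described above (sessions initiated before major milestones or when performance drops below expectations) can be sketched as a simple rule; the threshold values and field names here are assumptions for illustration, not details of the Navy system:

```python
from datetime import date, timedelta

# Illustrative mixed-initiative trigger: the system initiates a counseling
# session when a career milestone is near or an evaluation score falls
# below expectations. Window and floor values are hypothetical.
MILESTONE_WINDOW = timedelta(days=30)
PERFORMANCE_FLOOR = 3.0  # assumed minimum evaluation score

def should_trigger_session(next_milestone: date,
                           latest_eval_score: float,
                           today: date) -> bool:
    milestone_near = (next_milestone - today) <= MILESTONE_WINDOW
    below_expectations = latest_eval_score < PERFORMANCE_FLOOR
    return milestone_near or below_expectations

# Example: a milestone three weeks away triggers a session
# even with a satisfactory evaluation score.
print(should_trigger_session(date(2024, 6, 21), 3.5, date(2024, 6, 1)))
```

In a real deployment the rule would be one input among several, combined with the user-initiated ("mixed-initiative") path in which sailors start sessions themselves.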
Malleability of Students’ Perceptions of an Affect-Sensitive Tutor and Its Influence on Learning
D'Mello, Sidney (University of Notre Dame) | Graesser, Art (University of Memphis)
We evaluated an affect-sensitive version of AutoTutor, a dialogue-based ITS that simulates human tutors. While the original AutoTutor is sensitive to students’ cognitive states, the affect-sensitive tutor (Supportive tutor) also responds to students’ affective states (boredom, confusion, and frustration) with empathetic, encouraging, and motivational dialogue moves that are accompanied by appropriate emotional expressions. We conducted an experiment that compared the Supportive and Regular (non-affective) tutors over two 30-minute learning sessions with respect to perceived effectiveness, fidelity of cognitive and emotional feedback, engagement, and enjoyment. The results indicated that, irrespective of tutor, students’ ratings of engagement, enjoyment, and perceived learning decreased across sessions, but these ratings were not correlated with actual learning gains. In contrast, students’ perceptions of how closely the computer tutors resembled human tutors increased across sessions; these perceptions were related to the quality of tutor feedback and were a powerful predictor of learning, and the increase was greater for the Supportive tutor. Implications of our findings for the design of affect-sensitive ITSs are discussed.