How Problematic Writer-AI Interactions (Rather than Problematic AI) Hinder Writers' Idea Generation
Umarova, Khonzoda | Wise, Talia | Lyu, Zhuoer | Lee, Mina | Yang, Qian
Writing about a subject enriches writers' understanding of that subject. This cognitive benefit of writing -- known as constructive learning -- is essential to how students learn across disciplines. However, does this benefit persist when students write with generative AI writing assistants? Prior research suggests the answer varies based on the type of AI: auto-complete systems tend to hinder ideation, for example, while assistants that pose Socratic questions facilitate it. This paper adds an additional perspective. Through a case study, we demonstrate that the impact of generative AI on students' idea development depends not only on the AI but also on the students and, crucially, the interactions between them. Students who proactively explored ideas gained new ideas from writing, regardless of whether they used auto-complete or Socratic AI assistants. Those who engaged in prolonged, mindless copyediting developed few ideas even with a Socratic AI. These findings suggest opportunities for designing AI writing assistants: not merely creating more thought-provoking AI, but also fostering more thought-provoking writer-AI interactions.
Task-specific Language Modeling for Selecting Peer-written Explanations
Mustafaraj, Eni (Wellesley College) | Umarova, Khonzoda (Wellesley College) | Turbak, Franklyn (Wellesley College) | Lee, Sohie (Wellesley College)
Students who are learning to program often write "buggy" code, especially when they are solving problems on paper. Such bugs can be used as a pedagogical device to engage students in reading and debugging tasks. One can take this a step further and require students to explain in writing how the bugs affect the code. Such written explanations can indicate students' current level of computational thinking and can concurrently be used in intelligent systems that leverage "learnersourcing", the process of generating course material for other learners. In this paper, we discuss how to combine learning analytics techniques and artificial intelligence (AI) algorithms to help an intelligent system distinguish between strong and weak textual explanations.