Human-Centric eXplainable AI in Education
Subhankar Maity, Aniket Deroy
arXiv.org Artificial Intelligence
As artificial intelligence (AI) becomes more integrated into educational environments, how can we ensure that these systems are both understandable and trustworthy? The growing demand for explainability in AI systems is a critical area of focus. This paper explores Human-Centric eXplainable AI (HCXAI) in the educational landscape, emphasizing its role in enhancing learning outcomes, fostering trust among users, and ensuring transparency in AI-driven tools, particularly through the innovative use of large language models (LLMs). What challenges arise in the implementation of explainable AI in educational contexts? The paper outlines comprehensive frameworks for developing HCXAI systems that prioritize user understanding and engagement, ensuring that educators and students can interact effectively with these technologies. Furthermore, what steps can educators, developers, and policymakers take to create more effective, inclusive, and ethically responsible AI solutions in education? The paper provides targeted recommendations to address this question, highlighting the necessity of prioritizing explainability. By doing so, how can we leverage AI's transformative potential to foster equitable and engaging educational experiences that support diverse learners?

The rapid advancement of AI technologies has transformed various sectors, including education, by introducing innovative solutions that enhance teaching and learning experiences. In recent years, AI systems have increasingly been used for personalized learning, assessment, and feedback mechanisms (Maghsudi et al., 2021; Maity and Deroy, 2024a; Maity and Deroy, 2024b).
Oct-18-2024