AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling
Cristina Conati, Kaska Porayska-Pomsta, Manolis Mavrikis
arXiv.org Artificial Intelligence
Interpretability of the underlying AI representations is a key raison d'être for Open Learner Modelling (OLM) -- a branch of Intelligent Tutoring Systems (ITS) research. OLMs provide tools for 'opening up' the AI models of learners' cognition and emotions for the purpose of supporting human learning and teaching. Over thirty years of research in ITS (also known as AI in Education) has produced important work that informs how AI can be used in education to best effect and, through OLM research, what considerations are necessary to make it interpretable and explainable for the benefit of learning. We argue that this work can provide a valuable starting point for a framework of interpretable AI, and as such is relevant to the application of both knowledge-based and machine learning systems in other high-stakes contexts beyond education.
Jun-30-2018
- Country:
- Europe
- Sweden > Stockholm
- Stockholm (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.04)
- North America
- Canada > British Columbia (0.04)
- United States > Florida
- Orange County > Orlando (0.04)
- Genre:
- Instructional Material (0.69)
- Research Report (0.64)
- Industry:
- Technology:
- Information Technology > Artificial Intelligence
- Cognitive Science (1.00)
- Machine Learning (1.00)
- Representation & Reasoning
- Agents (0.46)
- Expert Systems (0.66)