Most intelligent learning support systems, in particular Intelligent Tutoring Systems (ITS), assume that "knowledge" is a predefined entity. This assumption applies not only to the knowledge that humanity possesses in a particular domain (e.g., our knowledge of mathematics) but also to the knowledge that particular individuals must have in order to handle given tasks or to productively understand a given domain. Thus an ITS typically has a knowledge base consisting, for example, of the concepts and procedures that high school students are supposed to need in order to carry out routine computational tasks, or that math majors are supposed to need in order to prove a theorem. The ITS interacts with users in such a way that the "needed" concepts and procedures become operative in them. What this comes down to, then, is a fundamentally mechanical model of knowledge transmission: no matter how open-ended and adaptive a typical ITS may be, what is inside the system is meant to end up inside the heads of the students.
The challenges of effective health risk communication are well known. This paper provides pointers to the health communication literature that discusses these problems. Tailored printed information, visual displays, and interactive multimedia have been proposed in the health communication literature as promising approaches. Online risk communication applications are proliferating on the Internet. However, the potential effectiveness of applications that use conventional computer technology is limited. We propose that the use of artificial intelligence, building upon research in Intelligent Tutoring Systems, might overcome these limitations.
The crisis in science education and the need for innovative computer-based learning environments have prompted us to develop a multi-agent system, Betty's Brain, that implements the learning-by-teaching paradigm. The design and implementation of the system, based on cognitive science and education research in constructivist, inquiry-based learning, involves an intelligent software agent, Betty, that students teach using concept map representations with a visual interface. Betty is intelligent not because she learns on her own, but because she can apply qualitative-reasoning techniques to answer questions that are directly related to what she has been taught. An extensive study in a fifth-grade classroom of a Nashville public school demonstrated impressive results in terms of improved motivation and learning gains. Reflection on the results has prompted us to develop a new version of the system that focuses on formative assessment and the teaching of self-regulated learning strategies to improve students' learning and promote better understanding and transfer.
Intelligent tutoring systems (ITS) can provide effective instruction, but learners do not always use such systems effectively. In the present study, high school students' action sequences with a mathematics ITS were machine-classified into five finite-state machines indicating guessing strategies, appropriate help use, and independent problem solving; over 90% of problem events were categorized. Students were grouped via cluster analyses based on self-reports of motivation. Motivation grouping predicted ITS strategic approach better than prior math achievement (as rated by classroom teachers). Learners who reported being disengaged in math were most likely to exhibit appropriate help use while working with the ITS, relative to average- and high-motivation learners. The results indicate that learners can readily report their motivational state and that these data predict how learners interact with the ITS.
We present LearningQ, a challenging educational question generation dataset containing over 230K document-question pairs. It includes 7K instructor-designed questions assessing knowledge concepts being taught and 223K learner-generated questions seeking in-depth understanding of the taught concepts. We show that, compared to existing datasets that can be used to generate educational questions, LearningQ (i) covers a wide range of educational topics and (ii) contains long and cognitively demanding documents for which question generation requires reasoning over the relationships between sentences and paragraphs. As a result, a significant percentage of LearningQ questions (~30%) require higher-order cognitive skills to solve (such as applying and analyzing), in contrast to existing question-generation datasets, which are designed mostly for the lowest cognitive skill level (i.e., remembering). To understand the effectiveness of existing question generation methods in producing educational questions, we evaluate both rule-based and deep neural network-based methods on LearningQ. Extensive experiments show that state-of-the-art methods which perform well on existing datasets cannot generate useful educational questions. This implies that LearningQ is a challenging testbed for the generation of high-quality educational questions and is worth further investigation. We open-source the dataset and our code at https://dataverse.mpi-sws.org/dataverse/icwsm18.