IsisWorld: An Open Source Commonsense Simulator for AI Researchers

AAAI Conferences

A metareasoning problem involves three parts: 1) a set of concrete problem domains; 2) reasoners to reason about the problems; and 3) metareasoners to reason about the reasoners. We believe that the metareasoning community would benefit from agreeing on the first two of these. To support this kind of collaboration, we offer an open source 3D simulator containing everyday, commonsense problems that take place in kitchens. This paper presents several arguments for using a simulator to solve commonsense problems. The paper concludes by describing future work in simulator-based unified generative benchmarks for AI.
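
A minimal sketch of the three-part decomposition described in this abstract: a concrete domain, an object-level reasoner acting in it, and a metareasoner monitoring that reasoner. The class names and kitchen details below are illustrative assumptions, not part of IsisWorld.

```python
# Illustrative three-part metareasoning setup (assumed names, not IsisWorld's API).

class KitchenDomain:
    """A concrete commonsense problem domain (e.g., a simulated kitchen)."""
    def observe(self):
        return {"toast": "burning", "tap": "running"}

class Reasoner:
    """Object-level reasoner: maps observations to actions."""
    def act(self, observation):
        if observation.get("toast") == "burning":
            return "turn_off_toaster"
        return "wait"

class Metareasoner:
    """Meta-level reasoner: monitors the object-level reasoner's behaviour."""
    def monitor(self, observation, action):
        # Flag a possible reasoning failure if an urgent cue is being ignored.
        if observation.get("tap") == "running" and action == "wait":
            return "reconsider: unattended running tap"
        return "ok"

domain, reasoner, meta = KitchenDomain(), Reasoner(), Metareasoner()
obs = domain.observe()
action = reasoner.act(obs)
print(action, "|", meta.monitor(obs, action))
```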


Toward an Integrated Metacognitive Architecture

AAAI Conferences

Researchers have studied problems in metacognition both in computers and in humans. In response, some have implemented models of cognition and metacognitive activity in various architectures to test and better define specific theories of metacognition. However, current theories and implementations suffer from numerous problems and a lack of detail. Here we illustrate these problems with two different computational approaches. The Meta-Cognitive Loop and Meta-AQUA both examine the metacognitive reasoning involved in monitoring and reasoning about failures of expectations, and both learn from such experiences. But neither system presents a full accounting of the variety of known metacognitive phenomena, and, as far as we know, no extant system does. The problem is that no existing cognitive architecture directly addresses metacognition. Instead, current architectures were initially developed to study narrower cognitive functions and were only later modified to include higher-level attributes. We claim that the solution is to develop a metacognitive architecture outright, and we begin to outline the structure that such a foundation might have.
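
As a rough illustration of the expectation-failure cycle that both the Meta-Cognitive Loop and Meta-AQUA exemplify (generate an expectation, note a violation, assess it, learn from it), a minimal sketch follows. The function names and toaster example are assumptions for illustration, not either system's actual interface.

```python
# Sketch of an expectation-failure monitoring loop (assumed structure).

def expect(state):
    # Object-level expectation: bread left in a toaster comes out toasted.
    return "toasted" if state["in_toaster"] else state["bread"]

def note(expected, observed):
    # Note a discrepancy between expectation and observation, if any.
    return None if expected == observed else (expected, observed)

def assess(failure):
    # Meta-level assessment: characterise the failure.
    expected, observed = failure
    return f"expectation '{expected}' violated by '{observed}'"

def guide(explanation, knowledge):
    # Learning step: record the anomaly so future expectations can change.
    knowledge.append(explanation)
    return knowledge

knowledge = []
state = {"bread": "plain", "in_toaster": True}
observed = "burnt"                      # what actually happened
failure = note(expect(state), observed)
if failure:
    knowledge = guide(assess(failure), knowledge)
print(knowledge)
```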


Decentralised Metacognition in Context-Aware Autonomic Systems: Some Key Challenges

AAAI Conferences

A distributed non-hierarchical metacognitive architecture is one in which all meta-level reasoning components are subject to meta-level monitoring and management by other components. Such metacognitive distribution can support the robustness of distributed IT systems in which humans and artificial agents are participants. However, robust metacognition also needs to be context-aware and use diversity in its reasoning and analysis methods. Both these requirements mean that an agent evaluates its reasoning within a “bigger picture” and that it can monitor this global picture from multiple perspectives. In particular, social context-awareness involves understanding the goals and concerns of users and organisations. In this paper, we first present a conceptual architecture for distributed metacognition with context-awareness and diversity. We then consider the challenges of applying this architecture to autonomic management systems in scenarios where agents must collectively diagnose and respond to errors and intrusions. Such autonomic systems need rich semantic knowledge and diverse data sources in order to provide the necessary context for their metacognitive evaluations and decisions.
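
A small sketch of non-hierarchical metacognitive distribution as characterised above, where every meta-level monitor is itself monitored by another component. The agent names and the ring arrangement are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch: no unmonitored root; each meta-level component is
# observed by some other component (assumed structure, not the paper's design).

class Agent:
    def __init__(self, name):
        self.name = name
        self.status = "ok"

    def meta_report(self):
        # Each agent's own meta-level self-assessment.
        return self.status

    def monitor(self, peer):
        # Agents also monitor the meta-level reports of their peers.
        return f"{self.name} sees {peer.name}: {peer.meta_report()}"

agents = [Agent("diagnoser"), Agent("intrusion_detector"), Agent("planner")]
agents[1].status = "degraded"

# Arrange monitoring in a ring so every component, including every
# meta-level monitor, is observed by another component.
for watcher, watched in zip(agents, agents[1:] + agents[:1]):
    print(watcher.monitor(watched))
```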


Abduction and Argumentation for Explainable Machine Learning: A Position Survey

arXiv.org Artificial Intelligence

This paper presents Abduction and Argumentation as two principled forms of reasoning, and fleshes out the fundamental role that they can play within Machine Learning. It reviews state-of-the-art work over the past few decades linking these two reasoning forms with machine learning, and from this it elaborates on how the explanation-generating role of Abduction and Argumentation makes them naturally fitting mechanisms for the development of Explainable Machine Learning and AI systems. Abduction contributes towards this goal by facilitating learning through the transformation, preparation, and homogenization of data. Argumentation, as a conservative extension of classical deductive reasoning, offers a flexible prediction and coverage mechanism for learning -- an associated target language for learned knowledge -- that explicitly acknowledges the need to deal, in the context of learning, with uncertain, incomplete and inconsistent data that are incompatible with any classically-represented logical theory.
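
A toy sketch of the two reasoning forms named here: abduction proposing causes that would explain an observation, and argumentation pruning hypotheses through an attack relation. The rule base and attacks are invented for illustration; the survey itself prescribes no particular implementation.

```python
# Abduction: given rules "cause -> effect", find causes that would explain
# an observed effect. (Toy rule base, assumed for illustration.)
rules = {"rain": "wet_grass", "sprinkler": "wet_grass", "sun": "dry_grass"}

def abduce(observation):
    return [cause for cause, effect in rules.items() if effect == observation]

# Argumentation: a simple attack relation between hypotheses; in this naive
# one-pass sweep a hypothesis survives if no rival hypothesis attacks it.
attacks = {("sprinkler", "rain")}   # e.g., a dry forecast undermines "rain"

def acceptable(hypotheses):
    return [h for h in hypotheses
            if not any((other, h) in attacks for other in hypotheses if other != h)]

hyps = abduce("wet_grass")
print("abduced:", hyps)               # ['rain', 'sprinkler']
print("accepted:", acceptable(hyps))  # ['sprinkler']
```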


Constructing and Revising Commonsense Science Explanations: A Metareasoning Approach

AAAI Conferences

Reasoning with commonsense science knowledge is an important challenge for Artificial Intelligence. This paper presents a system that revises its knowledge in a commonsense science domain by constructing and evaluating explanations. Domain knowledge is represented using qualitative model fragments, which are used to explain phenomena via model formulation. Metareasoning is used to (1) score competing explanations numerically along several dimensions and (2) evaluate preferred explanations for global consistency. Inconsistencies cause the system to favor alternative explanations and thereby change its beliefs. We simulate the belief changes of several students during clinical interviews about how the seasons change. We show that qualitative models accurately represent student knowledge and that our system produces and revises a sequence of explanations similar to those of the students.
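
A minimal sketch of the two metareasoning steps this abstract describes: numeric scoring of competing explanations along several dimensions, followed by a global consistency check against current beliefs. The dimensions, weights, and contradiction set below are illustrative placeholders, not the system's actual representation.

```python
# Score competing explanations, then prefer the best one that stays
# globally consistent with current beliefs (all values assumed).

explanations = {
    "tilt_of_axis":    {"coverage": 0.9, "simplicity": 0.6, "beliefs": {"axis_tilted"}},
    "distance_to_sun": {"coverage": 0.7, "simplicity": 0.8, "beliefs": {"orbit_elliptical"}},
}

weights = {"coverage": 0.7, "simplicity": 0.3}           # dimension weights
contradictions = {frozenset({"orbit_elliptical", "orbit_circular"})}
current_beliefs = {"orbit_circular"}

def score(expl):
    # Combine the numeric dimensions into one preference score.
    return sum(weights[d] * expl[d] for d in weights)

def consistent(expl):
    # Global consistency: no known contradiction appears in the merged beliefs.
    combined = expl["beliefs"] | current_beliefs
    return not any(pair <= combined for pair in contradictions)

ranked = sorted(explanations, key=lambda name: score(explanations[name]), reverse=True)
preferred = next((name for name in ranked if consistent(explanations[name])), None)
print(preferred)
```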