Plausible Reasoning about EL-Ontologies using Concept Interpolation

arXiv.org Artificial Intelligence

Description logics (DLs) are standard knowledge representation languages for modelling ontologies, i.e., knowledge about concepts and the relations between them. Unfortunately, DL ontologies are difficult to learn from data and time-consuming to encode manually. As a result, ontologies for broad domains are almost inevitably incomplete. In recent years, several data-driven approaches have been proposed for automatically extending such ontologies. One family of methods relies on characterizations of concepts that are derived from text descriptions. While such characterizations do not capture ontological knowledge directly, they encode information about the similarity between different concepts, which can be exploited for filling in the gaps in existing ontologies. To this end, several inductive inference mechanisms have already been proposed, but these have been defined and used in a heuristic fashion. In this paper, we instead propose an inductive inference mechanism which is based on a clear model-theoretic semantics, and can thus be tightly integrated with standard deductive reasoning. We particularly focus on interpolation, a powerful commonsense reasoning mechanism which is closely related to cognitive models of category-based induction. Apart from the formalization of the underlying semantics, as our main technical contribution we provide computational complexity bounds for reasoning in EL with this interpolation mechanism.
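
The abstract describes the interpolation mechanism only informally. As a rough, hypothetical sketch (our own geometric reading, not the paper's actual semantics or complexity machinery): if concept b lies between concepts a and c in a similarity space derived from text descriptions, and both a and c are subsumed by some concept C, then plausibly b is subsumed by C as well. All concept names and the toy embedding below are invented for illustration:

```python
import numpy as np

def is_between(b, a, c, tol=1e-6):
    """Return True if point b lies (approximately) on the line segment
    between points a and c in the similarity space."""
    ac = c - a
    denom = float(np.dot(ac, ac))
    if denom == 0.0:
        return bool(np.allclose(b, a, atol=tol))
    t = float(np.dot(b - a, ac)) / denom
    projection = a + t * ac
    return 0.0 <= t <= 1.0 and float(np.linalg.norm(b - projection)) <= tol

def interpolate(facts, points):
    """One round of interpolation: if a ⊑ C and c ⊑ C are known, and
    concept b lies between a and c in the space, plausibly add b ⊑ C."""
    inferred = set(facts)
    for (a, sup_a) in facts:
        for (c, sup_c) in facts:
            if sup_a != sup_c:
                continue
            for b in points:
                if b not in (a, c) and is_between(points[b], points[a], points[c]):
                    inferred.add((b, sup_a))
    return inferred

# Hypothetical toy embedding: 'fox' falls between 'cat' and 'dog'.
points = {"cat": np.array([0.0, 0.0]),
          "dog": np.array([2.0, 0.0]),
          "fox": np.array([1.0, 0.0])}
facts = {("cat", "Mammal"), ("dog", "Mammal")}
print(interpolate(facts, points))  # also contains ('fox', 'Mammal')
```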


Inductive Reasoning about Ontologies Using Conceptual Spaces

AAAI Conferences

Structured knowledge about concepts plays an increasingly important role in areas such as information retrieval. The available ontologies and knowledge graphs that encode such conceptual knowledge, however, are inevitably incomplete. This observation has led to a number of methods that aim to automatically complete existing knowledge bases. Unfortunately, most existing approaches rely on black-box models, e.g. formulated as global optimization problems, which makes it difficult to support the underlying reasoning process with intuitive explanations. In this paper, we propose a new method for knowledge base completion, which uses interpretable conceptual space representations and an explicit model for inductive inference that is closer to human forms of commonsense reasoning. Moreover, by separating the task of representation learning from inductive reasoning, our method is easier to apply in a wider variety of contexts. Finally, unlike optimization-based approaches, our method can naturally be applied in settings where various logical constraints between the extensions of concepts need to be taken into account.


Analogical Proportions

arXiv.org Artificial Intelligence

Analogy-making is at the core of human intelligence and creativity, with applications to such diverse tasks as commonsense reasoning, learning, language acquisition, and storytelling. This paper contributes to the foundations of artificial general intelligence by introducing an abstract algebraic framework of analogical proportions of the form `$a$ is to $b$ what $c$ is to $d$' in the general setting of universal algebra. This enables us to compare mathematical objects, possibly across different domains, in a uniform way, which is crucial for AI systems. The main idea is to define solutions to analogical equations in terms of generalizations and to derive abstract terms of concrete elements from a `known' source domain which can then be instantiated in an `unknown' target domain to obtain analogous elements. We extensively compare our framework with two prominent and recently introduced frameworks of analogical proportions from the literature in the concrete domains of sets, numbers, and words, and show that our framework yields strictly more reasonable solutions in all of these cases, which provides evidence for the applicability of our framework. In a broader sense, this paper is a first step towards an algebraic theory of analogical reasoning and learning systems, with potential applications to fundamental AI problems like commonsense reasoning and computational learning and creativity.
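
The framework itself is algebraic and domain-general; the following minimal Python sketch (our own illustration, not the paper's construction) shows what solving an analogical equation `$a$ is to $b$ what $c$ is to $x$' can look like in two of the concrete domains the abstract mentions, numbers and words:

```python
def solve_numeric(a, b, c):
    """Solve 'a is to b what c is to x' under the common arithmetic
    reading a - b = c - x, i.e. x = c - (a - b)."""
    return c - (a - b)

def solve_word(a, b, c):
    """Solve 'a is to b what c is to x' when a and b share a prefix and
    the leftover tail of a also ends c (e.g. walk : walked :: talk : x)."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    a_tail, b_tail = a[i:], b[i:]
    if c.endswith(a_tail):
        return c[:len(c) - len(a_tail)] + b_tail
    return None  # no solution under this simple reading

print(solve_numeric(2, 5, 10))               # 13
print(solve_word("walk", "walked", "talk"))  # talked
```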


Homogeneous Logical Proportions: Their Uniqueness and Their Role in Similarity-Based Prediction

AAAI Conferences

Given a 4-tuple of Boolean variables (a, b, c, d), logical proportions are modeled by a pair of equivalences relating similarity indicators (a ∧ b and ¬a ∧ ¬b), or dissimilarity indicators (a ∧ ¬b and ¬a ∧ b), pertaining to the pair (a, b), to the ones associated with the pair (c, d). Logical proportions are homogeneous when they are based on equivalences between indicators of the same kind. There are only 4 such homogeneous proportions, which respectively express that i) “a differs from b as c differs from d” (and “b differs from a as d differs from c”), ii) “a differs from b as d differs from c” (and “b differs from a as c differs from d”), iii) “what a and b have in common, c and d have it also”, iv) “what a and b have in common, neither c nor d has it”. We prove that each of these proportions is the unique Boolean formula (up to equivalence) that satisfies groups of remarkable properties, including a stability property w.r.t. a specific permutation of the terms of the proportion. The first one (i) is shown to be the only one to satisfy the standard postulates of an analogical proportion. The paper also studies how two analogical proportions can be combined into a new one. We then examine how homogeneous proportions can be used for diverse prediction tasks. We particularly focus on the completion of analogical-like series and on missing-value abduction problems. Finally, the paper compares our approach with other existing works on qualitative prediction based on ideas of betweenness, or of matrix abduction.
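
The four homogeneous proportions admit a direct truth-table reading. The sketch below transcribes the definitions above into Python (the uniqueness and stability results are of course proved in the paper, not here) and checks the well-known fact that each proportion holds for exactly 6 of the 16 Boolean 4-tuples:

```python
from itertools import product

def analogy(a, b, c, d):
    """(i) 'a differs from b as c differs from d'."""
    return (a and not b) == (c and not d) and (not a and b) == (not c and d)

def reverse_analogy(a, b, c, d):
    """(ii) 'a differs from b as d differs from c'."""
    return (a and not b) == (not c and d) and (not a and b) == (c and not d)

def paralogy(a, b, c, d):
    """(iii) 'what a and b have in common, c and d have it also'."""
    return (a and b) == (c and d) and (not a and not b) == (not c and not d)

def inverse_paralogy(a, b, c, d):
    """(iv) 'what a and b have in common, neither c nor d has'."""
    return (a and b) == (not c and not d) and (not a and not b) == (c and d)

# Each homogeneous proportion is true for exactly 6 of the 16 valuations.
for name, prop in [("analogy", analogy), ("reverse analogy", reverse_analogy),
                   ("paralogy", paralogy), ("inverse paralogy", inverse_paralogy)]:
    models = [t for t in product([False, True], repeat=4) if prop(*t)]
    print(f"{name}: {len(models)} models")
```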


Probabilistic Analogical Mapping with Semantic Relation Networks

arXiv.org Artificial Intelligence

These subprocesses are interrelated, with mapping considered to be the pivotal process (Gentner, 1983). Mapping may play a role in retrieval, as mapping a target analog to multiple potential source analogs stored in memory can help identify one or more that seem promising; and the correspondences computed by mapping support subsequent inference and schema induction. Thus, because of its centrality to analogical reasoning, the present paper focuses on the process of mapping between two analogs. We also consider the possible role that mapping may play in analog retrieval.

Computational Approaches to Analogy

Computational models of analogy have been developed in both artificial intelligence (AI) and cognitive science over more than half a century (for a recent review and critical analysis, see Mitchell, 2021). These models differ in many ways, both in their basic assumptions about the constraints that define a "good" analogy for humans and in the detailed algorithms that accomplish analogical reasoning. For our present purposes, two broad approaches can be distinguished. The first approach, which can be termed representation matching, combines mental representations of structured knowledge about each analog with a matching process that computes some form of relational similarity, yielding a set of correspondences between the elements of the two analogs. The structured knowledge about an analog is typically assumed to approximate the content of propositions expressed in predicate calculus; e.g., the instantiated relation "hammer hits nail" might be coded as hit(hammer, nail).
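
To make the representation-matching approach described in this excerpt concrete, here is a minimal, hypothetical brute-force matcher over exactly this kind of predicate-calculus encoding (our own illustration of the first approach; it is not the probabilistic mapping model this paper develops, and it requires identical predicate names):

```python
from itertools import permutations

# Each analog is a set of instantiated relations, e.g.
# "hammer hits nail" -> ("hit", "hammer", "nail").
source = {("hit", "hammer", "nail"), ("enter", "nail", "wood")}
target = {("hit", "racket", "ball"), ("enter", "ball", "net")}

def objects(analog):
    """Collect the objects (relation arguments) occurring in an analog."""
    return sorted({arg for rel in analog for arg in rel[1:]})

def score(mapping, source, target):
    """Count source relations whose image under the object mapping
    appears verbatim in the target."""
    return sum(1 for (pred, *args) in source
               if (pred, *[mapping[a] for a in args]) in target)

src_objs, tgt_objs = objects(source), objects(target)
# Brute force over all object correspondences (assumes equal object counts).
best = max((dict(zip(src_objs, perm))
            for perm in permutations(tgt_objs, len(src_objs))),
           key=lambda m: score(m, source, target))
print(best)  # {'hammer': 'racket', 'nail': 'ball', 'wood': 'net'}
```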