
Analogical Reasoning Within a Conceptual Hyperspace

Goldowsky, Howard, Sarathy, Vasanth

arXiv.org Artificial Intelligence

We propose an approach to analogical inference that marries the neuro-symbolic computational power of complex-sampled hyperdimensional computing (HDC) with Conceptual Spaces Theory (CST), a promising theory of semantic meaning. CST sketches, at an abstract level, approaches to analogical inference that go beyond the standard predicate-based structure-mapping theories, but it does not describe how such an approach can be operationalized. We propose a concrete HDC-based architecture that computes several types of analogy classified by CST. We present preliminary proof-of-concept experimental results within a toy domain and describe how the architecture can perform category-based and property-based analogical reasoning.
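The role-filler mechanics behind HDC-style analogy can be illustrated with a minimal sketch. This uses classic bipolar hypervectors and Kanerva-style "what is the dollar of Mexico?" substitution, not the paper's complex-sampled architecture or its CST-classified analogy types; the toy records and all names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; random vectors this long are nearly orthogonal

def hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

# Roles and fillers for two toy "country" records
CAPITAL, CURRENCY = hv(), hv()
STOCKHOLM, KRONA = hv(), hv()
MEXICO_CITY, PESO = hv(), hv()

# A record bundles role-filler bindings (binding = elementwise product, self-inverse)
sweden = CAPITAL * STOCKHOLM + CURRENCY * KRONA
mexico = CAPITAL * MEXICO_CITY + CURRENCY * PESO

# Analogy "what is the krona of Mexico?": unbind KRONA from the Sweden record
# (recovering its role, CURRENCY, plus noise), then probe the Mexico record with it
query = mexico * (sweden * KRONA)

# Clean up by cosine similarity against the known fillers
candidates = {"STOCKHOLM": STOCKHOLM, "KRONA": KRONA,
              "MEXICO_CITY": MEXICO_CITY, "PESO": PESO}
best = max(candidates, key=lambda k: np.dot(query, candidates[k]) / D)
print(best)  # PESO
```

The noise terms introduced by bundling average out at this dimensionality, so the similarity search reliably recovers the analogous filler.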


On convex decision regions in deep network representations

Tětková, Lenka, Brüsch, Thea, Scheidt, Teresa Karen, Mager, Fabian Martin, Aagaard, Rasmus Ørtoft, Foldager, Jonathan, Alstrøm, Tommy Sonne, Hansen, Lars Kai

arXiv.org Artificial Intelligence

Current work on human-machine alignment aims at understanding machine-learned latent spaces and their correspondence to human representations. Gärdenfors' conceptual spaces is a prominent framework for understanding human representations. Convexity of object regions in conceptual spaces is argued to promote generalizability, few-shot learning, and interpersonal alignment. Based on these insights, we investigate the notion of convexity of concept regions in machine-learned latent spaces. We develop a set of tools for measuring convexity in sampled data and evaluate emergent convexity in layered representations of state-of-the-art deep networks. We show that convexity is robust to basic re-parametrization and, hence, meaningful as a quality of machine-learned latent spaces. We find that approximate convexity is pervasive in neural representations in multiple application domains, including models of images, audio, human activity, text, and medical images. Generally, we observe that fine-tuning increases the convexity of label regions. We find evidence that pretraining convexity of class label regions predicts subsequent fine-tuning performance.
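One simple proxy for measuring convexity in sampled data can be sketched under the assumption of a Euclidean latent space (the authors' actual measure on sampled representations differs): score a class as convex to the degree that midpoints of same-class point pairs are still assigned to that class by a nearest-neighbour rule. The two-blob data below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "latent space": two well-separated Gaussian class regions in 2-D
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
X1 = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def nn_label(p):
    """Label of the nearest sampled point (1-NN decision)."""
    return y[np.argmin(np.linalg.norm(X - p, axis=1))]

def convexity_score(label, n_pairs=500):
    """Fraction of same-class segment midpoints that 1-NN assigns back to that class."""
    idx = np.flatnonzero(y == label)
    hits = 0
    for _ in range(n_pairs):
        i, j = rng.choice(idx, size=2, replace=False)
        hits += nn_label((X[i] + X[j]) / 2) == label
    return hits / n_pairs

s0, s1 = convexity_score(0), convexity_score(1)
print(s0, s1)  # both close to 1.0 for (approximately) convex class regions
```

A non-convex region (e.g. a crescent wrapped around the other class) would push the score toward 0, since many midpoints would fall in foreign territory.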


Oveisi

AAAI Conferences

A strong intuition for AGM belief change operations, Gärdenfors suggests, is that formulas that are independent of a change should remain intact. Based on this intuition, Fariñas and Herzig axiomatize a dependence relation w.r.t. a belief set, and formalize the connection between dependence and belief change. In this paper, we introduce base dependence as a relation between formulas w.r.t. a belief base. After an axiomatization of base dependence, we formalize the connection between base dependence and a particular belief base change operation, saturated kernel contraction. Moreover, we prove that base dependence is a reversible generalization of Fariñas and Herzig's dependence. That is, in the special case when the underlying belief base is deductively closed (i.e., it is a belief set), base dependence reduces to dependence. Finally, an intriguing feature of Fariñas and Herzig's formalism is that it meets other criteria for dependence, namely, Keynes' conjunction criterion for dependence (CCD) and Gärdenfors' conjunction criterion for independence (CCI). We show that our base dependence formalism also meets these criteria. More interestingly, we offer a more specific criterion that implies both CCD and CCI, and show our base dependence formalism also meets this new criterion.


A Categorical Semantics of Fuzzy Concepts in Conceptual Spaces

Tull, Sean

arXiv.org Artificial Intelligence

We define a symmetric monoidal category modelling fuzzy concepts and fuzzy conceptual reasoning within Gärdenfors' framework of conceptual (convex) spaces. We propose log-concave functions as models of fuzzy concepts, showing that these are the most general choice satisfying a criterion due to Gärdenfors while remaining well-behaved compositionally. We then generalise these to define the category of log-concave probabilistic channels between convex spaces, which allows one to model fuzzy reasoning with noisy inputs, and provides a novel example of a Markov category.
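The central modelling choice can be illustrated numerically with an invented "warm"/"mild" temperature example: Gaussian membership functions are log-concave (so every superlevel set is convex, matching Gärdenfors' convexity criterion for crisp cuts), and their pointwise product, modelling conjunction, stays log-concave, which can be checked discretely on a grid.

```python
import numpy as np

# A fuzzy concept on a 1-D quality dimension as a Gaussian membership function
def gaussian(mu, sigma):
    return lambda x: np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

warm = gaussian(mu=25.0, sigma=5.0)   # hypothetical "warm" on a temperature axis
mild = gaussian(mu=18.0, sigma=4.0)   # hypothetical "mild"

# Pointwise product models conjunction; products of log-concave functions are log-concave
both = lambda x: warm(x) * mild(x)

# Discrete midpoint log-concavity check: f(x)^2 >= f(x-h) * f(x+h) on a grid
x = np.linspace(0.0, 40.0, 401)
f = both(x)
ok = np.all(f[1:-1] ** 2 >= f[:-2] * f[2:] - 1e-12)
print(ok)  # True
```

The same check would fail for, say, a bimodal mixture of the two Gaussians, which is why bundling concepts by addition rather than product leaves the log-concave class.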


Formalized Conceptual Spaces with a Geometric Representation of Correlations

Bechberger, Lucas, Kühnberger, Kai-Uwe

arXiv.org Artificial Intelligence

The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points in a similarity space and concepts are represented by convex regions in this space. After pointing out a problem with the convexity requirement, we propose a formalization of conceptual spaces based on fuzzy star-shaped sets. Our formalization uses a parametric definition of concepts and extends the original framework by adding means to represent correlations between different domains in a geometric way. Moreover, we define various operations for our formalization, both for creating new concepts from old ones and for measuring relations between concepts. We present an illustrative toy example and sketch a research project on concept formation that is based on both our formalization and its implementation.
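The motivation for star-shaped sets can be illustrated with a toy L-shaped region (crisp, not the paper's fuzzy formalization): it is non-convex, yet star-shaped with respect to points in its corner cell, so a single reference point can still "see" the whole concept region. The region and test points below are invented.

```python
import numpy as np

# A toy non-convex but star-shaped region: an L-shape built from two axis-aligned
# boxes. Such shapes can encode correlations between domains (e.g. one dimension
# only takes large values when the other is small) that one convex region cannot.
def in_region(p):
    x, y = p
    return (0 <= x <= 3 and 0 <= y <= 1) or (0 <= x <= 1 and 0 <= y <= 3)

def star_shaped_wrt(center, n_targets=200, n_steps=50):
    """Sample segments from `center` to random region points; star-shapedness
    w.r.t. the center means every such segment stays inside the region."""
    rng = np.random.default_rng(2)
    for _ in range(n_targets):
        while True:  # rejection-sample a target point inside the region
            q = rng.uniform([0.0, 0.0], [3.0, 3.0])
            if in_region(q):
                break
        for t in np.linspace(0.0, 1.0, n_steps):
            if not in_region(center + t * (q - center)):
                return False
    return True

r_corner = star_shaped_wrt(np.array([0.5, 0.5]))  # corner cell sees the whole L
r_arm = star_shaped_wrt(np.array([2.5, 0.5]))     # a far arm point does not
print(r_corner, r_arm)
```

The set of all valid centers (here, the corner cell) is the kernel of the star-shaped set; convex sets are the special case where the kernel is the whole region.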


New Evidence for the Geometry of Thought - Facts So Romantic

Nautilus

In 2014, the Swedish philosopher and cognitive scientist Peter Gärdenfors went to Krakow, Poland, for a conference on the mind. He was to lecture at Jagiellonian University, courtesy of the Copernicus Center for Interdisciplinary Studies, on his theory of conceptual, or "cognitive," spaces. Gärdenfors had been working on his idea of cognitive spaces, which explain how our brains represent concepts and objects, for decades. In his book Conceptual Spaces, from 2000, he wrote, "It has long been a common prejudice in cognitive science that the brain is either a Turing machine working with symbols or a connectionist system using neural networks." In Krakow, Gärdenfors pushed against that prejudice. In his talk, "The Geometry of Thinking," he suggested that humans are able to do things that today's powerful computers can't, like learning language quickly and generalizing from particulars with ease (seeing, without much training, that lions and tigers are four-legged felines), because we, unlike our computers, represent information in geometrical space.


Data-driven Conceptual Spaces: Creating Semantic Representations For Linguistic Descriptions Of Numerical Data

Banaee, Hadi, Schaffernicht, Erik, Loutfi, Amy

Journal of Artificial Intelligence Research

There is an increasing need to derive semantics from real-world observations to facilitate natural information sharing between machine and human. Conceptual spaces theory is a possible approach and has been proposed as a mid-level representation between symbolic and sub-symbolic representations, whereby concepts are represented in a geometrical space that is characterised by a number of quality dimensions. Currently, much of the work has demonstrated how conceptual spaces are created in a knowledge-driven manner, relying on prior knowledge to form concepts and identify quality dimensions. This paper presents a method to create semantic representations using data-driven conceptual spaces, which are then used to derive linguistic descriptions of numerical data. Our contribution is a principled approach to automatically construct a conceptual space from a set of known observations wherein the quality dimensions and domains are not known a priori. The novelty of the approach is the ability to select and group semantic features to discriminate between concepts in a data-driven manner while preserving the semantic interpretation that is needed to infer linguistic descriptions for interaction with humans. Two data sets representing leaf images and time series signals are used to evaluate the method. An empirical evaluation for each case study assesses how well linguistic descriptions generated from the conceptual spaces identify unknown observations. Furthermore, comparisons are made with descriptions derived from alternative approaches for generating semantic models.
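A rough sketch of such a pipeline, under strong simplifying assumptions: concepts as 2-means prototypes, linguistic terms from data terciles. The dimension names, data, and thresholds are invented, and the paper's actual feature selection and domain grouping are not modelled.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy observations on two hypothetical quality dimensions: (width, height)
small = rng.normal([2.0, 3.0], 0.3, size=(30, 2))
large = rng.normal([6.0, 9.0], 0.5, size=(30, 2))
X = np.vstack([small, large])

# Data-driven "concepts" as prototype points: plain 2-means with numpy,
# deterministically seeded from the first and last observations
centroids = np.array([X[0], X[-1]])
for _ in range(20):
    labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
    centroids = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])

def describe(p, dims=("width", "height")):
    """Linguistic description: nearest prototype + per-dimension term from terciles."""
    k = np.argmin(np.linalg.norm(centroids - p, axis=1))
    terms = []
    for d, v in enumerate(p):
        lo, hi = np.quantile(X[:, d], [1 / 3, 2 / 3])
        terms.append(("low" if v < lo else "high" if v > hi else "medium") + " " + dims[d])
    return f"concept {k}: " + ", ".join(terms)

out = describe(np.array([6.2, 9.2]))
print(out)  # e.g. "concept 1: high width, high height"
```

A real system would learn which features form a domain and attach meaningful labels; here both are fixed by hand to keep the sketch short.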


Formal Ways for Measuring Relations between Concepts in Conceptual Spaces

Bechberger, Lucas, Kühnberger, Kai-Uwe

arXiv.org Artificial Intelligence

The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points in a high-dimensional space and concepts are represented by regions in this space. In this article, we extend our recent mathematical formalization of this framework by providing quantitative mathematical definitions for measuring relations between concepts: We develop formal ways for computing concept size, subsethood, implication, similarity, and betweenness. This considerably increases the representational capabilities of our formalization and makes it the most thorough and comprehensive formalization of conceptual spaces developed so far.
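The flavour of such quantitative measures can be sketched with crisp axis-aligned boxes standing in for concept regions (the authors' formalization uses fuzzy star-shaped sets, so this is only an illustration; the "apple"/"fruit" regions are invented). Subsethood then becomes a volume ratio.

```python
import numpy as np

# Toy concepts as axis-aligned boxes: (lower corner, upper corner) per dimension
apple = (np.array([4.0, 5.0]), np.array([8.0, 9.0]))    # hypothetical size/sweetness region
fruit = (np.array([2.0, 3.0]), np.array([10.0, 10.0]))

def volume(box):
    lo, hi = box
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def subsethood(a, b):
    """Degree to which concept a is contained in concept b: |a ∩ b| / |a|."""
    lo = np.maximum(a[0], b[0])
    hi = np.minimum(a[1], b[1])
    return volume((lo, hi)) / volume(a)

print(subsethood(apple, fruit))  # 1.0: the apple region lies entirely inside fruit
print(subsethood(fruit, apple))  # ≈ 0.286: fruit only partly overlaps apple
```

Note the asymmetry: graded subsethood behaves like an implication strength ("apples are fruit" holds fully; the converse only partially), which is the kind of relation the formalization quantifies.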


Kernel Contraction and Base Dependence

Oveisi, Mehrdad, Delgrande, James P., Pelletier, Francis Jeffry, Popowich, Fred

Journal of Artificial Intelligence Research

The AGM paradigm of belief change studies the dynamics of belief states in light of new information. Finding, or even approximating, those beliefs that are dependent on or relevant to a change is valuable because, for example, it can narrow the set of beliefs considered during belief change operations. A strong intuition in this area is captured by Gärdenfors's preservation criterion (GPC), which suggests that formulas independent of a belief change should remain intact. GPC thus allows one to build dependence relations that are linked with belief change. Such dependence relations can in turn be used as a theoretical benchmark against which to evaluate other approximate dependence or relevance relations. Fariñas and Herzig axiomatize a dependence relation with respect to a belief set, and, based on GPC, they characterize the correspondence between AGM contraction functions and dependence relations. In this paper, we introduce base dependence as a relation between formulas with respect to a belief base, and prove a more general characterization that shows the correspondence between kernel contraction and base dependence. At this level of generalization, different types of base dependence emerge, which we show to be a result of possible redundancy in the belief base. We further show that one of these relations, strong base dependence, is parallel to saturated kernel contraction. We then prove that our latter characterization is a reversible generalization of Fariñas and Herzig's characterization. That is, in the special case when the underlying belief base is deductively closed (i.e., it is a belief set), strong base dependence reduces to dependence, and so do their respective characterizations. Finally, an intriguing feature of Fariñas and Herzig's formalism is that it meets other criteria for dependence, namely, Keynes's conjunction criterion for dependence (CCD) and Gärdenfors's conjunction criterion for independence (CCI). 
We prove that our base dependence formalism also meets these criteria. Even more interestingly, we offer a more specific criterion that implies both CCD and CCI, and show our base dependence formalism also meets this new criterion.
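The kernel machinery underlying this correspondence can be sketched by brute force over a tiny propositional base (illustration only; saturated kernel contraction imposes further constraints on the incision function that are not modelled here). Formulas are represented as predicates over valuations, and the base and incision choice are invented for the example.

```python
from itertools import product, combinations

ATOMS = ("p", "q")

def entails(base, phi):
    """Brute-force propositional entailment over all valuations of ATOMS."""
    for vals in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(f(v) for f in base) and not phi(v):
            return False
    return True

def kernels(base, phi):
    """All minimal subsets of `base` that entail phi (the phi-kernels)."""
    ks = []
    for r in range(len(base) + 1):
        for sub in combinations(base, r):
            if entails(sub, phi) and not any(set(k) <= set(sub) for k in ks):
                ks.append(sub)
    return ks

# Toy base: {p, p→q, q}
p = lambda v: v["p"]
p_implies_q = lambda v: (not v["p"]) or v["q"]
q = lambda v: v["q"]
base = [p, p_implies_q, q]

# The minimal subsets entailing q are {q} and {p, p→q}
ks = kernels(base, q)
print(len(ks))  # 2

# A kernel contraction removes, via an incision function, at least one element
# from every kernel; cutting q and p leaves a base that no longer entails q
contracted = [f for f in base if f not in (q, p)]
print(entails(contracted, q))  # False
```

Intuitively, base dependence tracks which formulas can survive such a contraction untouched, which is why redundancy in the base (multiple kernels sharing elements) gives rise to the different dependence relations the paper distinguishes.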


The Logic of Qualitative Probability

Delgrande, James (Simon Fraser University) | Renne, Bryan (University of Amsterdam)

AAAI Conferences

In this paper we present a theory of qualitative probability. Work in the area goes back at least to de Finetti. The usual approach is to specify a binary operator ≼ with φ ≼ ψ having the intended interpretation that φ is not more probable than ψ. We generalise these approaches by extending the domain of the operator ≼ from the set of events to the set of finite sequences of events. If Φ and Ψ are finite sequences of events, Φ ≼ Ψ has the intended interpretation that the sum of the probabilities of the elements of Φ is not greater than the corresponding sum for Ψ. We provide a sound and complete axiomatisation for this operator over finite outcome sets, and show that this theory is sufficiently powerful to capture the results of axiomatic probability theory. We argue that our approach is simpler and more perspicuous than previous accounts. We also prove that our approach generalises the two major accounts for finite outcome sets.
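The generalised operator is easy to state concretely. A minimal sketch with an invented three-outcome distribution, comparing summed probabilities of event sequences:

```python
from fractions import Fraction

# A finite outcome set with an explicit (hypothetical) probability assignment
P = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

def prob(event):
    """Probability of an event, given as a set of outcomes."""
    return sum(P[o] for o in event)

def leq(phi, psi):
    """Φ ≼ Ψ for finite *sequences* of events: compare summed probabilities."""
    return sum(prob(e) for e in phi) <= sum(prob(e) for e in psi)

# Singleton sequences recover the classical comparative operator:
print(leq([{"c"}], [{"b"}]))                        # True: 1/6 <= 1/3
# Sequences may repeat overlapping events, so the sums can exceed 1:
print(leq([{"a"}, {"a", "b"}], [{"a", "b", "c"}]))  # False: 1/2 + 5/6 > 1
```

The second query shows why the move from events to sequences adds power: the left-hand "sum" is not the probability of any single event, yet ≼ still compares it meaningfully.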