Goto


Abstraction and Relational Learning

Neural Information Processing Systems

Most models of categorization learn categories defined by characteristic features, but some categories are described more naturally in terms of relations. We present a generative model that helps to explain how relational categories are learned and used. Our model learns abstract schemata that specify the relational similarities shared by instances of a category, and our emphasis on abstraction departs from previous theoretical proposals that focus instead on comparison of concrete instances. Our first experiment suggests that abstraction can help to explain some of the findings that have previously been used to support comparison-based approaches. Our second experiment focuses on one-shot schema learning, a problem that raises challenges for comparison-based approaches but is handled naturally by our abstraction-based account.
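
To make the contrast with feature-based categories concrete, here is a toy sketch (not the paper's generative model; the predicates, slot encoding and example data are all invented) in which a relational schema is a set of constraints over object slots, and an instance belongs to the category if some binding of its objects satisfies every constraint:

```python
from itertools import permutations

# Hypothetical illustration: a relational "schema" as a list of constraints
# over role slots, e.g. a category defined by "slot 0 is larger than slot 1".
# This is a toy sketch, not the generative model described in the abstract.

def matches_schema(objects, constraints):
    """Return True if some assignment of objects to slots satisfies all constraints."""
    n_slots = max(max(slots) for slots, _ in constraints) + 1
    for binding in permutations(objects, n_slots):
        if all(pred(*(binding[i] for i in slots)) for slots, pred in constraints):
            return True
    return False

# Schema for a relational category: "X is bigger and darker than Y".
schema = [
    ((0, 1), lambda x, y: x["size"] > y["size"]),
    ((0, 1), lambda x, y: x["shade"] > y["shade"]),
]

instance = [{"size": 5, "shade": 0.9}, {"size": 2, "shade": 0.3}]
print(matches_schema(instance, schema))  # True
```

The point of the sketch is only that membership depends on how the objects relate to one another, not on any single object's features in isolation.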


Data-driven Conceptual Spaces: Creating Semantic Representations For Linguistic Descriptions Of Numerical Data

Journal of Artificial Intelligence Research

There is an increasing need to derive semantics from real-world observations to facilitate natural information sharing between machines and humans. Conceptual spaces theory is a possible approach and has been proposed as a mid-level representation between symbolic and sub-symbolic representations, whereby concepts are represented in a geometrical space that is characterised by a number of quality dimensions. Currently, much of the work has demonstrated how conceptual spaces are created in a knowledge-driven manner, relying on prior knowledge to form concepts and identify quality dimensions. This paper presents a method to create semantic representations using data-driven conceptual spaces which are then used to derive linguistic descriptions of numerical data. Our contribution is a principled approach to automatically construct a conceptual space from a set of known observations wherein the quality dimensions and domains are not known a priori. The novelty of the approach is its ability to select and group semantic features to discriminate between concepts in a data-driven manner while preserving the semantic interpretation that is needed to infer linguistic descriptions for interaction with humans. Two data sets representing leaf images and time series signals are used to evaluate the method. An empirical evaluation for each case study assesses how well linguistic descriptions generated from the conceptual spaces identify unknown observations. Furthermore, comparisons are made with descriptions derived from alternative approaches for generating semantic models.
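
As a rough, purely illustrative sketch of the data-driven idea (this is not the paper's construction; the clustering step, the dimension names and the description template are all assumptions), concepts can be induced by clustering observations, and a linguistic description of a new observation can then be read off from per-dimension comparisons against the data:

```python
# Hypothetical sketch of a data-driven conceptual space: concepts are clusters
# in feature space, and linguistic descriptions come from comparing a concept's
# prototype to per-dimension statistics of the data. Not the paper's method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dimensions = ["length", "width"]                          # assumed quality dimensions
observations = rng.normal(loc=[[4, 1]] * 50 + [[10, 3]] * 50, scale=0.5)

concepts = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)

def describe(x):
    """Return a crude linguistic description of observation x."""
    label = concepts.predict(x.reshape(1, -1))[0]
    prototype = concepts.cluster_centers_[label]
    terms = []
    for d, name in enumerate(dimensions):
        overall = observations[:, d].mean()
        terms.append(f"{'large' if prototype[d] > overall else 'small'} {name}")
    return f"concept {label}: " + ", ".join(terms)

print(describe(np.array([9.8, 3.1])))   # e.g. "concept 1: large length, large width"
```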


Is it a Fruit, an Apple or a Granny Smith? Predicting the Basic Level in a Concept Hierarchy

arXiv.org Artificial Intelligence

The "basic level", according to experiments in cognitive psychology, is the level of abstraction in a hierarchy of concepts at which humans perform tasks more quickly and with greater accuracy than at other levels. We argue that applications that use concept hierarchies - such as knowledge graphs, ontologies or taxonomies - could significantly improve their user interfaces if they 'knew' which concepts are the basic level concepts. This paper examines to what extent the basic level can be learned from data. We test the utility of three types of concept features inspired by basic level theory: lexical features, structural features and frequency features. We evaluate our approach on WordNet, and create a training set of manually labelled examples that includes concepts from different domains. Our findings include that basic level concepts can be accurately identified within a single domain. Concepts that are difficult to label for humans are also harder to classify automatically. Our experiments provide insight into how classification performance across domains could be improved, which is necessary for identifying basic level concepts on a larger scale. One of the ongoing challenges in Artificial Intelligence is to explicitly describe the world in ways that machines can process. This has resulted in taxonomies, thesauri, ontologies and, more recently, knowledge graphs. While these various knowledge organization systems (KOSs) may use different formal languages, they all share similar underlying data representations.
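
The three feature types named above can be illustrated with a small, hypothetical sketch using NLTK's WordNet interface and a generic classifier; the paper's actual feature set, labels and evaluation differ, and the training examples below are invented:

```python
# Illustrative sketch (not the paper's exact features or data): compute simple
# lexical, structural, and frequency features for WordNet synsets and train a
# classifier on a few hand-labelled "basic level" examples.
from nltk.corpus import wordnet as wn          # requires: nltk.download('wordnet')
from sklearn.ensemble import RandomForestClassifier

def features(synset):
    lemma = synset.lemmas()[0]
    return [
        len(lemma.name()),                     # lexical: word length
        len(lemma.name().split("_")),          # lexical: multiword or not
        synset.min_depth(),                    # structural: depth in the hierarchy
        len(synset.hyponyms()),                # structural: number of children
        lemma.count(),                         # frequency: tagged-corpus count
    ]

# Hypothetical hand-labelled training set: 1 = basic level, 0 = not.
labelled = [("fruit.n.01", 0), ("apple.n.01", 1), ("granny_smith.n.01", 0),
            ("furniture.n.01", 0), ("chair.n.01", 1)]

X = [features(wn.synset(name)) for name, _ in labelled]
y = [label for _, label in labelled]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([features(wn.synset("dog.n.01"))]))
```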


An Ontology-based Approach to Explaining Artificial Neural Networks

arXiv.org Artificial Intelligence

Explainability in Artificial Intelligence has been revived as a topic of active research by the need to convey safety and trust to users in the 'how' and 'why' of automated decision-making. Whilst a plethora of approaches have been developed for post-hoc explainability, only a few focus on how to use domain knowledge, and how this influences the understandability of an explanation from the users' perspective. In this paper we show how ontologies help the understandability of interpretable machine learning models, such as decision trees. In particular, we build on Trepan, an algorithm that explains artificial neural networks by means of decision trees, and we extend it to include ontologies modeling domain knowledge in the process of generating explanations. We present the results of a user study that measures the understandability of decision trees in domains where explanations are critical, namely, finance and medicine. Our study shows that decision trees taking into account domain knowledge during generation are more understandable than those generated without the use of ontologies.
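
One simple, hypothetical way to picture how an ontology might influence tree construction (this is not the paper's Trepan extension, and the depth penalty below is an assumption) is to bias split selection toward features whose concepts are more general in a domain ontology:

```python
# Hypothetical sketch: bias split selection toward features whose concepts sit
# higher (more general) in a domain ontology. This only illustrates the idea in
# the abstract; the penalty term and toy data are made up.
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, feature):
    base = entropy(labels)
    splits = {}
    for row, label in zip(rows, labels):
        splits.setdefault(row[feature], []).append(label)
    weighted = sum(len(part) / len(labels) * entropy(part) for part in splits.values())
    return base - weighted

def choose_split(rows, labels, ontology_depth, alpha=0.05):
    """Pick the feature maximising gain minus a penalty for ontological depth."""
    return max(rows[0], key=lambda f: information_gain(rows, labels, f)
                                      - alpha * ontology_depth.get(f, 0))

# Toy data; depth 0 would be the most general concept in the ontology.
rows = [{"has_income": "yes", "owns_savings_account": "no"},
        {"has_income": "no",  "owns_savings_account": "yes"},
        {"has_income": "yes", "owns_savings_account": "yes"},
        {"has_income": "no",  "owns_savings_account": "no"}]
labels = ["approve", "reject", "approve", "reject"]
ontology_depth = {"has_income": 1, "owns_savings_account": 3}

print(choose_split(rows, labels, ontology_depth))  # prefers the more general feature
```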


Inductive Policy Selection for First-Order MDPs

arXiv.org Artificial Intelligence

We select policies for large Markov Decision Processes (MDPs) with compact first-order representations. We find policies that generalize well as the number of objects in the domain grows, potentially without bound. Existing dynamic-programming approaches based on flat, propositional, or first-order representations are either impractical here or do not naturally scale as the number of objects grows without bound. We implement and evaluate an alternative approach that induces first-order policies from training data constructed by solving small problem instances with PGraphplan (Blum & Langford, 1999). Our policies are represented as ensembles of decision lists, using a taxonomic concept language. This approach extends the work of Martin and Geffner (2000) to stochastic domains, ensemble learning, and a wider variety of problems. Empirically, we find "good" policies for several stochastic first-order MDPs that are beyond the scope of previous approaches. We also discuss the application of this work to the relational reinforcement-learning problem.
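
The decision-list representation can be pictured with a small, hypothetical sketch: rules are tried in order against a relational state, and the first rule whose condition binds to some object yields the action. The predicates and rule bodies below are invented and are not the taxonomic concept language used in the paper:

```python
# Hypothetical sketch of a decision-list policy over a relational state:
# rules are tried in order, and the first rule whose condition holds for some
# object produces the chosen action. An illustration of the representation
# only, not the learned policies from the paper.

def first_match(state, rules):
    """Return the action of the first rule whose condition holds for some object."""
    for condition, action in rules:
        for obj in state["objects"]:
            if condition(state, obj):
                return action(obj)
    return "noop"

# Toy relational state: blocks with on/clear relations and a single goal.
state = {
    "objects": ["a", "b", "c"],
    "on": {("a", "b"), ("b", "table"), ("c", "table")},
    "clear": {"a", "c"},
    "goal_on": {("b", "c")},
}

# Ordered decision list: clear off anything sitting on a block that still has a
# goal destination, then move a clear goal block, otherwise do nothing.
rules = [
    (lambda s, x: x in s["clear"] and any((x, y) in s["on"] and
                  y in {g[0] for g in s["goal_on"]} for y in s["objects"]),
     lambda x: f"putdown({x})"),
    (lambda s, x: x in s["clear"] and any(g[0] == x for g in s["goal_on"]),
     lambda x: f"move({x}, goal)"),
]

print(first_match(state, rules))  # "putdown(a)": clear block a off b first
```

Because the rules quantify over objects rather than naming them, the same list applies unchanged to states with more blocks, which is the sense in which such policies generalize as the domain grows.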