The learning protocol follows Angluin's exact learning model, in which an ontology engineer attempts to identify an ontology by posing queries to a domain expert. We implement the learning process as a question-answer game between two components of our system, the learner and the teacher. We evaluate ExactLearner's performance on EL ontologies from the Oxford ontology repository and demonstrate that, despite the algorithm being exponential, it successfully terminates on small- and medium-sized ontologies. We investigate the impact of various learner and teacher features and identify those most useful for learning.
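The learner-teacher interaction described above can be pictured as a loop of membership and equivalence queries, as in Angluin's model. The sketch below is an illustrative simplification, not the ExactLearner implementation: the target "ontology" is just a finite set of axiom strings, and counterexamples are added verbatim rather than generalized.

```python
# Toy sketch of Angluin-style exact learning. The target "ontology" is a
# finite set of axiom strings -- a deliberate simplification of EL ontologies.
TARGET = {"A subsumed-by B", "B subsumed-by C", "A subsumed-by exists r.C"}

def membership_query(axiom):
    """Teacher: does the target ontology entail this axiom?"""
    return axiom in TARGET

def equivalence_query(hypothesis):
    """Teacher: is the hypothesis equivalent to the target?
    If not, return a counterexample the hypothesis is missing."""
    missing = TARGET - hypothesis
    return (True, None) if not missing else (False, next(iter(missing)))

def exact_learn():
    """Learner: refine a hypothesis until the teacher accepts it."""
    hypothesis = set()
    while True:
        correct, counterexample = equivalence_query(hypothesis)
        if correct:
            return hypothesis
        if membership_query(counterexample):  # confirm before adding
            hypothesis.add(counterexample)

print(exact_learn() == TARGET)  # -> True
```

The loop terminates after at most |TARGET| equivalence queries, since each counterexample strictly grows the hypothesis; the exponential behaviour mentioned in the abstract arises only in the full ontology setting, where counterexamples must be decomposed and generalized.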
Koutras, Costas D. (American University of the Middle East) | Liaskos, Konstantinos (American University of the Middle East) | Moyzes, Christos (University of Liverpool) | Rantsoudis, Christos (Institut de Recherche en Informatique de Toulouse)
A default consequence relation α|~β (if α, then normally β) can be naturally interpreted via a `most' generalized quantifier: α|~β is valid iff in `most' α-worlds, β is also true. We define various semantic incarnations of this principle which attempt to make the set of (α ∧ β)-worlds `large' and the set of (α ∧ ¬ β)-worlds `small'. The straightforward implementation of this idea on finite sets is via `clear majority'. We proceed to examine different `majority' interpretations of normality, defined upon notions of classical mathematics which formalize aspects of `size'. We define default consequence using the notion of asymptotic density from analytic number theory. Asymptotic density provides a way to measure the size of integer sequences that is much more fine-grained and accurate than set cardinality. Further, in a topological setting, we identify `large' sets with dense sets and `negligibly small' sets with nowhere dense sets. Finally, we define default consequence via the concept of measure, classically developed in mathematical analysis for capturing `size' through a generalization of the notions of length, area and volume. The logics defined via asymptotic density and measure are weaker than the KLM system P, the so-called `conservative core' of nonmonotonic reasoning, and they resemble probabilistic consequence. Topology goes a longer way towards system P, but misses Cautious Monotony (CM) and AND. Our results show that a `size'-oriented interpretation of default reasoning is context-sensitive and in `most' cases departs from the preferential approach.
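The notion of asymptotic (natural) density invoked above can be approximated by finite counting. The following generic snippet (an illustration of the mathematical notion only, not of the paper's formal machinery) shows how density separates two sets that cardinality cannot tell apart:

```python
import math

def density_upto(pred, n):
    """Finite approximation of the natural density of {k <= n : pred(k)};
    the asymptotic density is the limit of this ratio as n grows,
    when that limit exists."""
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

# Both sets below are countably infinite, so cardinality cannot distinguish
# them -- but asymptotic density can: the evens are 'half' of the integers,
# while the perfect squares are 'negligibly small'.
print(density_upto(lambda k: k % 2 == 0, 10_000))               # -> 0.5
print(density_upto(lambda k: math.isqrt(k) ** 2 == k, 10_000))  # -> 0.01
```

In the density-based semantics, a set of this second kind is a natural candidate for the `small' set of (α ∧ ¬ β)-worlds.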
The uncertainty induced by unknown attacker locations is one of the obstacles to deploying AI methods in security domains. We study a model with partial observability of the attacker's location and propose a novel reinforcement learning method that uses partial information about attacker behaviour coming from the system. The method derives beliefs about the underlying states using Bayesian inference; these beliefs are then used in the QMDP algorithm. We design the algorithm in particular for spatial security games, where the defender faces intelligent and adversarial opponents.
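The belief-then-QMDP pipeline can be sketched in a few lines. The Q-table, the three candidate attacker locations, and the sensor likelihoods below are all invented for illustration; the paper's Q-values would come from its learning procedure.

```python
# Q[s][a]: hypothetical defender Q-values -- value of defender action a
# when the attacker is at location s (numbers made up for illustration).
Q = [[1.0, 0.2, 0.1],
     [0.3, 0.9, 0.2],
     [0.1, 0.4, 0.8]]

def belief_update(belief, likelihood):
    """Bayesian inference: posterior proportional to likelihood * prior."""
    post = [b * l for b, l in zip(belief, likelihood)]
    z = sum(post)
    return [p / z for p in post]

def qmdp_action(belief):
    """QMDP: pick the action maximizing the belief-weighted Q-value."""
    n_actions = len(Q[0])
    values = [sum(belief[s] * Q[s][a] for s in range(len(Q)))
              for a in range(n_actions)]
    return values.index(max(values))

# Uniform prior, then a noisy sensor reading pointing at location 0.
belief = belief_update([1 / 3, 1 / 3, 1 / 3], [0.7, 0.2, 0.1])
print(qmdp_action(belief))  # -> 0: defend the most likely location
```

QMDP's characteristic simplification, visible above, is that it averages fully observable Q-values over the belief rather than planning over the belief space itself.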
Although it might be too early for a standard model of the mind (SMM), comparison between current cognitive architectures is a useful exercise. This article highlights some of the likely difficulties facing the development of an SMM, both empirical and theoretical. In particular, it follows Newell (1990) in arguing that a viable model of the mind must be constructed by taking advantage of experimental constraints, based on comparisons of the model with (human or animal) data. We then describe our proposed methodology for ensuring a tight link between psychological data and a cognitive architecture. We also discuss CHREST, a cognitive model with a particular emphasis on modelling psychological results. CHREST has been applied in several domains, such as language acquisition and expertise. The article concludes by highlighting some of the features that distinguish CHREST from architectures such as Soar and ACT-R. Some of these differences are significant, creating challenges for an SMM.
Atkinson, Katie (University of Liverpool) | Baroni, Pietro (Università degli Studi di Brescia) | Giacomin, Massimiliano (Università degli Studi di Brescia) | Hunter, Anthony (University College London) | Prakken, Henry (Utrecht University) | Reed, Chris (University of Dundee) | Simari, Guillermo (Universidad Nacional del Sur) | Thimm, Matthias (Universität Koblenz-Landau) | Villata, Serena (Université Côte d'Azur)
Motivated by the fact that in several cases a matching in a graph is stable if and only if it is produced by a greedy algorithm, we study the problem of computing a maximum weight greedy matching on weighted graphs, termed GREEDYMATCHING. In stark contrast to the maximum weight matching problem, for which many efficient algorithms are known, we prove that GREEDYMATCHING is strongly NP-hard and APX-complete, and thus it does not admit a PTAS unless P=NP, even on graphs with maximum degree at most 3 and with at most three different integer edge weights. Furthermore, we prove that GREEDYMATCHING remains strongly NP-hard if the input graph is in addition bipartite. Moreover, we consider three natural parameters of the problem, for which we establish a sharp threshold behavior between NP-hardness and computational tractability. On the positive side, we present a randomized approximation algorithm (RGMA) for GREEDYMATCHING on a special class of weighted graphs, called bush graphs. We highlight an unexpected connection between RGMA and the approximation of maximum cardinality matching in unweighted graphs via randomized greedy algorithms. We show that, if the approximation ratio of RGMA is ρ, then for every ε > 0 the randomized MRG algorithm of (Aronson et al. 1995) gives a (ρ − ε)-approximation for the maximum cardinality matching. We conjecture that a tight bound for ρ is 2/3; we prove our conjecture true for four subclasses of bush graphs. Proving a tight bound for the approximation ratio of MRG on unweighted graphs (and thus also proving a tight value for ρ) is a long-standing open problem (Poloczek and Szegedy 2012). This unexpected relation of our RGMA algorithm with the MRG algorithm may provide new insights for solving this problem.
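For readers unfamiliar with the randomized greedy idea underlying MRG, the following sketch (a generic unweighted version written for illustration; the paper's RGMA operates on weighted bush graphs) shows why such algorithms can leave cardinality on the table:

```python
import random

def mrg_matching(edges, seed=None):
    """MRG-style randomized greedy matching: repeatedly pick a random
    unmatched vertex and match it to a random unmatched neighbour."""
    rng = random.Random(seed)
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    matched = {}
    vertices = list(adj)
    rng.shuffle(vertices)          # random vertex order
    for u in vertices:
        if u in matched:
            continue
        free = [w for w in adj[u] if w not in matched]
        if free:
            v = rng.choice(free)   # random free neighbour
            matched[u] = v
            matched[v] = u
    return {frozenset((u, v)) for u, v in matched.items()}

# On the path a-b-c-d, the greedy result has 2 edges unless the middle edge
# b-c is taken first, in which case only 1 edge fits.
m = mrg_matching([("a", "b"), ("b", "c"), ("c", "d")], seed=0)
print(len(m))
```

The open question cited above concerns exactly the worst-case expected ratio of this kind of procedure against the maximum matching.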
Cao, Zhiguang (Nanyang Technological University) | Guo, Hongliang (University of Electronic Science and Technology of China) | Zhang, Jie (Nanyang Technological University) | Oliehoek, Frans (University of Liverpool) | Fastenrath, Ulrich (BMW Group)
The stochastic shortest path problem is of crucial importance for the development of sustainable transportation systems. Existing methods based on the probability tail model seek the path that maximizes the probability of arriving at the destination before a deadline. However, they suffer from low accuracy and/or high computational cost. We design a novel Q-learning method in which the converged Q-values have a practical meaning as the actual probabilities of arriving on time, which improves accuracy. By further adopting dynamic neural networks to learn the value function, our method scales well to large road networks with arbitrary deadlines. Experimental results on real road networks demonstrate the significant advantages of our method over its counterparts.
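The probability interpretation of the Q-values can be illustrated on a two-edge route. The update rule below is an assumed tabular toy form (the route, travel times, and learning-rate schedule are invented; the paper additionally uses dynamic neural networks to scale the idea up): the terminal target is 1 on an on-time arrival and 0 otherwise, so the fixed point of the update is the on-time arrival probability.

```python
import random

# Toy route s -> m -> d; each edge takes 1 or 3 time units with equal
# probability (numbers invented for illustration).
EDGES = {"s": ("m", [1, 3]), "m": ("d", [1, 3])}

def learn_on_time_probability(deadline, episodes=40_000, seed=7):
    """Tabular update over states (node, remaining_time) whose fixed point
    is P(reach destination d within the deadline)."""
    rng = random.Random(seed)
    Q, N = {}, {}
    for _ in range(episodes):
        node, t = "s", deadline
        while node != "d" and t > 0:
            nxt, times = EDGES[node]
            t2 = t - rng.choice(times)            # sampled travel time
            if nxt == "d":
                target = 1.0 if t2 >= 0 else 0.0  # on time / late
            else:
                target = Q.get((nxt, t2), 0.0) if t2 > 0 else 0.0
            key = (node, t)
            Q.setdefault(key, 0.0)
            N[key] = N.get(key, 0) + 1
            Q[key] += (target - Q[key]) / N[key]  # running-mean learning rate
            node, t = nxt, t2
    return Q

Q = learn_on_time_probability(deadline=4)
# P(on time) = P(cost1 + cost2 <= 4) = 0.75; the learned value approximates it.
print(round(Q[("s", 4)], 2))
```

Because the targets are indicator variables, the converged value is directly interpretable as a probability, which is the accuracy advantage the abstract refers to.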
We study the problem of checking for the existence of constrained pure Nash equilibria in a subclass of polymatrix games defined on weighted directed graphs. The payoff of a player is defined as the sum of nonnegative rational weights on incoming edges from players who picked the same strategy, augmented by a fixed integer bonus for picking a given strategy. These games capture the idea of coordination within a local neighbourhood in the absence of globally common strategies. We study the decision problem of checking whether a given set of strategy choices for a subset of the players is consistent with some pure Nash equilibrium or, alternatively, with all pure Nash equilibria. We identify the most natural tractable cases and show NP- or coNP-completeness of these problems already for unweighted DAGs.
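The payoff structure of these games is simple enough to state in code. The brute-force equilibrium enumeration below is purely illustrative (it is exponential in the number of players and is not the paper's decision procedure); the two-player instance at the bottom is invented.

```python
from itertools import product

def payoff(player, profile, in_edges, bonus):
    """Payoff = bonus for the chosen strategy + sum of weights on incoming
    edges whose source picked the same strategy."""
    base = bonus[player].get(profile[player], 0)
    return base + sum(w for (u, w) in in_edges.get(player, [])
                      if profile[u] == profile[player])

def pure_nash_equilibria(players, strategies, in_edges, bonus):
    """Enumerate all pure profiles; keep those where no player can gain
    by unilaterally deviating."""
    eqs = []
    for choice in product(strategies, repeat=len(players)):
        profile = dict(zip(players, choice))
        if all(payoff(p, profile, in_edges, bonus) >=
               max(payoff(p, {**profile, p: s}, in_edges, bonus)
                   for s in strategies)
               for p in players):
            eqs.append(profile)
    return eqs

# Two players on a single weighted edge a -> b: player b gains 2 by
# coordinating with a, and a has a bonus of 1 for playing "x".
eqs = pure_nash_equilibria(["a", "b"], ["x", "y"],
                           {"b": [("a", 2)]},
                           {"a": {"x": 1}, "b": {}})
print(eqs)  # -> [{'a': 'x', 'b': 'x'}]
```

Checking whether a partial strategy assignment extends to some (or all) such equilibria is exactly the decision problem whose complexity the abstract classifies.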
Adding datatypes to ontology-mediated queries (OMQs) often makes query answering hard. As a consequence, the use of datatypes in OWL 2 QL has been severely restricted. In this paper we propose a new, non-uniform way of analyzing the data complexity of OMQ answering with datatypes. Instead of restricting the ontology language, we aim at a classification of the patterns of datatype atoms in OMQs into those that can occur in intractable OMQs and those that occur only in tractable OMQs. To this end, we establish a close link between OMQ answering with datatypes and constraint satisfaction problems over the datatypes. In a case study, we apply this link to prove a P/coNP-dichotomy for OMQs over DL-Lite extended with the datatype (Q, ≤). The proof employs a recent dichotomy result by Bodirsky and Kára for temporal constraint satisfaction problems.
This paper is intended as a contribution to the use of formal modal languages in Artificial Intelligence. We introduce a multi-modal version of Second-order Propositional Modal Logic (SOPML), an extension of modal logic with propositional quantification, and illustrate its usefulness as a specification language for knowledge representation as well as temporal and spatial reasoning. Then, we define novel notions of (bi)simulation and prove that they preserve the interpretation of SOPML formulas. Finally, we apply these results to assess the expressive power of SOPML.