Schröder, Lutz (DFKI GmbH, Bremen) | Pattinson, Dirk (Imperial College London)

Uncertainty and vagueness are pervasive phenomena in real-life knowledge. They are supported in extended description logics that adapt classical description logics to deal with numerical probabilities or fuzzy truth degrees. While the two concepts are distinguished for good reasons, they combine in the notion of probably, which is ultimately a fuzzy qualification of probabilities. Here, we develop existing propositional logics of fuzzy probability into a full-blown description logic, and we show decidability of several variants of this logic under Łukasiewicz semantics. We obtain these results in a novel generic framework of fuzzy coalgebraic logic; this enables us to extend our results to logics that combine crisp ingredients, including standard crisp roles and crisp numerical probabilities, with fuzzy roles and fuzzy probabilities.
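For readers unfamiliar with the semantics named above, the Łukasiewicz truth-value operations on degrees in [0, 1] can be sketched as follows; the function names are illustrative, not taken from the paper:

```python
def luk_and(a, b):
    """Łukasiewicz t-norm (strong conjunction)."""
    return max(0.0, a + b - 1.0)

def luk_or(a, b):
    """Łukasiewicz t-conorm (strong disjunction)."""
    return min(1.0, a + b)

def luk_implies(a, b):
    """Łukasiewicz residuated implication."""
    return min(1.0, 1.0 - a + b)

def luk_not(a):
    """Łukasiewicz involutive negation."""
    return 1.0 - a
```

Note that, unlike Gödel (min/max) semantics, the Łukasiewicz conjunction is not idempotent: `luk_and(0.7, 0.7)` yields 0.4, not 0.7.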

First-Order Logic (FOL) is widely regarded as one of the most important foundations for knowledge representation. Nevertheless, in this paper, we argue that FOL has several critical issues for this purpose. Instead, we propose an alternative called assertional logic, in which all syntactic objects are categorized as set-theoretic constructs, including individuals, concepts and operators, and all kinds of knowledge are formalized by equality assertions. We first present a primitive form of assertional logic that uses minimal assumed knowledge and constructs. Then, we show how to extend it by definitions, which are special kinds of knowledge, i.e., assertions. We argue that assertional logic, although simpler, is more expressive and extensible than FOL. As a case study, we show how assertional logic can be used to unify logic and probability, and potentially further building blocks in AI.

Riguzzi, Fabrizio (University of Ferrara) | Bellodi, Elena (University of Ferrara) | Lamma, Evelina (University of Ferrara) | Zese, Riccardo (University of Ferrara)

Modeling real-world domains increasingly requires representing uncertain information. The DISPONTE semantics for probabilistic description logics allows annotating axioms of a knowledge base with a value that represents their probability. In this paper we discuss approaches for performing inference from probabilistic ontologies following the DISPONTE semantics. We present the algorithm BUNDLE for computing the probability of queries. BUNDLE exploits an underlying Description Logic reasoner, such as Pellet, in order to find explanations for a query. These are then encoded in a Binary Decision Diagram that is used for computing the probability of the query.
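The computation BUNDLE performs can be illustrated by a minimal sketch: under DISPONTE, annotated axioms are independent Bernoulli choices, and the query probability is the probability that at least one explanation (a set of axioms) holds. The brute-force world enumeration below is exponential; BUNDLE's BDD encoding exists precisely to avoid it. All names here are illustrative, not the paper's API:

```python
from itertools import product

def query_probability(explanations, axiom_probs):
    """Probability that at least one explanation is satisfied,
    assuming DISPONTE-style independence of probabilistic axioms.

    explanations: list of sets of axiom identifiers
    axiom_probs:  dict mapping axiom identifier -> probability
    """
    axioms = sorted({a for e in explanations for a in e})
    total = 0.0
    # Enumerate every "world": a choice of included/excluded per axiom.
    for choice in product([True, False], repeat=len(axioms)):
        world = dict(zip(axioms, choice))
        # The query holds in this world if some explanation is fully included.
        if any(all(world[a] for a in e) for e in explanations):
            p = 1.0
            for a in axioms:
                p *= axiom_probs[a] if world[a] else 1.0 - axiom_probs[a]
            total += p
    return total
```

For two single-axiom explanations with probability 0.5 each, this gives 1 - 0.5 * 0.5 = 0.75, matching the inclusion-exclusion result.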

Hüllermeier, Eyke | Waegeman, Willem

The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.
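One common formalization of the aleatoric/epistemic split (one of several the survey covers, sketched here under that assumption) decomposes the entropy of an ensemble's averaged prediction into the mean member entropy (aleatoric) and the remainder, the mutual information between prediction and model (epistemic):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def uncertainty_decomposition(member_predictions):
    """Entropy-based decomposition for an ensemble of predictors.

    member_predictions: list of predictive distributions (lists of
    class probabilities), one per ensemble member.

    Returns (total, aleatoric, epistemic), where
      total     = entropy of the mean prediction,
      aleatoric = mean entropy of the members,
      epistemic = total - aleatoric (mutual information).
    """
    n = len(member_predictions)
    k = len(member_predictions[0])
    mean = [sum(p[i] for p in member_predictions) / n for i in range(k)]
    total = entropy(mean)
    aleatoric = sum(entropy(p) for p in member_predictions) / n
    return total, aleatoric, total - aleatoric
```

Two confident but disagreeing members, e.g. [1, 0] and [0, 1], yield purely epistemic uncertainty: each member is certain (aleatoric 0), yet the ensemble as a whole is maximally uncertain.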

Zennaro, Fabio Massimo | Jøsang, Audun

The multi-armed bandit problem is a classical decision-making problem where an agent has to learn an optimal action balancing exploration and exploitation. Properly managing this trade-off requires a correct assessment of uncertainty; in multi-armed bandits, as in other machine learning applications, it is important to distinguish between stochasticity that is inherent to the system (aleatoric uncertainty) and stochasticity that derives from the limited knowledge of the agent (epistemic uncertainty). In this paper we consider the formalism of subjective logic, a concise and expressive framework to express Dirichlet-multinomial models as subjective opinions, and we apply it to the problem of multi-armed bandits. We propose new algorithms grounded in subjective logic to tackle the multi-armed bandit problem, we compare them against classical algorithms from the literature, and we analyze the insights they provide in evaluating the dynamics of uncertainty. Our preliminary results suggest that subjective logic quantities enable useful assessment of uncertainty that may be exploited by more refined agents.
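The mapping from Dirichlet-multinomial models to subjective opinions that the abstract refers to can be sketched as follows, using the standard subjective-logic bijection (function and variable names are illustrative): belief mass per outcome is evidence over total mass, and the residual mass is explicit uncertainty:

```python
def multinomial_opinion(evidence, W=2.0):
    """Convert Dirichlet evidence counts into a subjective opinion.

    evidence: list of per-outcome evidence counts r_i
    W:        non-informative prior weight (commonly 2)

    Returns (b, u): belief masses b_i = r_i / (W + R) and
    uncertainty u = W / (W + R), where R = sum of evidence.
    By construction sum(b) + u == 1.
    """
    R = sum(evidence)
    b = [r / (W + R) for r in evidence]
    u = W / (W + R)
    return b, u
```

With no evidence the opinion is pure uncertainty (u = 1); as an arm accumulates pulls, u shrinks and belief mass concentrates, which is the dynamic a bandit agent can exploit to separate epistemic from aleatoric uncertainty.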