Inference and Learning for Probabilistic Description Logics

AAAI Conferences

The last few years have seen a rapidly growing interest in methods for combining probability with Description Logics (DLs). These methods are very useful for modeling real-world domains, where incompleteness and uncertainty are common, and this combination has become a fundamental component of the Semantic Web. Our work started with the development of a probabilistic semantics for DLs, called DISPONTE, that applies the distribution semantics to DLs. Under DISPONTE we annotate the axioms of a theory with a probability, which can be interpreted as the degree of our belief in the corresponding axiom, and we assume that each axiom is independent of the others. Several algorithms have been proposed for supporting the development of the Semantic Web. Efficient DL reasoners, such as Pellet, are able to extract implicit information from the modeled ontologies. Despite the availability of many DL reasoners, the number of probabilistic reasoners is quite small. We developed BUNDLE, a reasoner based on Pellet that computes the probability of queries. BUNDLE, like most DL reasoners, uses an imperative language to implement its reasoning algorithm. However, reasoning algorithms usually rely on non-deterministic operators for inference. One of the most widely used approaches is the tableau algorithm, which applies a set of consistency-preserving expansion rules to an ABox, but some of these rules are non-deterministic. In order to manage this non-determinism, we developed the system TRILL, which performs inference over DISPONTE DLs. It implements the tableau algorithm in the declarative Prolog language, whose search strategy is exploited to handle the non-determinism of the reasoning process. Moreover, we developed a second version of TRILL, called TRILL^P, which implements some optimizations for reducing the running time. The parameters of probabilistic KBs are difficult to set. It is thus necessary to develop systems which automatically learn these parameters from the information available in the KB. We presented EDGE, which learns the parameters of a DISPONTE KB, and LEAP, which learns the structure together with the parameters of a DISPONTE KB. Our main objective is to apply the developed algorithms to Big Data. However, the size of the data requires algorithms able to handle it, so it is necessary to exploit approaches based on parallelization and cloud computing. We are currently working on improving EDGE and LEAP in order to parallelize them.
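
To make the distribution semantics concrete, the following minimal Python sketch computes the probability of a query for a toy DISPONTE-style knowledge base by enumerating worlds, i.e., subsets of the probabilistic axioms chosen independently; the axiom names, probabilities, and explanation set are purely illustrative and are not the BUNDLE or TRILL API.

from itertools import product

# Toy DISPONTE-style KB: each probabilistic axiom has a probability of being
# included in a world. Axiom names and probabilities are illustrative only.
axioms = {"cat_subclassof_pet": 0.6, "fluffy_is_a_cat": 0.9}

# Explanations of the query "fluffy is a pet": each explanation is a set of
# probabilistic axioms that suffices (with the certain axioms) to entail it.
explanations = [{"cat_subclassof_pet", "fluffy_is_a_cat"}]

def query_probability(axioms, explanations):
    """Sum the probabilities of the worlds that entail the query."""
    names = list(axioms)
    total = 0.0
    for choices in product([True, False], repeat=len(names)):
        # Probability of this world: product over the independent axiom choices.
        p = 1.0
        for name, keep in zip(names, choices):
            p *= axioms[name] if keep else 1.0 - axioms[name]
        world = {name for name, keep in zip(names, choices) if keep}
        # The query holds in a world that contains at least one full explanation.
        if any(expl <= world for expl in explanations):
            total += p
    return total

print(query_probability(axioms, explanations))  # 0.6 * 0.9 = 0.54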


Reasoning with Probabilistic Logics

arXiv.org Artificial Intelligence

Interest in combining probability with logics for modeling the world has rapidly increased in the last few years. One of the most effective approaches is the distribution semantics, which has been adopted by many logic programming languages and by Description Logics. In this paper, we illustrate the work we have done in this research field by presenting a probabilistic semantics for Description Logics along with reasoning and learning algorithms. In particular, we present in detail the system TRILL^P, implemented in Prolog, which computes the probability of queries with respect to probabilistic knowledge bases. Note: extended abstract / full version of a paper accepted for presentation at the Doctoral Consortium of the 30th International Conference on Logic Programming (ICLP 2014), July 19-22, Vienna, Austria.
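
As a rough illustration of the style of search that TRILL^P delegates to Prolog, the Python sketch below exhaustively enumerates the branches generated by a list of non-deterministic choice points, in the spirit of backtracking; the choice points are toy placeholders, not actual tableau expansion rules.

def explore(choice_points, branch=()):
    """Exhaustively explore non-deterministic choice points, as Prolog
    backtracking does; yield one complete branch per combination of choices."""
    if not choice_points:
        yield branch
        return
    label, alternatives = choice_points[0]
    for alt in alternatives:  # non-deterministic choice, explored in turn
        yield from explore(choice_points[1:], branch + ((label, alt),))

# Two toy disjunctive choice points standing in for non-deterministic rules.
toy_choices = [("or-rule on x", ("C(x)", "D(x)")),
               ("or-rule on y", ("E(y)", "F(y)"))]
for branch in explore(toy_choices):
    print(branch)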


Probabilistic DL Reasoning with Pinpointing Formulas: A Prolog-based Approach

arXiv.org Artificial Intelligence

When modeling real-world domains we have to deal with information that is incomplete or that comes from sources with different trust levels. This motivates the need for managing uncertainty in the Semantic Web. For this purpose, we introduced a probabilistic semantics, named DISPONTE, that combines Description Logics with probability theory. The probability of a query can then be computed from the set of its explanations by building a Binary Decision Diagram (BDD). The set of explanations can be found using the tableau algorithm, which has to handle non-determinism. Prolog, with its efficient handling of non-determinism, is well suited for implementing the tableau algorithm. TRILL and TRILL^P are systems offering a Prolog implementation of the tableau algorithm. TRILL^P builds a pinpointing formula, which compactly represents the set of explanations and can be directly translated into a BDD. Both reasoners were shown to outperform state-of-the-art DL reasoners. In this paper, we present an improvement of TRILL^P, named TORNADO, in which the BDD is built directly during the construction of the tableau, further speeding up the overall inference process. An experimental comparison shows the effectiveness of TORNADO. All systems can be tried online in the TRILL on SWISH web application at http://trill.ml.unife.it/.
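
The step from explanations to a probability can be sketched as follows: viewing each probabilistic axiom as an independent Boolean variable, the explanations form a DNF formula, and its probability is obtained with the Shannon expansion that an ordered BDD encodes. The axiom names and probabilities in this Python sketch are illustrative and do not reflect TORNADO's data structures.

# Each explanation is a set of probabilistic axioms; the query holds in a world
# iff at least one explanation is fully contained in it.
prob = {"a1": 0.4, "a2": 0.7, "a3": 0.5}
explanations = [frozenset({"a1", "a2"}), frozenset({"a2", "a3"})]

def dnf_probability(explanations, prob):
    """Probability of a DNF over independent Boolean variables, computed with the
    Shannon expansion a BDD encodes: P(f) = p_x*P(f|x) + (1-p_x)*P(f|not x)."""
    order = sorted(prob)  # fixed variable order, as in an ordered BDD

    def shannon(expls, i):
        if frozenset() in expls:          # some explanation is fully satisfied
            return 1.0
        if not expls or i == len(order):  # no explanation can still be satisfied
            return 0.0
        x = order[i]
        pos = {e - {x} for e in expls}          # x included in the world
        neg = {e for e in expls if x not in e}  # x excluded from the world
        return prob[x] * shannon(pos, i + 1) + (1 - prob[x]) * shannon(neg, i + 1)

    return shannon(set(explanations), 0)

print(dnf_probability(explanations, prob))  # ~0.49 = 0.7 * (1 - 0.6 * 0.5)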


A Description Logic Framework for Commonsense Conceptual Combination Integrating Typicality, Probabilities and Cognitive Heuristics

arXiv.org Artificial Intelligence

We propose a nonmonotonic Description Logic of typicality able to account for the phenomenon of combining prototypical concepts. The proposed logic relies on the logic of typicality ALC + T_R, whose semantics is based on the notion of rational closure, as well as on the distributed semantics of probabilistic Description Logics, and is equipped with a cognitive heuristic used by humans for concept composition. We first extend the logic of typicality ALC + T_R with typicality inclusions whose intuitive meaning is that "there is probability p that typical Cs are Ds". As in the distributed semantics, we define different scenarios containing only some of the typicality inclusions, each one having a suitable probability. We then focus on those scenarios whose probabilities belong to a given and fixed range, and we exploit such scenarios in order to ascribe typical properties to a concept C obtained as the combination of two prototypical concepts. We also show that reasoning in the proposed Description Logic is EXPTIME-complete, as it is for the underlying ALC.
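
A small Python sketch of the scenario construction described above, assuming two hypothetical typicality inclusions: each inclusion is independently kept or discarded, a scenario's probability is the product of the corresponding p or 1 - p values, and only scenarios whose probability falls in a given range are retained (names, probabilities, and the range are illustrative).

from itertools import product

# Hypothetical typicality inclusions "typical Cs are Ds with probability p".
inclusions = {"T(Bird) subclassof Fly": 0.9, "T(Penguin) subclassof NotFly": 0.8}

def scenarios_in_range(inclusions, low, high):
    """Enumerate scenarios (subsets of typicality inclusions) and keep those
    whose probability lies in [low, high]."""
    names = list(inclusions)
    selected = []
    for choices in product([True, False], repeat=len(names)):
        scenario = {n for n, keep in zip(names, choices) if keep}
        p = 1.0
        for n, keep in zip(names, choices):
            p *= inclusions[n] if keep else 1.0 - inclusions[n]
        if low <= p <= high:
            selected.append((scenario, p))
    return selected

for scenario, p in scenarios_in_range(inclusions, 0.05, 0.5):
    print(sorted(scenario), round(p, 3))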