Ontologies


Interactive ontology debugging: two query strategies for efficient fault localization

arXiv.org Artificial Intelligence

Effective debugging of ontologies is an important prerequisite for their broad application, especially in areas that rely on everyday users to create and maintain knowledge bases, such as the Semantic Web. In such systems, ontologies capture formalized vocabularies of terms shared by their users. However, in many cases users have different local views of the domain, i.e. of the context in which a given term is used. Inappropriate usage of terms, together with the natural difficulty of formulating and understanding logical descriptions, may result in faulty ontologies. Recent ontology debugging approaches use diagnosis methods to identify causes of the faults. In most debugging scenarios these methods return many alternative diagnoses, thus placing the burden of fault localization on the user. This paper demonstrates how the target diagnosis can be identified by performing a sequence of observations, that is, by querying an oracle about entailments of the target ontology. To identify the best query we propose two query selection strategies: a simple "split-in-half" strategy and an entropy-based strategy. The latter allows knowledge about typical user errors to be exploited to minimize the number of queries. Our evaluation showed that the entropy-based method significantly reduces the number of required queries compared to the "split-in-half" approach. We experimented with different probability distributions of user errors and different qualities of the a priori probabilities. Our measurements demonstrated the superiority of entropy-based query selection even in cases where all fault probabilities are equal, i.e. where no information about typical user errors is available.
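To make the two strategies concrete, here is a minimal Python sketch (ours, not the paper's implementation; the diagnoses, priors, and queries are invented for illustration). It scores a candidate query by the expected entropy of the diagnosis distribution after the oracle's answer, and contrasts this with a count-based split-in-half choice:

import math

def entropy(probs):
    """Shannon entropy (bits) of a normalized distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def normalized(probs):
    total = sum(probs)
    return [p / total for p in probs]

def expected_entropy(predicts_yes, diagnoses):
    """Expected entropy of the diagnosis distribution after the oracle
    answers the query; predicts_yes(d) says whether diagnosis d predicts
    the queried entailment to hold."""
    yes = [p for d, p in diagnoses.items() if predicts_yes(d)]
    no = [p for d, p in diagnoses.items() if not predicts_yes(d)]
    result = 0.0
    if yes:
        result += sum(yes) * entropy(normalized(yes))
    if no:
        result += sum(no) * entropy(normalized(no))
    return result

# Four competing diagnoses; D1 is a priori the most likely fault.
diagnoses = {"D1": 0.7, "D2": 0.1, "D3": 0.1, "D4": 0.1}

# For each query, the set of diagnoses predicting a positive answer.
queries = {"q1": {"D1"}, "q2": {"D1", "D2"}}

# Entropy strategy: minimize expected posterior entropy.
best = min(queries, key=lambda q: expected_entropy(
    lambda d, q=q: d in queries[q], diagnoses))

# Split-in-half strategy: balance the *number* of diagnoses on each side.
best_split = min(queries, key=lambda q: abs(
    2 * len(queries[q]) - len(diagnoses)))

print("entropy-based choice:", best)        # q1: exploits the skewed priors
print("split-in-half choice:", best_split)  # q2: only balances the counts

With these priors, the entropy strategy asks about the entailment that isolates the dominant diagnosis D1, while split-in-half ignores the probabilities and merely halves the candidate set, which is why the former tends to need fewer queries.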


Natural Language Access to Enterprise Data

AI Magazine

This paper describes USI Answers, a natural language question answering system for enterprise data. We report on progress towards the goal of offering easy access to enterprise data to a large number of business users, most of whom are not familiar with the specific syntax or semantics of the underlying data sources. Additional complications come from the nature of the data, which is both structured and unstructured. The proposed solution allows users to express questions in natural language, makes the system's interpretation of the query apparent, and allows easy query adjustment and reformulation. The application is in use by more than 1,500 users from Siemens Energy. We evaluate our approach on a data set consisting of fleet data.


A Mining Method to Create Knowledge Map by Analysing the Data Resource

arXiv.org Artificial Intelligence

The fundamental step in measuring the robustness of a system is the synthesis of the so-called Process Map. This is generally based on the user's raw data. Process Maps are of fundamental importance for understanding the nature of a system, in that they indicate which variables are causally related and which are particularly important. This paper presents the system map, or business structure map, used to understand business criteria by studying the various aspects of a company. The business structure map, knowledge map, or process map is used to increase the growth of the company by providing useful measures according to the business criteria. This paper also deals with different company strategies for reducing risk factors. The Process Map is helpful for building such knowledge successfully. Making decisions from such a map in a highly complex situation requires more knowledge and resources.


Shiva: A Framework for Graph Based Ontology Matching

arXiv.org Artificial Intelligence

Corporations have long sought knowledge sources that provide a structured description of data and focus on meaning and shared understanding; structures that accommodate open-world assumptions, are flexible enough to recognize more than one name for an entity, and whose major purpose is to facilitate human communication and interoperability. Databases fail to provide these features, and ontologies have emerged as an alternative, but corporations working in the same domain tend to build different ontologies. The problem occurs when they want to share their data and knowledge; we therefore need tools to merge ontologies into one. This task is termed ontology matching. It is an emerging area, and there is still a long way to go toward an ideal matcher that produces good results. In this paper we present a framework for matching ontologies using graphs.
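As a rough illustration of graph-based matching, here is a minimal Python sketch of one common idea: label similarity propagated along graph structure, in the spirit of similarity flooding. The toy ontologies, weights, and iteration scheme are ours and need not reflect Shiva's actual algorithm:

from difflib import SequenceMatcher

def name_sim(a, b):
    """String similarity between two concept labels (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(onto1, onto2, alpha=0.7, iters=5):
    """Iteratively blend label similarity with neighbor similarity.
    onto1/onto2 map each concept to the set of concepts it links to
    (e.g. subclass/superclass edges). Returns a pairwise score table."""
    sim = {(a, b): name_sim(a, b) for a in onto1 for b in onto2}
    for _ in range(iters):
        new = {}
        for a in onto1:
            for b in onto2:
                if onto1[a] and onto2[b]:
                    # For each neighbor of a, take its best match
                    # among the neighbors of b, then average.
                    neigh = [max(sim[(x, y)] for y in onto2[b])
                             for x in onto1[a]]
                    propagated = sum(neigh) / len(neigh)
                else:
                    propagated = 0.0
                new[(a, b)] = alpha * name_sim(a, b) + (1 - alpha) * propagated
        sim = new
    return sim

# Two toy ontologies as adjacency maps (concept -> linked concepts).
o1 = {"Author": {"Document"}, "Document": {"Author"}}
o2 = {"Writer": {"Doc"}, "Doc": {"Writer"}}
for pair, s in sorted(match(o1, o2).items(), key=lambda kv: -kv[1]):
    print(pair, round(s, 2))

Even though "Author" and "Writer" share little lexically, their neighbors "Document" and "Doc" do, and the structural propagation lifts the Author/Writer pair above the cross pairs; this is the essential benefit of matching over graphs rather than over isolated labels.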


Quality of Geographic Information: Ontological approach and Artificial Intelligence Tools

arXiv.org Artificial Intelligence

The objective is to present one important aspect of the European IST-FET project "REV!GIS": the methodology that has been developed for the translation (interpretation) of the quality of the data into "fitness for use" information, which can be confronted with the user's needs in their application. This methodology is based upon the notion of "ontologies" as a conceptual framework able to capture the explicit and implicit knowledge involved in the application. We do not address the general problem of formalizing such ontologies; instead, we try to illustrate it with three applications which are particular cases of the more general "data fusion" problem. In each application, we show how to deploy our methodology by comparing several possible solutions, and we try to highlight where the quality issues lie and which kind of solution to favor, even at the expense of a highly complex computational approach. The expectation of the REV!GIS project is that computationally tractable solutions will be available among the next generation of AI tools.


The DL-Lite Family and Relations

arXiv.org Artificial Intelligence

The recently introduced series of description logics under the common moniker DL-Lite has attracted the attention of the description logic and semantic web communities due to the low computational complexity of inference, on the one hand, and the ability to represent conceptual modeling formalisms, on the other. The main aim of this article is to carry out a thorough and systematic investigation of inference in extensions of the original DL-Lite logics along five axes: by (i) adding the Boolean connectives and (ii) number restrictions to concept constructs, (iii) allowing role hierarchies, (iv) allowing role disjointness, symmetry, asymmetry, reflexivity, irreflexivity and transitivity constraints, and (v) adopting or dropping the unique name assumption. We analyze the combined complexity of satisfiability for the resulting logics, as well as the data complexity of instance checking and answering positive existential queries. Our approach is based on embedding DL-Lite logics in suitable fragments of one-variable first-order logic, which provides useful insights into their properties and, in particular, their computational behavior.
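To give the flavor of such an embedding (a schematic example in standard DL-Lite notation, not the article's full encoding, which involves further side conditions), a core concept inclusion is translated into the one-variable fragment by reifying existential role restrictions as fresh unary predicates:

\[
A \sqsubseteq \exists R \;\leadsto\; \forall x\,\bigl(A(x) \to E_R(x)\bigr),
\qquad
\exists R^- \sqsubseteq B \;\leadsto\; \forall x\,\bigl(E_{R^-}(x) \to B(x)\bigr)
\]

Here $E_R$ and $E_{R^-}$ are fresh unary predicates standing for the domain and range of the role $R$; additional axioms connect them so that every element satisfying $E_R$ has an $R$-successor satisfying $E_{R^-}$. Because every translated formula mentions a single variable, satisfiability can be settled within the one-variable fragment, which underlies the low complexity bounds.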


Hypertableau Reasoning for Description Logics

arXiv.org Artificial Intelligence

We present a novel reasoning calculus for the description logic SHOIQ^+, a knowledge representation formalism with applications in areas such as the Semantic Web. Unnecessary nondeterminism and the construction of large models are two primary sources of inefficiency in the tableau-based reasoning calculi used in state-of-the-art reasoners. In order to reduce nondeterminism, we base our calculus on hypertableau and hyperresolution calculi, which we extend with a blocking condition to ensure termination. In order to reduce the size of the constructed models, we introduce anywhere pairwise blocking. We also present an improved nominal introduction rule that ensures termination in the presence of nominals, inverse roles, and number restrictions, a combination of DL constructs that has proven notoriously difficult to handle. Our implementation shows significant performance improvements over state-of-the-art reasoners on several well-known ontologies.
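The following toy Python sketch illustrates the core hypertableau rule: a DL-clause fires only when its entire body is matched, so branching is confined to genuinely disjunctive heads. This is our drastic simplification, restricted to unary atoms over named individuals, with no existentials, no anywhere pairwise blocking, and no nominals:

def hypertableau(abox, clauses):
    """Toy saturation with DL-clauses (body -> disjunctive head).
    Returns True iff some branch saturates without a clash."""
    individuals = {i for _, i in abox}
    for body, head in clauses:
        for x in sorted(individuals):
            if not all((p, x) in abox for p in body):
                continue  # rule fires only on full body matches
            derived = [(p, x) for p in head]
            if any(d in abox for d in derived):
                continue  # clause already satisfied for this individual
            if not derived:
                return False  # empty head: clash, this branch closes
            # Branching (the only nondeterminism) over the head disjuncts.
            return any(hypertableau(abox | {d}, clauses) for d in derived)
    return True  # saturated without a clash: satisfiable

# TBox as DL-clauses:  A ⊑ B ⊔ C   and   B ⊓ C ⊑ ⊥
clauses = [(("A",), ("B", "C")), (("B", "C"), ())]
print(hypertableau({("A", "a")}, clauses))  # True: {A(a), B(a)} is a model

A classical tableau would rewrite A ⊑ B ⊔ C as the disjunction ¬A ⊔ B ⊔ C and guess a disjunct at every node; the hypertableau formulation above never branches unless the body A(x) actually holds, which is the nondeterminism reduction the calculus is built around.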


Message-Based Web Service Composition, Integrity Constraints, and Planning under Uncertainty: A New Connection

arXiv.org Artificial Intelligence

Thanks to recent advances, AI Planning has become the underlying technique for several applications. Figuring prominently among these is automated Web Service Composition (WSC) at the "capability" level, where services are described in terms of preconditions and effects over ontological concepts. A key issue in addressing WSC as planning is that ontologies are not only formal vocabularies; they also axiomatize the possible relationships between concepts. Such axioms correspond to what has been termed "integrity constraints" in the actions and change literature, and applying a web service is essentially a belief update operation. The reasoning required for belief update is known to be harder than reasoning in the ontology itself. The support for belief update is severely limited in current planning tools. Our first contribution consists in identifying an interesting special case of WSC which is both significant and more tractable. The special case, which we term "forward effects", is characterized by the fact that every ramification of a web service application involves at least one new constant generated as output by the web service. We show that, in this setting, the reasoning required for belief update simplifies to standard reasoning in the ontology itself. This relates to, and extends, current notions of "message-based" WSC, where the need for belief update is removed by a strong (often implicit or informal) assumption of "locality" of the individual messages. We clarify the computational properties of the forward effects case, and point out a strong relation to standard notions of planning under uncertainty, suggesting that effective tools for the latter can be successfully adapted to address the former. Furthermore, we identify a significant sub-case, named "strictly forward effects", where an actual compilation into planning under uncertainty exists. This enables us to exploit off-the-shelf planning tools to solve message-based WSC in a general form that involves powerful ontologies, and requires reasoning about partial matches between concepts. We provide empirical evidence that this approach may be quite effective, using Conformant-FF as the underlying planner.
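To make the forward-effects condition concrete (a schematic Python sketch; the service encoding and field names are ours, not the paper's formalism), one can check that every effect of a service mentions at least one freshly generated output constant:

def has_forward_effects(service):
    """Forward effects: every effect (and hence every ramification rooted
    in it) mentions at least one constant newly generated as an output."""
    outputs = set(service["outputs"])
    return all(outputs & set(effect["args"]) for effect in service["effects"])

# A hypothetical book-selling service: the output is a fresh order id,
# and both effects are anchored on it.
sell_book = {
    "inputs": ["book", "customer"],
    "outputs": ["order"],
    "effects": [
        {"pred": "OrderFor", "args": ["order", "book"]},
        {"pred": "PlacedBy", "args": ["order", "customer"]},
    ],
}
print(has_forward_effects(sell_book))  # True: belief update stays simple

# A service that only modifies an existing constant violates the condition.
cancel = {"inputs": ["order"], "outputs": [],
          "effects": [{"pred": "Cancelled", "args": ["order"]}]}
print(has_forward_effects(cancel))  # False

Intuitively, when every change is anchored on a fresh constant, the new facts cannot interact with ontology axioms to retract or revise beliefs about pre-existing individuals, which is why belief update collapses to ordinary reasoning in the ontology.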


Variable Forgetting in Reasoning about Knowledge

arXiv.org Artificial Intelligence

In this paper, we investigate knowledge reasoning within a simple framework called a knowledge structure. We use variable forgetting as a basic operation for one agent to reason about its own or other agents' knowledge. In our framework, two notions, namely agents' observable variables and the weakest sufficient condition, play important roles in knowledge reasoning. Given a background knowledge base and a set of observable variables for each agent, we show that the notion of an agent knowing a formula can be defined as the weakest sufficient condition of the formula under the background knowledge base. Moreover, we show how to capture the notion of common knowledge by using a generalized notion of weakest sufficient condition. Also, we show that the public announcement operator can be conveniently dealt with via our notion of knowledge structure. Further, we explore the computational complexity of the problem of whether an epistemic formula is realized in a knowledge structure. In the general case, this problem is PSPACE-hard; however, for some interesting subcases, it can be reduced to co-NP. Finally, we discuss possible applications of our framework in some interesting domains, such as the automated analysis of the well-known muddy children puzzle and the verification of the revised Needham-Schroeder protocol. We believe that there are many scenarios where the natural presentation of the available information about knowledge is in the form of a knowledge structure. What makes it valuable compared with the corresponding multi-agent S5 Kripke structure is that it can be much more succinct.
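As a small propositional illustration of the two central notions (a truth-table sketch in Python; the formula encoding and the example are ours, not the paper's), variable forgetting is existential quantification over the forgotten variables, and the weakest sufficient condition of φ on observables V under background K is ¬forget(K ∧ ¬φ, vars \ V):

from itertools import product

VARS = ["p", "q", "r"]

def models(formula):
    """All assignments over VARS satisfying `formula` (a predicate on dicts)."""
    out = []
    for bits in product([False, True], repeat=len(VARS)):
        m = dict(zip(VARS, bits))
        if formula(m):
            out.append(m)
    return out

def forget(formula, forgotten):
    """Existentially quantify `forgotten` away: the result holds iff some
    extension of the assignment satisfies `formula`."""
    keep = [v for v in VARS if v not in forgotten]
    proj = {tuple(m[v] for v in keep) for m in models(formula)}
    return lambda m: tuple(m[v] for v in keep) in proj

def wsc(formula, observables, background):
    """Weakest sufficient condition of `formula` on `observables` under
    `background`: not forget(background and not formula, hidden vars).
    An agent seeing only `observables` knows `formula` exactly when it holds."""
    hidden = [v for v in VARS if v not in observables]
    g = forget(lambda m: background(m) and not formula(m), hidden)
    return lambda m: not g(m)

# Background knowledge: r <-> (p and q).  The agent observes only r.
K = lambda m: m["r"] == (m["p"] and m["q"])
phi = lambda m: m["p"]                    # target: does the agent know p?
knows_p = wsc(phi, ["r"], K)
print(knows_p({"p": True, "q": True, "r": True}))    # True: r forces p
print(knows_p({"p": True, "q": False, "r": False}))  # False: r alone can't tell

The example mirrors the paper's core claim: "the agent knows p" is definable purely over its observable variables (here, just r), via the weakest sufficient condition under the background knowledge base.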


LB2CO: A Semantic Ontology Framework for B2C eCommerce Transaction on the Internet

arXiv.org Artificial Intelligence

Business ontology can enhance the successful development of complex enterprise systems; this is achieved through knowledge sharing and ease of communication among the entities in the domain. Through semantic markup of web resources, machines can interpret data published in a machine-interpretable form on the web. However, practical applications of business ontology in the eCommerce domain are quite few, especially for electronic transactions, and the various techniques used to achieve efficient communication across business spheres are error-prone and not always guaranteed to produce the desired result, owing to poor semantic integration between entities. To overcome this poor semantic integration, this research proposes an ontology called LB2CO, which combines the IDEF5 and SNAP frameworks as an analysis tool for the automated recommendation of products and services, and creates an effective ontological framework for B2C transaction and communication across different business domains that facilitates the interoperability and integration of B2C transactions over the web.