Ontologies are of central interest to various computer science disciplines such as the Semantic Web, information retrieval, and database design. They aim to provide a formal, explicit, and shared conceptualization and understanding of common domains between different communities. In addition, they allow the concepts of a specific domain, and the constraints on them, to be explicitly defined. However, the distributed nature of ontology development and the differing viewpoints of ontology engineers have resulted in so-called "semantic heterogeneity" between ontologies. Semantic heterogeneity constitutes the major obstacle to achieving interoperability between ontologies. To overcome this obstacle, we present a multi-purpose framework which exploits the WordNet generic knowledge base for: i) discovering and correcting incorrect semantic relations between the concepts of an ontology in a specific domain — a primary step of ontology merging; ii) merging domain-specific ontologies by computing semantic relations between their concepts; iii) handling the issue of concepts missing from WordNet by acquiring statistical information from the Web; and iv) enriching WordNet with these missing concepts. An experimental instantiation of the framework and comparisons with state-of-the-art syntactic and semantic-based systems validate our proposal.
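The core operation this abstract relies on — computing a semantic relation between two concepts via a hypernym hierarchy — can be sketched as follows. The toy taxonomy below is a hypothetical stand-in for WordNet's hypernym graph, and the Wu-Palmer measure is one common choice of similarity; the abstract does not specify which measure the framework actually uses.

```python
# Sketch: WordNet-style semantic relatedness between ontology concepts.
# The hypernym graph below is an illustrative toy taxonomy standing in
# for a real lexical database such as WordNet.

# child -> parent (hypernym) links; "entity" is the root
HYPERNYMS = {
    "vehicle": "entity",
    "car": "vehicle",
    "truck": "vehicle",
    "animal": "entity",
    "dog": "animal",
}

def path_to_root(concept):
    """Return the hypernym chain from concept up to the root."""
    path = [concept]
    while concept in HYPERNYMS:
        concept = HYPERNYMS[concept]
        path.append(concept)
    return path

def depth(concept):
    return len(path_to_root(concept))

def lcs(a, b):
    """Least common subsumer: deepest hypernym shared by a and b."""
    ancestors_a = set(path_to_root(a))
    for node in path_to_root(b):  # walks upward, so first hit is deepest
        if node in ancestors_a:
            return node
    return None

def wup_similarity(a, b):
    """Wu-Palmer similarity: 2*depth(lcs) / (depth(a) + depth(b))."""
    subsumer = lcs(a, b)
    if subsumer is None:
        return 0.0
    return 2.0 * depth(subsumer) / (depth(a) + depth(b))

print(wup_similarity("car", "truck"))  # siblings under "vehicle" score high
print(wup_similarity("car", "dog"))    # only "entity" in common: lower score
```

A merging system could threshold such scores to decide whether two concepts from different ontologies should be treated as equivalent, related, or distinct.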
In this paper we present a new method for ontology selection in a reuse context. Its novel feature is the iterative selection of the reused ontologies. Ontology selection is guided by the user according to their requirements and their perception of the target domain. Starting from an initially selected ontology, the concepts with the weakest density are identified; the ontology developer then chooses among them the ones to be refined in order to cover a specific scope of the domain.
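The density-driven refinement step might look like the sketch below. The abstract does not define "density", so this code assumes one plausible reading — the number of properties and relations attached to a concept — and the ontology data is purely illustrative.

```python
# Hypothetical sketch of density-based candidate selection: concepts with
# the fewest attached properties/relations ("weakest density", under the
# assumption stated above) are proposed to the developer for refinement.

ontology = {
    # concept: attached properties / outgoing relations (illustrative)
    "Person":   ["hasName", "hasAge", "worksFor"],
    "Company":  ["hasName", "locatedIn"],
    "Project":  ["hasTitle"],
    "Location": [],
}

def density(concept):
    """Density of a concept = how richly it is described."""
    return len(ontology[concept])

def weakest_concepts(k=2):
    """Return the k least-described concepts as refinement candidates."""
    return sorted(ontology, key=density)[:k]

print(weakest_concepts())  # the two concepts most in need of refinement
```

In the method described above, this list would not be applied automatically: the developer picks which of these candidates to refine, keeping selection user-guided.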
Ontologies are the prime way of organizing data in the Semantic Web. Often, it is necessary to combine several independently developed ontologies to obtain a knowledge graph fully representing a domain of interest. The complementarity of existing ontologies can be leveraged by merging them. Existing approaches for ontology merging mostly implement a binary merge. However, with the growing number and size of relevant ontologies across domains, scalability becomes a central challenge. A multi-ontology merging technique offers a potential solution to this problem. We present CoMerger, a scalable method for merging multiple ontologies. For efficient processing, rather than successively merging complete ontologies pairwise, we group related concepts across ontologies into partitions and merge first within and then across those partitions. Experimental results on well-known datasets confirm the feasibility of our approach and demonstrate its superiority over binary strategies. A prototypical implementation is freely accessible through a live web portal.
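The partition-then-merge idea can be illustrated with a minimal sketch. Here concepts are grouped naively by normalized label — CoMerger itself uses more sophisticated grouping — and all ontology data is invented for illustration.

```python
# Minimal sketch of partition-based multi-ontology merging: concepts from
# all input ontologies are partitioned (naively, by lowercased label),
# merged within each partition, then the partitions are combined.

from collections import defaultdict

# Three illustrative input ontologies: concept -> set of properties
ontologies = [
    {"Car": {"hasWheel"}, "Person": {"hasName"}},
    {"car": {"hasEngine"}, "Driver": {"hasLicense"}},
    {"PERSON": {"hasAge"}, "Car": {"hasColor"}},
]

def merge(ontologies):
    # Step 1: partition concepts across all ontologies at once,
    # avoiding a chain of pairwise whole-ontology merges.
    partitions = defaultdict(list)
    for onto in ontologies:
        for concept, props in onto.items():
            partitions[concept.lower()].append(props)
    # Step 2: merge within each partition (union the property sets).
    merged = {label: set().union(*prop_sets)
              for label, prop_sets in partitions.items()}
    # Step 3: combining across partitions is trivial here because the
    # label-based partitions are disjoint by construction.
    return merged

result = merge(ontologies)
print(result["car"])  # properties gathered from all three inputs at once
```

The scalability argument is visible even in this toy: each concept is touched once during partitioning, instead of being re-examined in every step of a pairwise merge chain.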
In this article, we discuss some issues that arise when ontologies are used to support corporate application domains such as electronic commerce (e-commerce), and some technical problems in deploying ontologies for real-world use. In particular, we focus on issues of ontology integration and the related problem of semantic mapping, that is, the mapping of ontologies and taxonomies to reference ontologies so as to preserve semantics. Along the way, we discuss what typically constitutes an ontology architecture. We situate the discussion in the domain of business-to-business (B2B) e-commerce. By its very nature, B2B e-commerce must try to interlink buyers and sellers from multiple companies with disparate product-description terminologies and meanings, thus serving as a paradigmatic case for the use of ontologies to support corporate applications.
Editor's Note: An update to this article was posted on 7/14/04. As the hype of past decades fades, the current heir to the artificial intelligence legacy may well be ontologies. Evolving from semantic network notions, modern ontologies are proving quite useful. And they are doing so without relying on the jumble of rule-based techniques common in earlier knowledge representation efforts. These structured depictions or models of known (and accepted) facts are being built today to make a number of applications more capable of handling complex and disparate information.