Semantic Web

Ontologies-based Architecture for Sociocultural Knowledge Co-Construction Systems (Artificial Intelligence)

In the evolution of platforms based on semantic wiki engines, two main approaches can be distinguished: Ontologies for Wikis (OfW) and Wikis for Ontologies (WfO). The OfW vision requires existing ontologies to be imported. Most such systems use RDF (Resource Description Framework) in conjunction with a standard SQL (Structured Query Language) database to manage and query semantic data. However, a relational database is not an ideal store for semantic data. A more natural data model for SMW (Semantic MediaWiki) is RDF, a format that organizes information in graphs rather than in fixed database tables. This paper presents an ontology-based architecture that implements this idea. The architecture comprises three functional layers: a Web User Interface Layer, a Semantic Layer and a Persistence Layer.

Introduction. This research study is set in an African context, where the central problem is economic and social development and the means to achieve it. Indeed, after the failure of several development models in recent decades, theoretical research seems to be turning to knowledge-based approaches to development (UNESCO, 2014). The place of knowledge, science and technology in the current dynamics of growth intensifies reflection within the economic field.
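The contrast the abstract draws between fixed relational tables and an RDF-style graph can be sketched in a few lines of Python. Everything below (the namespace, the facts, the `match` helper) is illustrative, not taken from the paper's architecture:

```python
# Minimal sketch: semantic data as a set of (subject, predicate, object)
# triples queried by pattern matching, instead of rows in fixed SQL tables.

EX = "http://example.org/culture#"  # illustrative namespace, assumed

triples = {
    (EX + "Sabar", "rdf:type", EX + "Dance"),
    (EX + "Sabar", EX + "practicedIn", EX + "Senegal"),
    (EX + "Ndut", "rdf:type", EX + "Rite"),
}

def match(pattern, store):
    """Return all triples matching a pattern; None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which resources are dances?" -- adding a new predicate later needs no
# schema migration, unlike a fixed relational table.
dances = [s for s, _, _ in match((None, "rdf:type", EX + "Dance"), triples)]
print(dances)
```

A triple store generalizes this pattern matching to SPARQL basic graph patterns, which is what makes RDF a natural fit for the Semantic Layer described above.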

In Between Years. The Year of the Graph Newsletter: January 2019


In between years, or zwischen den Jahren, is a German expression for the period between Christmas and New Year. This is traditionally a time of year when not much happens, and the playful expression lingers between the literal and the metaphoric. As the first edition of the Year of the Graph newsletter is here, a short retrospective may be due in addition to the usual updates. When we called 2018 the Year of the Graph, we did not have to wait for the Gartners of the world to verify what we saw coming. We can say without a doubt that this has been the year graphs went mainstream.

Towards Compositional Distributional Discourse Analysis (Artificial Intelligence)

In the last couple of decades, the traditional symbolic approach to AI and cognitive science -- which aims at characterising human intelligence in terms of abstract logical processes -- has been challenged by so-called connectionist AI: the study of the human brain as a complex network of basic processing units [18]. When it comes to human language, the same divide manifests itself as the opposition between two principles, which in turn induce two distinct approaches to Natural Language Processing (NLP). On one hand, Frege's principle of compositionality asserts that the meaning of a complex expression is a function of its sub-expressions and the way in which they are composed; distributionality, on the other hand, can be summed up in Firth's maxim "You shall know a word by the company it keeps". Once implemented in terms of concrete algorithms, we have expert systems driven by formal logical rules on one end, and artificial neural networks and machine learning on the other. Categorical Compositional Distributional (DisCoCat) models, first introduced in [4], aim at getting the best of both worlds: the string diagram notation borrowed from category theory allows us to manipulate grammatical reductions as linear maps, and to graphically compute the semantics of a sentence as the composition of the vectors obtained from the distributional semantics of its constituent words. In this paper, we introduce basic anaphoric discourses as mid-level representations between natural language discourse on one end -- formalised in terms of basic discourse representation structures (DRS) [2] -- and knowledge queries over the Semantic Web on the other -- given by basic graph patterns in the Resource Description Framework (RDF) [19]. We construct discourses as formal diagrams of real-valued matrices, and we then use these diagrams to give abstract reformulations of NLP problems: probabilistic anaphora resolution and question answering.
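The core DisCoCat composition step the abstract describes -- grammatical reduction as contraction of linear maps -- can be illustrated with a toy NumPy sketch. The dimensions, noun vectors and verb matrix below are invented for illustration; a real model would obtain them from distributional data:

```python
import numpy as np

# Toy DisCoCat-style sketch: nouns live in a vector space N, and a
# transitive verb is a linear map on N x N into a sentence space S.
# Taking S to be one-dimensional reduces composition to a bilinear form.

alice = np.array([1.0, 0.0])   # hypothetical distributional noun vectors
bob   = np.array([0.0, 1.0])

loves = np.array([[0.1, 0.9],  # hypothetical verb matrix
                  [0.8, 0.2]])

def sentence_meaning(subj, verb, obj):
    # Grammatical reduction as tensor contraction: subject . verb . object
    return subj @ verb @ obj

print(sentence_meaning(alice, loves, bob))  # 0.9
print(sentence_meaning(bob, loves, alice))  # 0.8
```

Note that the two orderings give different values: the composition respects word order, which is exactly what a bag-of-words distributional model loses.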

Infrastructure for the representation and electronic exchange of design knowledge (Artificial Intelligence)

This paper develops the concept of knowledge and its exchange using Semantic Web technologies. It points out that knowledge is more than information because it embodies meaning, that is to say semantics and context. These characteristics influence our approach to representing and processing knowledge. To be adopted, the developed system needs to be simple and to use standards. The goal of the paper is to find standards to model knowledge and exchange it with another person. We therefore propose to model knowledge with UML, for a graphical representation, and to exchange it with XML, to ensure portability at low cost. We introduce the concept of ontology for organizing knowledge and facilitating its exchange. The proposals have been tested by implementing an application on the design knowledge of a pen.
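The XML exchange the paper proposes can be sketched with Python's standard library. The element names and the pen example below are assumptions for illustration, not the paper's actual schema; the point is that the serialized form carries the knowledge together with its context:

```python
import xml.etree.ElementTree as ET

# Hypothetical exchange format for design knowledge about a pen.
knowledge = ET.Element("knowledge", domain="pen-design")
concept = ET.SubElement(knowledge, "concept", name="InkReservoir")
ET.SubElement(concept, "context").text = "ballpoint pen"
ET.SubElement(concept, "rule").text = "reservoir capacity determines writing length"

# Serialize to portable XML text for exchange.
xml_text = ET.tostring(knowledge, encoding="unicode")
print(xml_text)

# A receiver parses it back, recovering the data plus its context.
parsed = ET.fromstring(xml_text)
print(parsed.find("concept").get("name"))  # InkReservoir
```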

2018 Semantic Web Challenge winners announced


Elsevier, the global information analytics business specializing in science and health, is pleased to announce the winner of the 2018 Semantic Web Challenge (SWC). The winner was recently announced at the 17th International Semantic Web Conference held in Monterey County, California, USA. The challenge and allocated prize were sponsored by Elsevier. The Semantic Web Challenge is a highly prestigious and the longest-running competition fostering scientific progress in the field of artificial intelligence on the web. The semantic web and the use of linked data extend the current human-readable web by encoding some of the semantics of resources in a machine-readable form.

A Methodology for Search Space Reduction in QoS Aware Semantic Web Service Composition (Artificial Intelligence)

Semantic information governs the expressiveness of a web service. State-of-the-art approaches in web services research have used the semantics of a web service for different purposes, mainly service discovery, composition and execution. In this paper, our main focus is on semantics-driven Quality of Service (QoS) aware service composition. Most contemporary approaches to service composition use semantic information to combine services appropriately into a composition solution. Here, however, our intention is to use the semantic information to expedite the service composition algorithm itself. We present a service composition framework that uses the semantic information of a web service to generate clusters in which services are semantically related. Our final aim is to construct a composition solution from these clusters that scales efficiently to large service spaces while ensuring solution quality. Experimental results show the efficiency of the proposed method.
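A minimal sketch of the clustering idea, with invented service descriptions and a hypothetical Jaccard similarity threshold (the paper's actual similarity measure and algorithm may differ): services whose semantic concepts overlap land in the same cluster, so the composition search visits clusters rather than the full service space.

```python
# Each service is described by a set of semantic concepts (e.g. I/O types).
# All names and the threshold below are illustrative assumptions.
services = {
    "GeoLocate": {"Address", "Coordinates"},
    "MapRender": {"Coordinates", "MapImage"},
    "Translate": {"Text", "TranslatedText"},
    "Summarize": {"Text", "Summary"},
}

def jaccard(a, b):
    """Overlap of two concept sets, in [0, 1]."""
    return len(a & b) / len(a | b)

def cluster(descs, threshold=0.2):
    """Greedy single-pass clustering: join the first cluster whose seed
    service is semantically related, else start a new cluster."""
    clusters = []
    for name, concepts in descs.items():
        for c in clusters:
            if jaccard(concepts, descs[c[0]]) >= threshold:
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

print(cluster(services))
# [['GeoLocate', 'MapRender'], ['Translate', 'Summarize']]
```

A composition algorithm can then restrict its search to the cluster(s) relevant to the request's concepts, instead of scanning every service.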

Rule-based OWL Modeling with ROWLTab Protégé Plugin (Artificial Intelligence)

It has been argued that it is much easier to convey logical statements using rules rather than OWL (or description logic (DL)) axioms. Based on recent theoretical developments on transformations between rules and DLs, we have developed ROWLTab, a Protégé plugin that allows users to enter OWL axioms by way of rules; the plugin then automatically converts these rules into OWL 2 DL axioms if possible, and prompts the user when such a conversion is not possible without weakening the semantics of the rule. In this paper, we present ROWLTab together with a user evaluation of its effectiveness compared to entering axioms through the standard Protégé interface. Our evaluation shows that modeling with ROWLTab is much quicker than with the standard interface and, at the same time, less prone to errors on hard modeling tasks.

Lehigh research team to investigate a 'Google for research data'


IMAGE: Brian Davison, Associate Professor of Computer Science Engineering at Lehigh University, is principal investigator of an NSF-backed project to develop a search engine intended to help scientists and others locate...

There was a time--not that long ago--when the phrases "Google it" or "check Yahoo" would have been interpreted as sneezes, or perhaps symptoms of an oncoming seizure, rather than as coherent thoughts. Today, these are key to answering all of life's questions. It's one thing to use the Web to keep up with a Kardashian, shop for ironic T-shirts, argue with our in-laws about politics, or any of the other myriad ways we use the Web in today's world. But if you are a serious researcher looking for real data that can help you advance your ideas, how useful are the underlying technologies that support the search engines we've all come to take for granted? "Not very," says Brian Davison, associate professor of computer science at Lehigh University.

Semantic web technologies to build intelligent applications


Mathieu d'Aquin is a Professor of Informatics specialising in data analytics and semantic technologies at the Insight Centre for Data Analytics of the National University of Ireland Galway. He was previously Senior Research Fellow at the Knowledge Media Institute of the Open University, where he led the Data Science Group. In this interview, he speaks about research on semantic web technologies and specific applications of web data technologies, the two key areas of his work. You have been working for years on Semantic Web/Linked Data technologies. What will shape our future the most?

Datalog: Bag Semantics via Set Semantics (Artificial Intelligence)

Duplicates in data management are common and problematic. In this work, we present a translation of Datalog under bag semantics into a well-behaved extension of Datalog (the so-called warded Datalog±) under set semantics. From a theoretical point of view, this allows us to reason about bag semantics using the well-established theoretical foundations of set semantics. From a practical point of view, it allows us to handle the bag semantics of Datalog with powerful existing query engines for the required extension of Datalog. Moreover, the translation has the potential for further extensions -- above all, to capture the bag semantics of the semantic web query language SPARQL.
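The key obstacle -- under set semantics, duplicates simply vanish -- can be worked around by annotating each fact occurrence with a distinct identifier, so that a set of annotated facts encodes a bag. The Python sketch below illustrates that encoding only; it is a simplification for illustration, not the paper's exact translation into warded Datalog±:

```python
from collections import Counter

# A bag of facts: edge(a,b) occurs twice.
bag_edb = ["edge(a,b)", "edge(a,b)", "edge(b,c)"]

# Naive set semantics collapses the duplicates.
assert len(set(bag_edb)) == 2

# Set-semantics encoding of the bag: pair each occurrence with a fresh id.
set_edb = {(fact, i) for i, fact in enumerate(bag_edb)}
assert len(set_edb) == 3  # nothing collapsed

# Multiplicities are recovered by projecting away the identifiers.
multiplicity = Counter(fact for fact, _ in set_edb)
print(multiplicity["edge(a,b)"])  # 2
```

Derived facts can carry identifiers built from the identifiers of the facts they were derived from, which is where the existential features of an extension like warded Datalog± come into play.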