The World Wide Web changed the way we live our lives, most notably in the ways we now share, consume and find information. There are many more webpages now than there are people, and links connect these webpages to each other in a giant network that is accessible from your favorite browser.
A downside of this success is that there is now too much information; so much, in fact, that we need machines to read these webpages intelligently and answer our questions. The Semantic Web is a movement and research community that brings together experts from different areas, including natural language processing, ontologies, databases, social media, networks, and logic, to realize the vision of making the Web machine-readable.
Why is this such a difficult problem? The main reason is that much of the Web, even today, is in a natural language like English or French. These languages are very ambiguous, but we humans have a knack for understanding them due to a variety of factors, not the least of which is our immense store of background knowledge and common sense. Machines are not yet capable of understanding English at the same level as an adult human being, though impressive progress is being made.
To overcome this problem, the Semantic Web presents a vision of the Web as an interlinked network of concepts, relationships, and entities, rather than an interlinked network of ‘natural’ webpages. Intelligent systems, often called ‘agents’, can consume the Semantic Web and answer complex questions that currently require human labor. Semantic Web research also helps search; for example, the Google Knowledge Graph, which uses Semantic Web technology, can answer some of your questions without your even clicking a link!
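The triple-based view of knowledge described above can be sketched in a few lines of plain Python. This is a toy illustration: the entities and relations are invented, and a real deployment would use RDF tooling such as rdflib and SPARQL rather than tuples and list comprehensions.

```python
# A miniature knowledge graph: a set of (subject, predicate, object) triples.
triples = {
    ("Ada_Lovelace", "type", "Person"),
    ("Ada_Lovelace", "bornIn", "London"),
    ("London", "type", "City"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# An agent answering "where was Ada Lovelace born?" without parsing any prose:
answers = [o for _, _, o in match(s="Ada_Lovelace", p="bornIn")]
print(answers)  # ['London']
```

Because the facts are structured rather than buried in sentences, answering the question is a pattern match instead of a language-understanding problem.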
Nature has been an inspiration for many innovations throughout history. Neuroscience and the brain have inspired the development of deep learning theories and applications and have led to strong performance in artificial vision systems. However, even well-known neuroscience theories have not yet been fully utilized by artificial vision, not to mention those still undiscovered. The Feature Parallelism Model is likewise inspired by nature: it is a vision model that incorporates as-yet-unutilized findings about the human visual system, such as the Feature Integration Theory of visual attention (FIT).
Digital agriculture increasingly relies on the generation of large quantities of images. These images are processed with machine learning techniques to speed up the identification, classification, visualization, and interpretation of objects. However, images must comply with the FAIR principles to facilitate their access, reuse, and interoperability. As stated in a recent paper authored by the Planteome team (Trigkakis et al., 2018), "Plant researchers could benefit greatly from a trained classification model that predicts image annotations with a high degree of accuracy." In this third Ontologies Community of Practice webinar, Justin Preece, Senior Faculty Research Assistant at Oregon State University, presents the module developed by the Planteome project using the Bio-Image Semantic Query User Environment (BISQUE), an online image analysis and storage platform of CyVerse.
Numerous successful use cases involving deep learning have recently begun to propagate to the Semantic Web. Approaches range from utilizing structured knowledge in the training process of neural networks to enriching such architectures with ontological reasoning mechanisms. Bridging the neural-symbolic gap by joining deep learning and the Semantic Web holds the potential not only of improving performance but also of opening up new avenues of research. This editorial introduces the Semantic Web Journal special issue on Semantic Deep Learning, which brings together Semantic Web and deep learning research. After a general introduction to the topic and a brief overview of recent contributions, we introduce the submissions published in this special issue.
The disruptive potential of the upcoming digital transformations for the industrial manufacturing domain has led to several reference frameworks and numerous standardization approaches. On the other hand, the Semantic Web community has made significant contributions in the field, for instance on data and service description, integration of heterogeneous sources and devices, and AI techniques in distributed systems. These two streams of work are, however, mostly unrelated and only briefly regard each other's requirements, practices, and terminology. We contribute to closing this gap by providing the Semantic Asset Administration Shell, an RDF-based representation of the Industrie 4.0 Component. We provide an ontology for the latest data model specification, create an RML mapping, supply resources to validate the RDF entities, and introduce basic reasoning on the Asset Administration Shell data model. Furthermore, we discuss the different assumptions and presentation patterns, and analyze the implications of a semantic representation on the original data. We evaluate the overhead thereby created and conclude that the semantic lifting is manageable, also for restricted or embedded devices, and therefore meets the needs of Industrie 4.0 scenarios.
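The semantic lifting described above can be sketched as follows. This is a minimal, hypothetical illustration: the record fields and the property URIs are invented for the sketch and are not the actual terms of the AAS ontology or the authors' RML mapping.

```python
# Hypothetical sketch: lifting a simplified Asset Administration Shell (AAS)
# record into RDF-style triples. Field and property names are illustrative.
BASE = "https://example.org/aas/"

aas_record = {
    "idShort": "MotorTemperatureSensor",
    "assetKind": "Instance",
    "submodels": ["TechnicalData", "OperationalData"],
}

REQUIRED = {"idShort", "assetKind"}

def validate(record):
    """A stand-in for shape validation: check that required fields exist."""
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError(f"missing required AAS fields: {missing}")

def lift(record):
    """Map one AAS record to (subject, predicate, object) triples."""
    validate(record)
    subject = BASE + record["idShort"]
    triples = [(subject, BASE + "assetKind", record["assetKind"])]
    for sm in record["submodels"]:
        triples.append((subject, BASE + "hasSubmodel", BASE + sm))
    return triples

for t in lift(aas_record):
    print(t)
```

The overhead of such a lifting is essentially one pass over the source record, which is consistent with the paper's conclusion that semantic lifting remains manageable even for restricted devices.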
Considering the evolution of platforms based on semantic wiki engines, two main approaches can be distinguished: Ontologies for Wikis (OfW) and Wikis for Ontologies (WfO). The OfW vision requires existing ontologies to be imported. Most such systems use RDF (Resource Description Framework) in conjunction with a standard SQL (Structured Query Language) database to manage and query semantic data. However, a relational database is not an ideal store for semantic data. A more natural data model for SMW (Semantic MediaWiki) is RDF, a data format that organizes information in graphs rather than in fixed database tables. This paper presents an ontology-based architecture that aims to implement this idea. The architecture comprises three functional layers: a Web User Interface Layer, a Semantic Layer, and a Persistence Layer. Introduction: This research study is set in an African context, where the main problem is economic and social development and the means to achieve it. Indeed, after the failure of several development models in recent decades, theoretical research seems to be turning to knowledge-based development approaches (UNESCO, 2014). The place of knowledge, science, and technology in the current dynamics of growth has intensified reflection within the economic field.
In between years, or zwischen den Jahren, is a German expression for the period between Christmas and New Year. This is traditionally a time of year when not much happens, and the playful expression itself lingers between the literal and the metaphoric. As the first edition of the Year of the Graph newsletter is here, a short retrospective may be due in addition to the usual updates. When we called 2018 the Year of the Graph, we did not have to wait for the Gartners of the world to verify what we saw coming. We can say without a doubt that this has been the year graphs went mainstream.
In the last couple of decades, the traditional symbolic approach to AI and cognitive science, which aims at characterising human intelligence in terms of abstract logical processes, has been challenged by so-called connectionist AI: the study of the human brain as a complex network of basic processing units. When it comes to human language, the same divide manifests itself as the opposition between two principles, which in turn induce two distinct approaches to Natural Language Processing (NLP). On one hand, Frege's principle of compositionality asserts that the meaning of a complex expression is a function of its sub-expressions and the way in which they are composed; distributionality, on the other hand, can be summed up in Firth's maxim "You shall know a word by the company it keeps". Once implemented in terms of concrete algorithms, we have expert systems driven by formal logical rules on one end, and artificial neural networks and machine learning on the other. Categorical Compositional Distributional (DisCoCat) models aim at getting the best of both worlds: the string-diagram notation borrowed from category theory allows us to manipulate grammatical reductions as linear maps and to compute graphically the semantics of a sentence as the composition of the vectors obtained from the distributional semantics of its constituent words. In this paper, we introduce basic anaphoric discourses as mid-level representations between natural language discourse on one end, formalised in terms of basic discourse representation structures (DRS), and knowledge queries over the Semantic Web on the other, given by basic graph patterns in the Resource Description Framework (RDF). We construct discourses as formal diagrams of real-valued matrices, and we then use these diagrams to give abstract reformulations of NLP problems: probabilistic anaphora resolution and question answering.
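The compositional half of the DisCoCat recipe, computing a sentence meaning by contracting word vectors with a verb's linear map, can be illustrated with a toy numpy example. Dimensions and values here are invented for the sketch; in a real model they would come from distributional (corpus) data.

```python
import numpy as np

# Toy DisCoCat-style sketch: nouns live in a 2-dimensional noun space N, and
# a transitive verb is a linear map N (x) N -> S with a 1-dimensional
# sentence space S, i.e. a 2x2 matrix.
alice = np.array([1.0, 0.0])
code  = np.array([0.0, 1.0])

# "writes" as a matrix: entry [i, j] scores subject feature i against
# object feature j.
writes = np.array([[0.1, 0.9],
                   [0.2, 0.0]])

def sentence_meaning(subj, verb, obj):
    # The grammatical reduction contracts the subject and object wires
    # with the verb's two noun wires: subj^T . Verb . obj.
    return subj @ verb @ obj

print(sentence_meaning(alice, writes, code))   # 0.9
print(sentence_meaning(code, writes, alice))   # 0.2
```

Note that "alice writes code" and "code writes alice" receive different values: grammar, not just word co-occurrence, shapes the result, which is the point of combining compositionality with distributionality.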
This paper develops the concept of knowledge and its exchange using Semantic Web technologies. It points out that knowledge is more than information because it embodies meaning, that is to say, semantics and context. These characteristics influence our approach to representing and treating knowledge. In order to be adopted, the developed system needs to be simple and to use standards. The goal of the paper is to find standards to model knowledge and exchange it with another person. Therefore, we propose to model knowledge using UML models, to provide a graphical representation, and to exchange it with XML to ensure portability at low cost. We introduce the concept of ontology for organizing knowledge and facilitating its exchange. The proposals have been tested by implementing an application on the design knowledge of a pen.
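The XML-based exchange idea can be sketched with Python's standard library. The element names and the pen properties below are illustrative inventions, not the schema used in the paper; note how each property carries its context, reflecting the claim that knowledge embodies meaning and context, not just values.

```python
import xml.etree.ElementTree as ET

def knowledge_to_xml(concept, properties):
    """Serialize a small piece of design knowledge to portable XML.

    properties maps a property name to a (value, context) pair, so the
    exchanged document keeps the context that turns information into
    knowledge.
    """
    root = ET.Element("knowledge", {"concept": concept})
    for name, (value, context) in properties.items():
        prop = ET.SubElement(root, "property",
                             {"name": name, "context": context})
        prop.text = value
    return ET.tostring(root, encoding="unicode")

# Illustrative design knowledge about a pen:
xml_doc = knowledge_to_xml("pen", {
    "bodyMaterial": ("aluminium", "manufacturing"),
    "inkColour":    ("blue", "usage"),
})
print(xml_doc)
```

The receiving side can parse the document with any standard XML library, which is the low-cost portability the paper argues for.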
Elsevier, the global information analytics business specializing in science and health, is pleased to announce the winner of the 2018 Semantic Web Challenge (SWC). The winner was recently announced at the 17th International Semantic Web Conference, held in Monterey County, California, USA. The challenge and its prize were sponsored by Elsevier. The Semantic Web Challenge is a highly prestigious competition, and the longest-running one, fostering scientific progress in the field of artificial intelligence on the web. The Semantic Web and the use of linked data extend the current human-readable web by encoding some of the semantics of resources in a machine-readable form.
Semantic information governs the expressiveness of a web service. State-of-the-art approaches in web services research have used the semantics of a web service for different purposes, mainly for service discovery, composition, and execution. In this paper, our main focus is on semantics-driven, Quality of Service (QoS) aware service composition. Most contemporary approaches to service composition have used semantic information to combine services appropriately into a composition solution. In this paper, however, our intention is to use the semantic information to expedite the service composition algorithm itself. We present a service composition framework that uses the semantic information of a web service to generate clusters in which services are semantically related. Our final aim is to construct a composition solution using these clusters that can efficiently scale to large service spaces while ensuring solution quality. Experimental results show the efficiency of our proposed method.
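The clustering idea can be sketched as follows. This is a toy illustration with invented service names and concept annotations; the paper's actual similarity measure and clustering algorithm may differ, but the effect is the same: composition can search within small clusters instead of the full service space.

```python
# Each service is annotated with a set of ontology concepts (invented here).
services = {
    "BookFlight":  {"Travel", "Flight", "Booking"},
    "ReserveSeat": {"Travel", "Flight", "Seat"},
    "PayInvoice":  {"Payment", "Invoice"},
    "RefundCard":  {"Payment", "Card"},
}

def jaccard(a, b):
    """Semantic similarity as overlap of concept annotations."""
    return len(a & b) / len(a | b)

def cluster(services, threshold=0.25):
    """Greedily group services whose annotations are similar enough."""
    clusters = []
    for name, concepts in services.items():
        for group in clusters:
            if any(jaccard(concepts, services[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

print(cluster(services))
# [['BookFlight', 'ReserveSeat'], ['PayInvoice', 'RefundCard']]
```

With k clusters of roughly n/k services each, a composition step that would scan all n candidates only needs to scan the relevant cluster, which is where the scalability gain comes from.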