The World Wide Web changed the way we live our lives, most notably in the ways we now share, consume and find information. There are many more webpages now than there are people, and links connect these webpages to each other in a giant network that is accessible from your favorite browser.
A downside of this success is that there is now too much information: so much, in fact, that we need machines to intelligently read these webpages and answer our questions. The Semantic Web is a movement and research community that brings together experts from different areas, such as natural language processing, ontologies, databases, social media, networks and logic, to realize the vision of making the Web machine-readable.
Why is this such a difficult problem? The main reason is that much of the Web, even today, is in a natural language like English or French. These languages are very ambiguous, but we humans have a knack for understanding them due to a variety of factors, not the least of which is our immense store of background knowledge and common sense. Machines are not yet capable of understanding English at the same level as an adult human being, though impressive progress is being made.
To overcome this problem, the Semantic Web presents a vision of the Web as an interlinked network of concepts, relationships and entities, rather than an interlinked network of ‘natural’ webpages. Intelligent systems, often called ‘agents’, can consume the Semantic Web and answer complex questions that now require human labor. The research in the Semantic Web also helps search; e.g. the Google Knowledge Graph, which uses Semantic Web technology, can help you to answer some of your questions without even clicking on a link!
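The idea of a web of concepts, relationships and entities can be made concrete with a tiny sketch. The snippet below, which uses made-up entity names and a pure-Python stand-in rather than a real RDF library, shows a knowledge graph as a set of subject-predicate-object triples and a pattern query of the kind an agent could run to answer a question:

```python
# Minimal sketch of a knowledge graph as subject-predicate-object triples.
# Entity and relation names are illustrative only, not a real vocabulary.
triples = {
    ("TimBernersLee", "type", "Person"),
    ("TimBernersLee", "invented", "WorldWideWeb"),
    ("WorldWideWeb", "type", "InformationSystem"),
}

def query(pattern):
    """Match a (s, p, o) pattern; None acts as a wildcard, like a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Who invented the World Wide Web?" -- answered structurally, no text parsing
answers = query((None, "invented", "WorldWideWeb"))
print([t[0] for t in answers])  # ['TimBernersLee']
```

In practice this role is filled by RDF triple stores queried with SPARQL, but the structural principle is the same.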
This paper presents our position on the critical issue of bias that has recently surfaced in AI applications. Specifically, we discuss the combination of technologies currently used in AI applications, namely Machine Learning and Knowledge Graphs, and point to their role in (de)biasing applications in the C4I domain. Although this is a wider problem that emerges across different application domains, bias is more critical in C4I than elsewhere due to its security-related nature. While proposing certain actions to be taken towards debiasing C4I applications, we acknowledge that this topic is still immature within the Knowledge Graph and Semantic Web communities.
Machine learning techniques are gaining attention in the context of intrusion detection due to the increasing amounts of data generated by monitoring tools, as well as the sophistication displayed by attackers in hiding their activity. However, existing methods often exhibit important limitations in terms of the quantity and relevance of the generated alerts. Recently, knowledge graphs have been finding application in the cybersecurity domain, showing the potential to alleviate some of these drawbacks thanks to their ability to seamlessly integrate data from multiple domains using human-understandable vocabularies. We discuss the application of machine learning on knowledge graphs for intrusion detection and experimentally evaluate a link-prediction method for scoring anomalous activity in industrial systems. After initial unsupervised training, the proposed method is shown to produce intuitively well-calibrated and interpretable alerts in a diverse range of scenarios, hinting at the potential benefits of relational machine learning on knowledge graphs for intrusion detection purposes.
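To give a flavour of how link prediction can score anomalous activity, here is a toy TransE-style sketch. This is a generic illustration, not the method evaluated in the work above: entity names, relation names, and the random embeddings are all stand-ins for what an unsupervised training phase would actually learn.

```python
import numpy as np

# TransE-style scoring: entities and relations live in the same vector space,
# and a triple (h, r, t) is scored by the distance ||h + r - t||.
# Lower distance = more plausible link; high distance = candidate anomaly.
rng = np.random.default_rng(0)
dim = 16
entities = {name: rng.normal(size=dim) for name in ["plc1", "hmi1", "unknownHost"]}
relations = {"connectsTo": rng.normal(size=dim)}

def score(h, r, t):
    """Distance-based plausibility score for the triple (h, r, t)."""
    return float(np.linalg.norm(entities[h] + relations[r] - entities[t]))

# Score observed traffic against traffic never seen in training; the
# higher-distance triple would be surfaced as the more anomalous link.
normal = score("hmi1", "connectsTo", "plc1")
suspicious = score("unknownHost", "connectsTo", "plc1")
```

With trained rather than random embeddings, triples resembling the training data score low, and the distances give an interpretable ranking of alerts.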
Several approaches to the problem of expert finding have emerged in computer science research. In this work, three of these approaches are examined: content analysis, social graph analysis, and the use of Semantic Web technologies. An integrated set of system requirements is then developed that combines all three in one hybrid approach. To show the practicality of this hybrid approach, a usable prototype expert finding system called ExpertQuest is developed using a modern functional programming language (Clojure) to query social media data and Linked Data. This system is evaluated and discussed. Finally, conclusions are presented that describe the benefits and shortcomings of the hybrid approach and the technologies used in this work.
Welcome to DeepCube – a Horizon 2020 Space project that will unlock the potential of big Copernicus data with Artificial Intelligence and Semantic Web technologies, with the objective to address problems of high environmental and societal impact. The website of the EU project DeepCube is up and it looks amazing!
Personal health libraries (PHLs) provide a single point of secure access to patients' digital health data and enable the integration of knowledge stored in their digital health profiles with other sources of global knowledge. PHLs can help empower caregivers and health care providers to make informed decisions about patients' health by understanding medical events in the context of their lives. This paper reports the implementation of a mobile health digital intervention that incorporates both digital health data stored in patients' PHLs and other sources of contextual knowledge to deliver tailored recommendations for improving self-care behaviors in diabetic adults. We conducted a thematic assessment of patient functional and nonfunctional requirements that are missing from current EHRs based on evidence from the literature. We used the results to identify the technologies needed to address those requirements. We describe the technological infrastructures used to construct, manage, and integrate the types of knowledge stored in the PHL. We leverage the Social Linked Data (Solid) platform to design a fully decentralized and privacy-aware platform that supports interoperability and care integration. We provide an initial prototype design of a PHL and draft a use case scenario involving four actors to demonstrate how the proposed prototype can be used to address user requirements, including the construction and management of the PHL and its utilization for developing a mobile app that queries the knowledge stored and integrated in the PHL in a private and fully decentralized manner to provide better recommendations. The proposed PHL helps patients and their caregivers take a central role in making decisions regarding their health and equips their health care providers with informatics tools that support the collection and interpretation of the collected knowledge.
Semantic web technologies have shown their effectiveness, especially when it comes to knowledge representation, reasoning, and data integration. However, the original semantic web vision, whereby machine-readable web data could be automatically acted upon by intelligent software web agents, has yet to be realised. In order to better understand the existing technological challenges and opportunities, in this paper we examine the status quo in terms of intelligent software web agents, guided by research on requirements and architectural components coming from the agents community. We start by collating and summarising requirements and core architectural components relating to intelligent software agents. Following on from this, we use the identified requirements both to further elaborate on the motivating semantic web agent use case scenario, and to summarise the different perspectives on these requirements found in the semantic web agent literature. Finally, we propose a hybrid semantic web agent architecture, discuss the role played by existing semantic web standards, and point to existing work in the broader semantic web community and beyond that could help us make the semantic web agent vision a reality.
Abbas, Nacira, Alghamdi, Kholoud, Alinam, Mortaza, Alloatti, Francesca, Amaral, Glenda, d'Amato, Claudia, Asprino, Luigi, Beno, Martin, Bensmann, Felix, Biswas, Russa, Cai, Ling, Capshaw, Riley, Carriero, Valentina Anita, Celino, Irene, Dadoun, Amine, De Giorgis, Stefano, Delva, Harm, Domingue, John, Dumontier, Michel, Emonet, Vincent, van Erp, Marieke, Arias, Paola Espinoza, Fallatah, Omaima, Ferrada, Sebastián, Ocaña, Marc Gallofré, Georgiou, Michalis, Gesese, Genet Asefa, Gillis-Webber, Frances, Giovannetti, Francesca, Buey, Marìa Granados, Harrando, Ismail, Heibi, Ivan, Horta, Vitor, Huber, Laurine, Igne, Federico, Jaradeh, Mohamad Yaser, Keshan, Neha, Koleva, Aneta, Koteich, Bilal, Kurniawan, Kabul, Liu, Mengya, Ma, Chuangtao, Maas, Lientje, Mansfield, Martin, Mariani, Fabio, Marzi, Eleonora, Mesbah, Sepideh, Mistry, Maheshkumar, Tirado, Alba Catalina Morales, Nguyen, Anna, Nguyen, Viet Bach, Oelen, Allard, Pasqual, Valentina, Paulheim, Heiko, Polleres, Axel, Porena, Margherita, Portisch, Jan, Presutti, Valentina, Pustu-Iren, Kader, Mendez, Ariam Rivas, Roshankish, Soheil, Rudolph, Sebastian, Sack, Harald, Sakor, Ahmad, Salas, Jaime, Schleider, Thomas, Shi, Meilin, Spinaci, Gianmarco, Sun, Chang, Tietz, Tabea, Dhouib, Molka Tounsi, Umbrico, Alessandro, Berg, Wouter van den, Xu, Weiqin
One of the grand challenges discussed during the Dagstuhl Seminar "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web" and described in its report is that of a: "Public FAIR Knowledge Graph of Everything: We increasingly see the creation of knowledge graphs that capture information about the entirety of a class of entities. [...] This grand challenge extends this further by asking if we can create a knowledge graph of "everything" ranging from common sense concepts to location based entities. This knowledge graph should be "open to the public" in a FAIR manner democratizing this mass amount of knowledge." Although linked open data (LOD) is just one knowledge graph, it is the closest realisation (and probably the only one) of a public FAIR Knowledge Graph (KG) of everything. Indeed, LOD provides a unique testbed for experimenting with and evaluating research hypotheses on open and FAIR KGs. One of the most neglected FAIR issues concerning KGs is their ongoing evolution and long-term preservation. We want to investigate this problem, that is, to understand what preserving and supporting the evolution of KGs means and how these problems can be addressed. Clearly, the problem can be approached from different perspectives and may require the development of different approaches, including new theories, ontologies, metrics, strategies, procedures, etc. This document reports a collaborative effort performed by 9 teams of students, each guided by a senior researcher as their mentor, attending the International Semantic Web Research School (ISWS 2019). Each team provides a different perspective on the problem of knowledge graph evolution, substantiated by a set of research questions as the main subject of their investigation. In addition, they provide their working definitions for KG preservation and evolution.
Deep learning is great, but no, it won't be able to do everything. The only way to make progress in AI is to put together building blocks that already exist but that no current AI system combines. Adding knowledge to the mix, getting over the prejudice against "good old AI", and scaling it up are all necessary steps on the long and winding road to rebooting AI. This is a summary of the thesis advanced by scientist, best-selling author, and entrepreneur Gary Marcus towards rebooting AI. Marcus, a cognitive scientist by training, has been doing interdisciplinary work on the nature of intelligence -- artificial or otherwise -- more or less since his childhood.
Badenes-Olmedo, Carlos, Chaves-Fraga, David, Poveda-Villalón, María, Iglesias-Molina, Ana, Calleja, Pablo, Bernardos, Socorro, Martín-Chozas, Patricia, Fernández-Izquierdo, Alba, Amador-Domínguez, Elvira, Espinoza-Arias, Paola, Pozo, Luis, Ruckhaus, Edna, González-Guardia, Esteban, Cedazo, Raquel, López-Centeno, Beatriz, Corcho, Oscar
In the absence of sufficient medication for COVID patients due to the increased demand, disused drugs have been employed, or the doses of those available have been modified, by hospital pharmacists. Some evidence for the use of alternative drugs that could assist in such decisions can be found in the existing scientific literature. However, exploiting a large corpus of documents in an efficient manner is not easy, since drugs may not appear explicitly related in the texts and could be mentioned under different brand names. Drugs4Covid combines word embedding techniques and semantic web technologies to enable a drug-oriented exploration of large medical literature. Drugs and diseases are identified according to the ATC classification and MeSH categories, respectively. More than 60K articles and 2M paragraphs have been processed from the CORD-19 corpus with information on COVID-19, SARS, and other related coronaviruses. An open catalogue of drugs has been created and results are publicly available through a drug browser, a keyword-guided text explorer, and a knowledge graph.
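The embedding side of such a pipeline can be sketched in a few lines. The vectors and drug names below are made-up placeholders, not output of the real Drugs4Covid system; the point is only that cosine similarity in a shared vector space can surface drugs related to a query term even when they never co-occur verbatim in a text:

```python
import numpy as np

# Hypothetical 3-dimensional embeddings; a real system would learn
# high-dimensional vectors from the corpus.
embeddings = {
    "fever": np.array([0.9, 0.1, 0.0]),
    "drugA": np.array([0.8, 0.2, 0.1]),  # hypothetical antipyretic
    "drugB": np.array([0.0, 0.1, 0.9]),  # hypothetical unrelated drug
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def related_drugs(term, candidates):
    """Rank candidate drugs by embedding similarity to a query term."""
    return sorted(candidates,
                  key=lambda d: cosine(embeddings[term], embeddings[d]),
                  reverse=True)

print(related_drugs("fever", ["drugA", "drugB"]))  # ['drugA', 'drugB']
```

In the full system, the ranked candidates would then be grounded against ATC drug codes and MeSH disease categories in the knowledge graph.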
Cyber-Physical Systems (CPS) play a crucial role in the era of the 4th Industrial Revolution. Recently, the application of CPS to industrial manufacturing has led to a specialization referred to as Cyber-Physical Production Systems (CPPS). Among other challenges, CPS and CPPS should be able to address interoperability issues, since one of their intrinsic requirements is the capability to interface and cooperate with other systems. On the other hand, to fully realize the Industry 4.0 vision, it is necessary to address horizontal, vertical, and end-to-end integration, enabling complete awareness through the entire supply chain. In this context, Semantic Web standards and technologies may have a promising role in representing manufacturing knowledge in a machine-interpretable way, enabling communication among heterogeneous industrial assets.