Semantic Web


Lehigh research team to investigate a 'Google for research data'

#artificialintelligence

IMAGE: Brian Davison, Associate Professor of Computer Science and Engineering at Lehigh University, is principal investigator of an NSF-backed project to develop a search engine intended to help scientists and others locate...

There was a time--not that long ago--when the phrases "Google it" or "check Yahoo" would have been interpreted as sneezes, or perhaps symptoms of an oncoming seizure, rather than as coherent thoughts. Today, these phrases are key to answering all of life's questions. It's one thing to use the Web to keep up with a Kardashian, shop for ironic T-shirts, argue with our in-laws about politics, or do any of the other myriad things we do on the Web today. But if you are a serious researcher looking for real data that can help you advance your ideas, how useful are the underlying technologies that support the search engines we've all come to take for granted? "Not very," says Brian Davison, associate professor of computer science at Lehigh University.


Semantic web technologies to build intelligent applications

#artificialintelligence

Mathieu d'Aquin is a Professor of Informatics specialised in data analytics and semantic technologies at the Insight Centre for Data Analytics of the National University of Ireland Galway. He was previously Senior Research Fellow at the Knowledge Media Institute of the Open University, where he led the Data Science Group. In this interview, he speaks about research on semantic web technologies and specific applications of web data technologies, two key areas of his research interests. You have been working for years on Semantic Web/Linked Data technologies. What will shape our future the most?


Datalog: Bag Semantics via Set Semantics

arXiv.org Artificial Intelligence

Duplicates in data management are common and problematic. In this work, we present a translation of Datalog under bag semantics into a well-behaved extension of Datalog (the so-called warded Datalog±) under set semantics. From a theoretical point of view, this allows us to reason on bag semantics by making use of the well-established theoretical foundations of set semantics. From a practical point of view, this allows us to handle the bag semantics of Datalog by powerful, existing query engines for the required extension of Datalog. Moreover, this translation has the potential for further extensions -- above all to capture the bag semantics of the semantic web query language SPARQL.
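
The paper's formal translation into warded Datalog± is not reproduced here, but the distinction it addresses can be illustrated informally: under set semantics a query answer appears at most once, while under bag semantics every derivation contributes a duplicate. A minimal Python sketch of that difference for a single rule (the rule, relation name, and facts are made-up examples, not taken from the paper):

```python
from collections import Counter

# Facts for a relation r(X, Y); duplicates matter under bag semantics.
r = [("a", 1), ("a", 2), ("b", 3), ("b", 3)]

# Rule: q(X) :- r(X, Y)

# Set semantics: each answer appears at most once.
q_set = {x for (x, _) in r}

# Bag semantics: each matching fact (each derivation) contributes a copy.
q_bag = Counter(x for (x, _) in r)

print(q_set)  # {'a', 'b'}
print(q_bag)  # Counter({'a': 2, 'b': 2})
```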


Why Semantic Technologies? - PoolParty Semantic Suite

#artificialintelligence

Data is one of the most valuable resources of any organization. The Economist named it the oil of the digital era. Unstructured data, such as that coming from social media, emails, reports, and Word documents, makes up as much as 90% of enterprise data. With semantic web technologies, companies can break down data silos and use information assets in an agile way. It is a cost-efficient solution that does not replace existing IT systems but boosts them.


15th Extended Semantic Web Conference (ESWC), Heraklion 2018

VideoLectures.NET

The goal of the Semantic Web is to create a Web of knowledge and services in which the semantics of content is made explicit and content is linked to both other content and services, allowing novel applications to combine content from heterogeneous sites in unforeseen ways and supporting enhanced matching between users' needs and content. This network of knowledge-based functionality will weave together a large network of human knowledge and make this knowledge machine-processable to support intelligent behaviour by machines. Creating such an interlinked Web of knowledge, which spans unstructured text, structured data (e.g. RDF) as well as multimedia content and services, requires the collaboration of many disciplines, including but not limited to: Artificial Intelligence, Natural Language Processing, Databases and Information Systems, Information Retrieval, Machine Learning, Multimedia, Distributed Systems, Social Networks, Web Engineering, and Web Science.
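
To make "explicit semantics and linked content" concrete, here is a minimal sketch using the Python rdflib library; the example resources and the use of the FOAF vocabulary are illustrative assumptions, not taken from the conference description:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")

g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

# Explicit semantics: typed resources and named relationships
# instead of plain, untyped hyperlinks.
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))  # link to another resource
g.add((EX.alice, FOAF.homepage, URIRef("http://example.org/alice")))

print(g.serialize(format="turtle"))
```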


Feature-based reformulation of entities in triple pattern queries

arXiv.org Artificial Intelligence

Knowledge graphs relate uniquely identifiable entities to other entities or literal values by means of relationships, thus enabling semantically rich querying over the stored data. The semantics of such queries are typically crisp, resulting in crisp answers. Query log statistics show that a majority of the queries issued to knowledge graphs are entity-centric. When a user needs additional answers, the state of the art in assisting users is to rewrite the original query, resulting in a set of approximations. Several strategies have been proposed in the past to address this. They typically relax a specific element to a more generic element by moving up a taxonomy. Entities, however, do not have a taxonomy and end up being generalized. To address this issue, in this paper we propose an entity-centric reformulation strategy that utilizes schema information and entity features present in the graph to suggest rewrites. Once the features are identified, the entity in question is reformulated as a set of features. Since entities can have a large number of features, we introduce strategies that select the top-k most relevant and informative features and add them to the original query to create a valid reformulation. We then evaluate our approach by showing that our reformulation strategy produces results that are more informative when compared with the state of the art.
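
The gist of the reformulation idea can be sketched in a few lines of Python: describe the entity through its graph features, rank them, and replace the entity with a variable constrained by the top-k features. The toy graph and the rarity-based ranking heuristic below are assumptions for illustration; the paper's actual relevance and informativeness ranking is not reproduced here.

```python
from collections import Counter

# Toy knowledge graph as (subject, predicate, object) triples; illustrative only.
kg = [
    ("Inception", "directedBy", "ChristopherNolan"),
    ("Inception", "genre", "SciFi"),
    ("Inception", "releaseYear", "2010"),
    ("Interstellar", "directedBy", "ChristopherNolan"),
    ("Interstellar", "genre", "SciFi"),
    ("Memento", "directedBy", "ChristopherNolan"),
]

def entity_features(entity):
    """All (predicate, object) pairs describing the entity."""
    return [(p, o) for (s, p, o) in kg if s == entity]

def rank_features(features, k):
    """Prefer rarer features; a stand-in for the paper's ranking strategies."""
    freq = Counter((p, o) for (_, p, o) in kg)
    return sorted(features, key=lambda f: freq[f])[:k]

def reformulate(query_entity, k=2):
    """Replace the entity in a triple pattern query with a variable
    constrained by its top-k features."""
    feats = rank_features(entity_features(query_entity), k)
    return [("?x", p, o) for (p, o) in feats]

# A pattern mentioning the fixed entity "Inception" is relaxed by
# describing it through its features instead of its identifier:
print(reformulate("Inception"))
# [('?x', 'releaseYear', '2010'), ('?x', 'genre', 'SciFi')]
```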


Semantic Technologies Are Steering Cognitive Applications

#artificialintelligence

Cognitive applications are being applied to a wide variety of uses across various industries. Based on statistical and rule-based methods, they are excellent at processing large volumes of information. But many companies are struggling with the imprecise results this technology delivers. The complex algorithms used to simulate how the human brain works have led data scientists to a bottleneck in taking cognitive computing to the next level.


The rise of autonomous systems will change the world

#artificialintelligence

Harald Sack is Professor for Information Services Engineering at two of the most renowned research institutions in Europe: FIZ Karlsruhe and AIFB. He is part of SEMANTiCS' research and innovation track program committee as well as of the conference's permanent advisory board. His publications include more than 130 papers in international journals and conferences and three standard textbooks on networking technologies. In this interview he speaks about the limited capabilities of search engines, the necessity of open data, and the coffee culture in Vienna. You have been working in many research areas, such as semantic web technologies, knowledge representation, and multimedia analysis & retrieval.


Simplified SPARQL REST API - CRUD on JSON Object Graphs via URI Paths

arXiv.org Artificial Intelligence

Within the Semantic Web community, SPARQL is one of the predominant languages to query and update RDF knowledge. However, the complexity of SPARQL, the underlying graph structure, and various encodings are common sources of confusion for Semantic Web novices. In this paper we present a general-purpose approach to convert any given SPARQL endpoint into a simple-to-use REST API. To lower the initial hurdle, we represent the underlying graph as an interlinked view of nested JSON objects that can be traversed by the API path.
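
The paper's concrete API is not reproduced here, but the general idea of hiding SPARQL behind a path-style interface can be sketched as follows, assuming the Python SPARQLWrapper library and the public DBpedia endpoint; the path layout, function name, and example resource are illustrative assumptions:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"  # any SPARQL endpoint would do

def get_path(resource_iri: str, *path: str):
    """Resolve a REST-like path such as ('dbo:author', 'rdfs:label') against a
    start resource by translating it into a SPARQL property path query."""
    property_path = "/".join(path)
    query = f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?value WHERE {{ <{resource_iri}> {property_path} ?value }} LIMIT 10
    """
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["value"]["value"] for b in results["results"]["bindings"]]

# Conceptually: GET /The_Hobbit/dbo:author/rdfs:label
print(get_path("http://dbpedia.org/resource/The_Hobbit", "dbo:author", "rdfs:label"))
```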


Researcher (f/m) - Semantic Technology and Artificial Intelligence

#artificialintelligence

The Semantic Web Company (SWC) is a leading provider of software and services in the areas of Semantic Information Management, Machine Learning, Natural Language Processing, and Linked Data technologies. SWC's renowned PoolParty Semantic Suite software platform is used by large enterprises, government organizations, NPOs, and NGOs around the globe to extract meaning from big data.