The World Wide Web changed the way we live our lives, most notably in the ways we now share, consume and find information. There are many more webpages now than there are people, and links connect these webpages to each other in a giant network that is accessible from your favorite browser.
A downside of this success is that there is now too much information, so much, in fact, that we need machines to intelligently read these webpages and answer our questions. The Semantic Web is a movement and research community that brings together experts from different areas, such as natural language processing, ontologies, databases, social media, networks and logic, to realize the vision of making the Web machine-readable.
Why is this such a difficult problem? The main reason is that much of the Web, even today, is in a natural language like English or French. These languages are very ambiguous, but we humans have a knack for understanding them due to a variety of factors, not the least of which is our immense store of background knowledge and common sense. Machines are not yet capable of understanding English at the same level as an adult human being, though impressive progress is being made.
To overcome this problem, the Semantic Web presents a vision of the Web as an interlinked network of concepts, relationships and entities, rather than an interlinked network of ‘natural’ webpages. Intelligent systems, often called ‘agents’, can consume the Semantic Web and answer complex questions that currently require human labor. Semantic Web research also benefits search: the Google Knowledge Graph, for example, uses Semantic Web technology to answer some of your questions without your even clicking on a link!
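The shift from webpages to concepts, relationships and entities can be made concrete with a tiny sketch. The following is our own illustration, not from the text: a graph of (subject, predicate, object) triples, the data model behind the Semantic Web's RDF standard, with hypothetical facts and names.

```python
# Hypothetical triples: each encodes an entity, a relationship, and a value.
triples = {
    ("Vienna", "isCapitalOf", "Austria"),
    ("Austria", "locatedIn", "Europe"),
    ("Vienna", "population", "1900000"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "What is the capital of Austria?" becomes a structured pattern match,
# with no natural-language understanding required:
print(query(predicate="isCapitalOf", obj="Austria"))
```

An agent answering this question over English prose would need to resolve ambiguity; over triples, the answer is a mechanical lookup, which is the point of making the Web machine-readable.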
Cognitive applications are being applied across a wide variety of uses and industries. Built on statistical and rule-based methods, they are excellent at processing large volumes of information. But many companies are struggling with the imprecise results this technology delivers, and the complexity of the algorithms needed to simulate how the human brain works has become a bottleneck for data scientists trying to take cognitive computing to the next level.
Harald Sack is Professor for Information Services Engineering at two of the most renowned research institutions in Europe: FIZ Karlsruhe and AIFB. He serves on the program committee of SEMANTiCS' research and innovation track as well as on the conference's permanent advisory board. His publications include more than 130 papers in international journals and conferences and three standard textbooks on networking technologies. In this interview he speaks about the limited capabilities of search engines, the necessity of open data and the coffee culture in Vienna. You have been working in many research areas, such as Semantic Web technologies, knowledge representation, and multimedia analysis & retrieval.
Within the Semantic Web community, SPARQL is one of the predominant languages for querying and updating RDF knowledge. However, the complexity of SPARQL, the underlying graph structure and the various encodings are common sources of confusion for Semantic Web novices. In this paper we present a general-purpose approach to convert any given SPARQL endpoint into a simple-to-use REST API. To lower the initial hurdle, we represent the underlying graph as an interlinked view of nested JSON objects that can be traversed via the API path.
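The core idea, a graph exposed as nested JSON that a REST path can traverse, can be sketched in a few lines. This is a minimal reading of the approach under our own assumptions, not the paper's implementation; the triples and path scheme are hypothetical.

```python
import json

# Hypothetical RDF-style triples standing in for a SPARQL endpoint's data.
TRIPLES = [
    ("alice", "knows", "bob"),
    ("alice", "name", "Alice"),
    ("bob", "name", "Bob"),
]

def as_json_view(triples):
    """Group triples into nested JSON: {subject: {predicate: [objects]}}."""
    view = {}
    for s, p, o in triples:
        view.setdefault(s, {}).setdefault(p, []).append(o)
    return view

def get(view, path):
    """Resolve a REST-style path like '/alice/knows' against the nested view."""
    node = view
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

view = as_json_view(TRIPLES)
print(json.dumps(get(view, "/alice/knows")))  # prints ["bob"]
```

A novice can then explore the graph by following paths instead of writing SPARQL, which is exactly the hurdle-lowering the abstract describes.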
The Semantic Web Company (SWC) is a leading provider of software and services in the areas of Semantic Information Management, Machine Learning, Natural Language Processing, and Linked Data technologies. SWC's renowned PoolParty Semantic Suite software platform is used in large enterprises, government organizations, NPOs and NGOs around the globe to extract meaning from big data.
Pattern satisfiability is a fundamental problem for SPARQL. This paper provides a complete analysis of the decidability of satisfiability problems for SPARQL 1.1 patterns. A surprising result is that satisfiability is undecidable for SPARQL 1.1 patterns in which only AND and MINUS are expressible. It is also shown that any fragment of SPARQL 1.1 that cannot express both AND and MINUS is decidable. These results provide a guideline for future SPARQL query language design and implementation.
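To make the two operators in question concrete, here is a toy evaluator of the SPARQL algebra's AND (join) and MINUS over sets of solution mappings, written as plain Python dicts. This is our own illustration of the operators' semantics, not material from the paper, and the example data is hypothetical.

```python
def compatible(m1, m2):
    """Two solution mappings are compatible if they agree on all shared variables."""
    return all(m1[v] == m2[v] for v in m1.keys() & m2.keys())

def AND(omega1, omega2):
    """Join: merge every compatible pair of mappings from the two solution sets."""
    return [{**m1, **m2} for m1 in omega1 for m2 in omega2 if compatible(m1, m2)]

def MINUS(omega1, omega2):
    """Keep m1 unless some m2 is compatible with it and shares a variable."""
    return [m1 for m1 in omega1
            if not any(compatible(m1, m2) and (m1.keys() & m2.keys())
                       for m2 in omega2)]

# Hypothetical solutions to two patterns: all people, and people who know Carol.
people = [{"x": "alice"}, {"x": "bob"}]
knows_carol = [{"x": "alice"}]

print(AND(people, knows_carol))    # [{'x': 'alice'}]  -> both patterns hold
print(MINUS(people, knows_carol))  # [{'x': 'bob'}]    -> second pattern excluded
```

AND narrows solutions while MINUS subtracts them; the paper's result is that the interplay of just these two operators already makes satisfiability undecidable.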
Research on semantic web services promises greater interoperability among software agents and web services by enabling content-based automated service discovery and interaction. Although this is to be based on the use of shared ontologies published on the semantic web, services produced and described by different developers may well use different, perhaps partly overlapping, sets of ontologies. Interoperability will therefore depend on ontology mappings and on architectures supporting the associated translation processes. The question we ask is: does the traditional approach of introducing mediator agents to translate messages between requestors and services work in such an open environment? This article reviews some of the processing assumptions that were made in the development of OWL-S, an ontology for modeling semantic web services, and argues that, as a practical matter, the translation function cannot always be isolated in mediators.
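The mediator pattern under discussion can be sketched in its simplest form: an ontology mapping applied to translate a message from the requestor's vocabulary into the service's. All terms and the mapping below are hypothetical; real mediators must also handle structural and logical mismatches, which is where the article argues the isolation breaks down.

```python
# Assumed mapping between two hypothetical ontologies: requestor term -> service term.
ONTOLOGY_MAPPING = {
    "cost": "price",
    "vendor": "supplier",
}

def mediate(message):
    """Rename the keys of a flat message according to the ontology mapping,
    leaving unmapped terms unchanged."""
    return {ONTOLOGY_MAPPING.get(k, k): v for k, v in message.items()}

request = {"cost": 10, "vendor": "ACME", "item": "widget"}
print(mediate(request))  # {'price': 10, 'supplier': 'ACME', 'item': 'widget'}
```

Term renaming like this is the easy case; when the requestor's concepts only partly overlap the service's, no purely syntactic mediator in the middle can recover the intended meaning.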
EUROLAN, which has been held biennially since 1993, is one of the most significant European summer schools in the area of natural language processing. Each of the EUROLAN sessions has focused on an area of timely interest to researchers in the field; this year's EUROLAN involved students in tutorials and hands-on sessions concerned with semantic web technologies as applied to language processing, ontology creation and use, and consideration of the semantic web's potential and limitations. This year's school was organized by the Faculty of Computer Science at the A. I. Cuza University of Iasi, the Research Institute for Artificial Intelligence at the Romanian Academy in Bucharest, and the Department of Computer Science at Vassar College. It was the most successful in its 10-year history, with 119 registered participants from 23 countries. Hosted by the Romanian Academy, the most prestigious cultural and scientific institution in the country, the event was given significant attention in the media.
In numerous distributed environments, including today's World Wide Web, enterprise data management systems, large science projects, and the emerging semantic web, applications will inevitably use information described by multiple ontologies and schemas. We organized the Workshop on Semantic Integration at the Second International Semantic Web Conference to bring together the different communities working on the issues of enabling integration among different resources. The workshop generated a lot of interest and attracted more than 70 participants. Interoperability among applications depends critically on the ability to map between the ontologies and schemas they use, and semantic integration issues have now become a key bottleneck in the deployment of a wide variety of information management applications.
The emerging Semantic Web focuses on bringing knowledge-representation-like capabilities to Web applications in a Web-friendly way. The ability to put knowledge on the Web, share it, and reuse it through standard Web mechanisms poses new and interesting challenges for artificial intelligence. In this paper, I explore the similarities and differences between the Semantic Web and traditional AI knowledge representation systems, and see if I can validate the analogy "The Semantic Web is to KR as the Web is to hypertext." One relevant lesson comes from a tutorial on expert systems written by Robert Engelmore with Edward Feigenbaum in 1993: because of the importance of knowledge in expert systems, and because current knowledge acquisition methods are slow and tedious, much of the future of expert systems depends on breaking the knowledge acquisition bottleneck and on codifying and representing a large knowledge infrastructure.
In the past, many knowledge representation systems failed because they were too monolithic and didn't scale well, whereas others failed to have an impact because they were too small and isolated. Along with this tradeoff in size, there is a constant tension between the cost of building a larger community that can interoperate through common terms and the cost of the lack of interoperability. The semantic web offers a good compromise between these approaches: it achieves wide-scale communication and interoperability with finite effort and cost. The semantic web is a set of standards for knowledge representation and exchange aimed at providing interoperability across applications and organizations. We believe that the growing success of this technology is not derived from any particular choice of syntax or of logic.