The Semantic Zoo - Smart Data Hubs, Knowledge Graphs and Data Catalogs


Sometimes, you can enter into a technology too early. The groundwork for semantics was laid in the late 1990s and early 2000s, with Tim Berners-Lee's stellar Semantic Web article, which debuted in Scientific American in 2001 and is seen by many as the movement's birth. Yet many early participants in the field of semantics discovered a harsh reality: computer systems were too slow to handle the intense indexing the technology required, the original specifications and APIs failed to handle important edge cases, and, perhaps most importantly, the number of real-world use cases where semantics made sense was simply too small; most could easily be met by existing approaches and technology. Semantics faded around 2008, echoing the pattern of the Artificial Intelligence Winter of the 1970s. JSON was all the rage, then mobile apps; big data came on the scene even as JavaScript underwent a radical transformation, and all of a sudden everyone wanted to be a data scientist (until they discovered that data science was mostly math).

Relational Learning Analysis of Social Politics using Knowledge Graph Embedding Artificial Intelligence

Knowledge Graphs (KGs) have gained considerable attention recently from both academia and industry. In fact, incorporating graph technology and the wealth of available graph datasets has led the research community to build sophisticated graph analytics tools. The application of KGs has therefore extended to tackle a plethora of real-life problems in diverse domains. Despite the abundance of currently available generic KGs, there is a vital need to construct domain-specific KGs. Further, quality and credibility should be built into the process of constructing and augmenting KGs, particularly those populated from mixed-quality resources such as social media data. This paper presents a novel credibility domain-based KG Embedding framework. This framework captures a fusion of data obtained from heterogeneous resources into a formal KG representation defined by a domain ontology. The proposed approach makes use of various knowledge-based repositories to enrich the semantics of the textual contents, thereby facilitating the interoperability of information. The proposed framework also embodies a credibility module to ensure data quality and trustworthiness. The constructed KG is then embedded into a low-dimensional, semantically continuous space using several embedding techniques. The utility of the constructed KG and its embeddings is demonstrated and substantiated on link prediction, clustering, and visualisation tasks.
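The abstract does not name the specific embedding techniques used, but a common baseline for embedding triples into a low-dimensional space is TransE, where a plausible triple (h, r, t) has the head vector plus the relation vector landing near the tail vector. Below is a minimal sketch of that idea with a tiny invented social-media-style graph (the entity and relation names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny hypothetical KG: (head, relation, tail) triples.
entities = ["user_a", "post_1", "topic_x"]
relations = ["authored", "mentions"]
triples = [("user_a", "authored", "post_1"),
           ("post_1", "mentions", "topic_x")]

dim = 8
E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
R = {r: rng.normal(scale=0.1, size=dim) for r in relations}

def score(h, r, t):
    # TransE intuition: for a true triple, h + r should be close to t,
    # so a LOW score (squared distance) means a plausible triple.
    return float(np.sum((E[h] + R[r] - E[t]) ** 2))

lr, margin = 0.01, 1.0
for epoch in range(200):
    for h, r, t in triples:
        # Corrupt the tail to create a negative example.
        t_neg = rng.choice([e for e in entities if e != t])
        if score(h, r, t) + margin > score(h, r, t_neg):
            # Margin ranking loss is active: pull the true tail closer,
            # push the corrupted tail away.
            g_pos = E[h] + R[r] - E[t]
            g_neg = E[h] + R[r] - E[t_neg]
            E[h] -= lr * 2 * (g_pos - g_neg)
            R[r] -= lr * 2 * (g_pos - g_neg)
            E[t] += lr * 2 * g_pos
            E[t_neg] -= lr * 2 * g_neg

# True triples should now score lower (closer) than corrupted ones.
print(score("user_a", "authored", "post_1"),
      score("user_a", "authored", "topic_x"))
```

Once trained this way, link prediction amounts to ranking candidate tails by this score, which is one of the evaluation tasks the abstract mentions.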

Semantic Web and Semantic Technology Trends in 2018 - DATAVERSITY


There have been some exciting developments of late in the Semantic Web and Technology space. Semantic Technology trends in 2018 will continue to advance many of the trends discussed in 2017 and build upon a number of new changes just entering the marketplace.

What IBM, the Semantic Web Company, and Siemens are doing with semantic technologies


The Semantics conference is one of the biggest events for all things semantics. Key research and industry players gathered this week in Leipzig to showcase and discuss, and we were there to get that vibe. Graphs are everywhere: we have social graphs and knowledge graphs and office graphs, and in the minds of most these have been associated with Facebook, Google, and Microsoft. But the concept of Knowledge Graphs is broader and vendor-agnostic. All graphs can be considered knowledge graphs, insofar as they represent information by means of nodes and (directional) edges.
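That vendor-agnostic definition — information as nodes connected by labeled, directional edges — can be made concrete in a few lines. This is a sketch with invented example triples drawn loosely from the article's setting, not any particular vendor's graph:

```python
# A knowledge graph as a set of directed, labeled edges:
# each triple is (subject, predicate, object).
triples = {
    ("Leipzig", "hosts", "SEMANTiCS"),
    ("SEMANTiCS", "focusesOn", "semantic technologies"),
    ("IBM", "presentsAt", "SEMANTiCS"),
}

def objects(subject, predicate):
    """Follow outgoing edges with a given label from a node."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("Leipzig", "hosts"))  # -> {'SEMANTiCS'}
```

Querying is just edge traversal; richer graph stores add indexes and query languages (such as SPARQL) on top of exactly this structure.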

An End-to-End Conversational Second Screen Application for TV Program Discovery

AI Magazine

In this article, we report on a multiphase R&D effort to develop a conversational second screen application for TV program discovery. Our goal is to share with the community the breadth of artificial intelligence (AI) and natural language (NL) technologies required to develop such an application along with learnings from target end-users. We first give an overview of our application from the perspective of the end-user. We then present the architecture of our application along with the main AI and NL components, which were developed over multiple phases. The first phase focuses on enabling core functionality such as effectively finding programs matching the user’s intent. The second phase focuses on enabling dialog with the user. Finally, we present two user studies, corresponding to these two phases. The results from both studies demonstrate the effectiveness of our application in the target domain.

Semantic Advertising Artificial Intelligence

We present the concept of Semantic Advertising, which we see as the future of online advertising. Semantic Advertising is online advertising powered by semantic technology, which essentially enables us to represent and reason with concepts and the meaning of things. This paper aims to 1) define semantic advertising, 2) place it in the context of broader and more widely used concepts such as the Semantic Web and Semantic Search, 3) provide a survey of work in related areas such as context matching, and 4) provide a perspective on successful emerging technologies and areas of future work. We base our work on our experience as a company developing semantic technologies aimed at realizing the full potential of online advertising.

Mining Meaning from Wikipedia Artificial Intelligence

Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval; using it for information extraction; and using it as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced.