"A semantic network or net is a graphic notation for representing knowledge in patterns of interconnected nodes and arcs. Computer implementations of semantic networks were first developed for artificial intelligence and machine translation, but earlier versions have long been used in philosophy, psychology, and linguistics. What is common to all semantic networks is a declarative graphic representation that can be used either to represent knowledge or to support automated systems for reasoning about knowledge. Some versions are highly informal, but other versions are formally defined systems of logic. ...The oldest known semantic network was drawn in the 3rd century AD by the Greek philosopher Porphyry in his commentary on Aristotle's categories."
– from John F. Sowa, Semantic Networks, revised and extended version of article originally written for the Encyclopedia of Artificial Intelligence, edited by Stuart C. Shapiro, Wiley, 1987, second edition, 1992.
Best conceived of as a "company brain," this knowledge graph focuses on integrating an organization's assortment of people, skills, experiences, materials, essential company databases, and projects, which greatly improves its self-knowledge and thereby yields competitive advantage. Compiled by combing through myriad databases, including those for human resources, emails, and manifold other sources, this knowledge graph provides the foundation for a rapid, detailed assessment of what knowledge and skills a company has at its disposal--and their relation to one another. This graph is designed to create better services and is extremely specific to an organization's industry, line of business, and area of specialization. For example, Google's and Yahoo's search engine endeavors mandate that they collect knowledge about every entity or subject in the world, so they can offer the most relevant, revealing information to their users. LinkedIn's knowledge graph, on the other hand, details people's professions, resumes, and career opportunities.1 Again, the relationships between these nodes are paramount.
With the increasing speed of technological advancement, the pharmaceutical and healthcare industry needs to break up departmental data silos with knowledge graphs and AI to understand the value of its data. At PhUSE EU Connect 2018 we will introduce the innovative approach of PoolParty Semantic Suite to make the most of your data with semantic data integration. Our partner Findwise, global experts in search-driven solutions for the pharmaceutical and healthcare industry, presents a series of four blog posts to help you understand how knowledge graphs and AI can leverage your data-driven innovation and improve healthcare outcomes. We face grand societal challenges, captured in the 17 UN Sustainable Development Goals, specifically Goal 3: Good Health and Well-being. People are living longer, which is shifting the population pyramid.
In our previous post of this blog series about knowledge graphs and AI in the pharmaceutical and healthcare industry, you got an overview of the challenges knowledge-intensive organizations face in supporting data-driven innovation and improving healthcare outcomes. In this blog post, a use case will show you how connecting your siloed departmental data with external authoritative resources increases the value of your content assets. Visit us at PhUSE EU Connect 2018, where we will introduce the innovative approach of PoolParty Semantic Suite to make the most of your data with semantic data integration. Stay tuned for upcoming blog posts that will help you understand how knowledge graphs and AI can leverage your organization. Our partner Findwise, global experts in search-driven solutions for the pharmaceutical and healthcare industry, are bringing all their expertise in information management and knowledge engineering to this blog post series.
Many knowledge graph embedding methods operate on triples and are therefore implicitly limited to a very local view of the entire knowledge graph. We present a new framework, MOHONE, to effectively model higher-order network effects in knowledge graphs, enabling one to capture varying degrees of network connectivity (from the local to the global). Our framework is generic, explicitly models the network scale, and captures two different aspects of similarity in networks: (a) shared local neighborhood and (b) structural role-based similarity. First, we introduce methods that learn network representations of entities in the knowledge graph capturing these varied aspects of similarity. We then propose a fast, efficient method to incorporate the information captured by these network representations into existing knowledge graph embeddings. We show that our method consistently and significantly improves link prediction performance for several knowledge graph embedding methods, including TransE, TransD, DistMult, and ComplEx (by at least 4 points, or 17% in some cases).
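To make the triple-based scoring that these embedding methods share more concrete, here is a minimal sketch of the TransE idea (one of the methods named above): a relation is modeled as a translation in embedding space, so that for a plausible triple $(h, r, t)$ we expect $h + r \approx t$. The entity and relation vectors below are toy values chosen purely for illustration, not learned embeddings.

```python
def transe_score(h, r, t):
    # TransE models a triple (h, r, t) as a translation: h + r ≈ t.
    # The score is the Euclidean distance ||h + r - t||;
    # a lower distance means a more plausible triple.
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy 2-dimensional embeddings, purely illustrative.
emb = {
    "paris": [1.0, 0.0], "berlin": [0.0, 0.0],
    "france": [1.0, 1.0], "capital_of": [0.0, 1.0],
}

good = transe_score(emb["paris"], emb["capital_of"], emb["france"])   # 0.0
bad = transe_score(emb["berlin"], emb["capital_of"], emb["france"])   # 1.0
```

A framework like MOHONE would then adjust such embeddings so that entities with similar neighborhoods or structural roles in the graph receive similar vectors, rather than relying on individual triples alone.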
Knowledge graph (KG) completion aims to fill in the missing facts in a KG, where a fact is represented as a triple of the form $(subject, relation, object)$. Current KG completion models require two-thirds of a triple to be provided (e.g., $subject$ and $relation$) in order to predict the remaining element. In this paper, we propose a new model that uses a KG-specific multi-layer recurrent neural network (RNN) to model triples in a KG as sequences. It outperformed several state-of-the-art KG completion models on the conventional entity prediction task for many evaluation metrics, based on two benchmark datasets and a more difficult dataset. Furthermore, our model is enabled by this sequential characteristic and is thus capable of predicting whole triples given only one entity. Our experiments demonstrated that our model achieves promising performance on this new triple prediction task.
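The core idea here is to treat a triple as a three-step sequence (subject, then relation, then object) that a recurrent network consumes token by token, so that a partial prefix can be extended into a full triple. The scalar "RNN cell" below is only a sketch of that recurrence, with made-up weights and token values; it is not the paper's architecture.

```python
import math

def rnn_step(x, h, Wx, Wh):
    # One toy recurrent step: new hidden state h' = tanh(Wx*x + Wh*h).
    # Real RNN cells use vectors and weight matrices; scalars suffice
    # to illustrate how state is carried across the sequence.
    return math.tanh(Wx * x + Wh * h)

def encode_triple(triple, vocab, Wx=0.5, Wh=0.3):
    # Feed subject, relation, object to the cell in order, so the
    # final state depends on the whole (partial or full) triple.
    h = 0.0
    for token in triple:
        h = rnn_step(vocab[token], h, Wx, Wh)
    return h

vocab = {"marie_curie": 0.1, "born_in": 0.2, "warsaw": 0.3}  # toy ids
state = encode_triple(("marie_curie", "born_in", "warsaw"), vocab)
```

In the actual model, the hidden state after each step would feed a prediction layer over entities and relations, which is what allows whole triples to be generated from a single starting entity.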
Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction because it learns two independent embedding vectors for each entity, whereas the two are in fact related. We present a simple enhancement of CP (which we call SimplE) that allows the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of the embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge can be incorporated into them through weight tying. We prove SimplE is fully expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques. SimplE's code is available on GitHub at https://github.com/Mehran-k/SimplE.
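The CP-versus-SimplE distinction described above can be sketched in a few lines. In CP, a triple $(e_1, r, e_2)$ is scored by a triple product of the subject's "head" embedding, the relation embedding, and the object's "tail" embedding, so an entity's head and tail vectors never interact. SimplE averages this with a second CP score that uses the inverse relation and swaps the roles, tying the two embeddings together. The vectors below are toy values for illustration.

```python
def cp_score(h_subj, v_rel, t_obj):
    # Canonical Polyadic scoring: elementwise triple product, summed.
    return sum(a * b * c for a, b, c in zip(h_subj, v_rel, t_obj))

def simple_score(subj, rel, obj):
    # subj/obj are (head_vec, tail_vec) pairs; rel is (vec, inverse_vec).
    # SimplE averages the forward CP score with a backward CP score
    # that uses the inverse relation, so both embeddings of each
    # entity contribute to every triple's score.
    forward = cp_score(subj[0], rel[0], obj[1])
    backward = cp_score(obj[0], rel[1], subj[1])
    return 0.5 * (forward + backward)

# Toy 2-dimensional embeddings, purely illustrative.
subj = ([1.0, 0.0], [0.0, 1.0])
rel = ([1.0, 1.0], [1.0, 1.0])
obj = ([0.0, 1.0], [1.0, 0.0])
score = simple_score(subj, rel, obj)
```

Under CP alone, only `subj[0]` and `obj[1]` would ever be trained together for this relation direction; SimplE's backward term is what makes the head and tail embeddings of each entity dependent.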
The polypharmacy side effect prediction problem considers cases in which two drugs taken individually do not result in a particular side effect; however, when the two drugs are taken in combination, the side effect manifests. In this work, we demonstrate that multi-relational knowledge graph completion achieves state-of-the-art results on the polypharmacy side effect prediction problem. Empirical results show that our approach is particularly effective when the protein targets of the drugs are well-characterized. In contrast to prior work, our approach provides more interpretable predictions and hypotheses for wet lab validation.
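Framing this as knowledge graph completion means representing each known drug-pair side effect as a multi-relational triple, with one relation per side effect, and then scoring unseen drug pairs. The minimal sketch below shows only that data representation with a plain lookup; a completion model would replace the lookup with a learned scoring function. The drug names and side-effect relations are illustrative, not real pharmacological data.

```python
# Each fact is a triple (drug_a, causes_<effect>_with, drug_b).
# All names below are hypothetical examples of the encoding.
triples = {
    ("aspirin", "causes_nausea_with", "warfarin"),
    ("aspirin", "causes_headache_with", "ibuprofen"),
}

def known_side_effects(drug_a, drug_b):
    # A KG completion model would assign scores to unseen pairs;
    # here we simply look up existing links, treating the relation
    # as symmetric in the drug pair.
    return {r for (a, r, b) in triples if {a, b} == {drug_a, drug_b}}
```

The interpretability claimed in the abstract comes from this structure: a predicted link names a specific drug pair and side-effect relation, which can be checked against the drugs' protein targets.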
In recent years, DBpedia, Freebase, OpenCyc, Wikidata, and YAGO have been published as noteworthy large, cross-domain, and freely available knowledge graphs. Although extensively used, these knowledge graphs are hard to compare against each other in a given setting. Thus, it is a challenge for researchers and developers to pick the best knowledge graph for their individual needs. In our recent survey, we devised and applied data quality criteria to the above-mentioned knowledge graphs. Furthermore, we proposed a framework for finding the most suitable knowledge graph for a given setting. With this paper we intend to ease access to our in-depth survey by presenting simplified rules that map individual data quality requirements to specific knowledge graphs. However, this paper does not intend to replace our previously introduced decision-support framework. For an informed decision on which KG is best for you, we still refer readers to our in-depth survey.
If you have any questions or comments about this work, please shoot me an email at firstname.lastname@example.org The BioGrakn project was originally published by Antonio Messina from the High Performance Computing and Networking Institute of the Italian National Research Council (ICAR-CNR). His paper, "BioGrakn: A Knowledge Graph-based Semantic Database for Biomedical Sciences", was published after the CISIS 2017 conference.
We can officially say this now, since Gartner included knowledge graphs in the 2018 hype cycle for emerging technologies. Though we did not have to wait for Gartner -- declaring this the "Year of the Graph" was our opener for 2018. Like anyone active in the field, we see the opportunity, as well as the threat, in this: with hype comes confusion. Knowledge graphs are not new; they have been around for the last 20 years at least. Knowledge graphs, in their original definition and incarnation, have been about knowledge representation and reasoning.