
Embedding Logical Queries on Knowledge Graphs

Neural Information Processing Systems

Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict "what drugs are likely to target proteins involved with both diseases X and Y?" -- a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries -- a flexible but tractable subset of first-order logic -- on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.
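The query-embedding idea described above can be illustrated with a minimal sketch: each relation gets a learned geometric operator (here a simple matrix, initialized randomly in place of trained parameters), and a conjunction is handled by combining the branch embeddings with a single aggregation step. The relation names (`W_assoc`, `W_target`) and the elementwise-mean intersection are illustrative assumptions, not the paper's exact learned operators.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension

# Entity embeddings (random stand-ins for trained vectors).
entity_emb = {"diseaseX": rng.normal(size=d), "diseaseY": rng.normal(size=d)}

# One operator per relation type (random stand-ins for learned parameters).
W_assoc = rng.normal(size=(d, d))   # disease -> associated protein
W_target = rng.normal(size=(d, d))  # protein -> targeting drug

def project(query, W):
    # Relation projection: follow one edge type in embedding space.
    return W @ query

def intersect(branches):
    # Conjunction: merge branch embeddings into one query embedding
    # (elementwise mean here; the learned operator can be richer).
    return np.mean(branches, axis=0)

# "Drugs targeting proteins associated with both disease X and disease Y":
protein_branch = intersect([
    project(entity_emb["diseaseX"], W_assoc),
    project(entity_emb["diseaseY"], W_assoc),
])
drug_query = project(protein_branch, W_target)
# Candidate drug entities are then ranked by similarity to drug_query,
# giving cost linear in the number of query variables.
```

The key point is that the whole conjunctive query collapses to one vector, so answering it is a nearest-neighbor lookup rather than an enumeration over all possible intermediate proteins.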


Expanding Holographic Embeddings for Knowledge Completion

Neural Information Processing Systems

Neural models operating over structured spaces such as knowledge graphs require a continuous embedding of the discrete elements of this space (such as entities) as well as the relationships between them. Relational embeddings with high expressivity, however, have high model complexity, making them computationally difficult to train. We propose a new family of embeddings for knowledge graphs that interpolate between a method with high model complexity and one with low dimensionality and high training efficiency, namely Holographic Embeddings (HolE). This interpolation, termed HolEx, is achieved by concatenating several linearly perturbed copies of the original HolE. We formally characterize the number of perturbed copies needed to provably recover the full entity-entity or entity-relation interaction matrix, leveraging ideas from Haar wavelets and compressed sensing.
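For context, the base HolE method that HolEx builds on scores a triple by compressing the head-tail interaction with circular correlation, which can be computed in O(d log d) via FFTs. A minimal sketch of that scoring function (the base HolE operation, not the HolEx perturbed-copy extension):

```python
import numpy as np

def circular_correlation(a, b):
    # HolE composes entity vectors via circular correlation:
    # corr(a, b)[k] = sum_i a[i] * b[(i + k) mod d],
    # computed efficiently as ifft(conj(fft(a)) * fft(b)).
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(head, relation, tail):
    # Plausibility of triple (head, relation, tail): <r, corr(h, t)>.
    return float(np.dot(relation, circular_correlation(head, tail)))
```

Circular correlation keeps the composed representation the same dimension as the entity vectors, which is the low-dimensionality/efficiency property HolEx then trades off against expressivity by concatenating perturbed copies.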


Symbolic Graph Reasoning Meets Convolutions

Neural Information Processing Systems

Beyond local convolution networks, we explore how to harness various external human knowledge for endowing the networks with the capability of semantic global reasoning.




DATASHEET: MOTIVE

Neural Information Processing Systems

Please see the most updated version here. Was there a specific task in mind? Was there a specific gap that needed to be filled? The MOTIVE dataset was created to promote the development of new drug-target interaction (DTI) prediction models based on both existing relationships between compounds and their protein targets and the similarity of JUMP Cell Painting morphological features of perturbed cells [2]. The MOTIVE dataset was created with the DTI task in mind and addresses a lack of graph-based biological datasets with empirical node features.

Who created this dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? This dataset was created by the Carpenter-Singh Lab in the Imaging Platform at the Broad Institute of MIT and Harvard, Cambridge, Massachusetts.

What support was needed to make this dataset? (If there is an associated grant, provide the name of the grantor and the grant name and number, or if it was supported by a company or government agency, give those details.) The authors gratefully acknowledge an internship from the Massachusetts Life Sciences Center (to ES).