gHAWK: Local and Global Structure Encoding for Scalable Training of Graph Neural Networks on Knowledge Graphs
Sabir, Humera, Farooq, Fatima, Aboulnaga, Ashraf
Knowledge Graphs (KGs) are a rich source of structured, heterogeneous data, powering a wide range of applications. A common approach to leverage this data is to train a graph neural network (GNN) on the KG. However, existing message-passing GNNs struggle to scale to large KGs because they rely on the iterative message passing process to learn the graph structure, which is inefficient, especially under mini-batch training, where a node sees only a partial view of its neighborhood. In this paper, we address this problem and present gHAWK, a novel and scalable GNN training framework for large KGs. The key idea is to precompute structural features for each node that capture its local and global structure before GNN training even begins. Specifically, gHAWK introduces a preprocessing step that computes: (a) Bloom filters to compactly encode local neighborhood structure, and (b) TransE embeddings to represent each node's global position in the graph. These features are then fused with any domain-specific features (e.g., text embeddings), producing a node feature vector that can be incorporated into any GNN technique. By augmenting message-passing training with structural priors, gHAWK significantly reduces memory usage, accelerates convergence, and improves model accuracy. Extensive experiments on large datasets from the Open Graph Benchmark (OGB) demonstrate that gHAWK achieves state-of-the-art accuracy and lower training time on both node property prediction and link prediction tasks, topping the OGB leaderboard for three graphs.
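The two precomputed structural features the abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not gHAWK's implementation: the helper names (`bloom_filter`, `node_features`), the md5-based hashing, and the filter size are all assumptions made for the sketch.

```python
import hashlib

def bloom_filter(neighbor_ids, m=16, k=2):
    """Compactly encode a node's 1-hop neighborhood as an m-bit Bloom filter."""
    bits = [0] * m
    for nid in neighbor_ids:
        for seed in range(k):
            # k independent hash positions per neighbor id
            h = int(hashlib.md5(f"{seed}:{nid}".encode()).hexdigest(), 16)
            bits[h % m] = 1
    return bits

def node_features(neighbor_ids, transe_vec, domain_vec):
    """Fuse local structure (Bloom bits), global structure (TransE embedding),
    and domain-specific features into one node feature vector."""
    return bloom_filter(neighbor_ids) + list(transe_vec) + list(domain_vec)

# 16 Bloom bits + 2 embedding dims + 1 domain dim = 19-dim feature vector
feat = node_features([3, 7, 42], transe_vec=[0.1, -0.2], domain_vec=[0.5])
```

Because these features are computed once before training, every mini-batch sees the same structural summary of a node regardless of how its neighborhood was sampled.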
SKGE: Spherical Knowledge Graph Embedding with Geometric Regularization
Quan, Xuan-Truong, Quan, Xuan-Son, Minh, Duc Do, Van, Vinh Nguyen
Knowledge graph embedding (KGE) has become a fundamental technique for representation learning on multi-relational data. Many seminal models, such as TransE, operate in an unbounded Euclidean space, which presents inherent limitations in modeling complex relations and can lead to inefficient training. In this paper, we propose Spherical Knowledge Graph Embedding (SKGE), a model that challenges this paradigm by constraining entity representations to a compact manifold: a hypersphere. SKGE employs a learnable, non-linear Spherization Layer to map entities onto the sphere and interprets relations as a hybrid translate-then-project transformation. Through extensive experiments on three benchmark datasets, FB15k-237, CoDEx-S, and CoDEx-M, we demonstrate that SKGE consistently and significantly outperforms its strong Euclidean counterpart, TransE, with the largest gains on larger benchmarks such as FB15k-237 and CoDEx-M, confirming the efficacy of the spherical geometric prior. We provide an in-depth analysis to reveal the sources of this advantage, showing that this geometric constraint acts as a powerful regularizer, leading to comprehensive performance gains across all relation types. More fundamentally, we prove that the spherical geometry creates an "inherently hard negative sampling" environment, naturally eliminating trivial negatives and forcing the model to learn more robust and semantically coherent representations. Our findings compellingly demonstrate that the choice of manifold is not merely an implementation detail but a fundamental design principle, advocating for geometric priors as a cornerstone for designing the next generation of powerful and stable KGE models.
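The translate-then-project scoring described in the abstract can be sketched as follows. Note that the paper's Spherization Layer is learnable and non-linear; this sketch substitutes plain L2 normalization as a stand-in, and the function names are illustrative, not SKGE's API.

```python
import math

def normalize(v):
    """Project a vector onto the unit hypersphere (stand-in for spherization)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def skge_score(h, r, t):
    """Translate-then-project: move head h by relation r, re-project onto the
    sphere, and score by negative distance to the (spherized) tail t."""
    h, t = normalize(h), normalize(t)
    moved = normalize([hi + ri for hi, ri in zip(h, r)])
    return -math.sqrt(sum((m - ti) ** 2 for m, ti in zip(moved, t)))
```

Because every entity lives on the unit sphere, pairwise distances are bounded by 2, which is one way the compact manifold rules out the degenerate, far-apart "trivial negatives" an unbounded Euclidean space permits.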
Reviews: Translating Embeddings for Modeling Multi-Relational Data
First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance. The authors propose a simple and scalable approach to modeling multi-relational data using low-dimensional vector embeddings of entities, with the relationships between embeddings captured using offset vectors. The embeddings are learned by training a margin-based ranking model to score the observed (entity1, relationship, entity2) triples higher than the unobserved ones. Though the proposed model can be seen as a special case of several existing models, the approach is well motivated and clearly described. The empirical evaluation is reasonably well done, but the write-up could be better.
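The margin-based ranking objective the review summarizes can be sketched in a few lines of Python. This is an illustrative toy with hand-picked vectors and an L1 dissimilarity, not the paper's actual SGD training loop; both function names are assumptions.

```python
def transe_distance(h, r, t):
    """Translation dissimilarity: how far h + r lands from t (L1 norm here)."""
    return sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

def margin_ranking_loss(pos, neg, margin=1.0):
    """Margin-based ranking over (observed, corrupted) triple pairs: push each
    observed triple to score at least `margin` better than its corruption."""
    total = 0.0
    for (h, r, t), (hn, rn, tn) in zip(pos, neg):
        total += max(0.0, margin + transe_distance(h, r, t)
                          - transe_distance(hn, rn, tn))
    return total

# A perfectly placed observed triple (h + r == t) against a nearby corruption:
pos = [([0.0, 0.0], [1.0, 0.0], [1.0, 0.0])]   # distance 0
neg = [([0.0, 0.0], [1.0, 0.0], [1.5, 0.0])]   # distance 0.5
# loss = max(0, 1.0 + 0 - 0.5) = 0.5
```

Once the corrupted triple is pushed more than `margin` farther away than the observed one, its hinge term drops to zero and it stops contributing gradient.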
OntoAligner Meets Knowledge Graph Embedding Aligners
Giglou, Hamed Babaei, D'Souza, Jennifer, Auer, Sören, Sanaei, Mahsa
Ontology Alignment (OA) is essential for enabling semantic interoperability across heterogeneous knowledge systems. While recent advances have focused on large language models (LLMs) for capturing contextual semantics, this work revisits the underexplored potential of Knowledge Graph Embedding (KGE) models, which offer scalable, structure-aware representations well-suited to ontology-based tasks. Despite their effectiveness in link prediction, KGE methods remain underutilized in OA, with most prior work focusing narrowly on a few models. To address this gap, we reformulate OA as a link prediction problem over merged ontologies represented as RDF-style triples and develop a modular framework, integrated into the OntoAligner library, that supports 17 diverse KGE models. The system learns embeddings from a combined ontology and aligns entities by computing cosine similarity between their representations. We evaluate our approach using standard metrics across seven benchmark datasets spanning five domains: Anatomy, Biodiversity, Circular Economy, Material Science and Engineering, and Biomedical Machine Learning. Two key findings emerge: first, KGE models like ConvE and TransF consistently produce high-precision alignments, outperforming traditional systems in structure-rich and multi-relational domains; second, while their recall is moderate, this conservatism makes KGEs well-suited for scenarios demanding high-confidence mappings. Unlike LLM-based methods that excel at contextual reasoning, KGEs directly preserve and exploit ontology structure, offering a complementary and computationally efficient strategy. These results highlight the promise of embedding-based OA and open pathways for further work on hybrid models and adaptive strategies.
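The alignment step the abstract describes (compute embeddings over the merged ontology, then match entities by cosine similarity) can be sketched as follows. The `align` helper, its greedy best-match strategy, and the threshold value are assumptions for illustration, not the OntoAligner API.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def align(source_emb, target_emb, threshold=0.9):
    """Greedily match each source entity to its most similar target entity,
    keeping only high-confidence (above-threshold) pairs."""
    matches = []
    for s, sv in source_emb.items():
        best, best_sim = None, threshold
        for t, tv in target_emb.items():
            sim = cosine(sv, tv)
            if sim > best_sim:
                best, best_sim = t, sim
        if best is not None:
            matches.append((s, best, best_sim))
    return matches
```

The threshold is what produces the precision/recall trade-off noted in the abstract: a high cutoff emits fewer mappings (moderate recall) but the ones it does emit are high-confidence (high precision).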
Supplementary Material of Learning to Sample and Aggregate: Few-shot Reasoning over Temporal Knowledge Graphs
Ruijie Wang
The supplementary material is structured as follows: Section A.1 gives the proof and analysis of Theorem 3.1; Section A.2 introduces the datasets and their statistics in detail; Section A.3 introduces the baselines used in the experiments; Section A.4 discusses the experimental setup of the baseline models as well as MetaTKGR; and Section A.5 reports detailed experimental performance with statistical test results. A.1 Statements, Proof and Analysis of Theorem 3.1: thus, we can improve the generalization ability of our meta-learner over time by the following step-by-step update. A.2 Datasets: the Integrated Crisis Early Warning System (ICEWS18) is a collection of coded interactions between socio-political actors, extracted from news articles. …YAGO). [Figure 1: Number of entities over time; new entities continuously emerge on the three public TKGs. Figure 2 shows the corresponding distributions.]
A Survey on Knowledge Graph Structure and Knowledge Graph Embeddings
Sardina, Jeffrey, Kelleher, John D., O'Sullivan, Declan
Knowledge Graphs (KGs) and their machine learning counterpart, Knowledge Graph Embedding Models (KGEMs), have seen ever-increasing use in a wide variety of academic and applied settings. In particular, KGEMs are typically applied to KGs to solve the link prediction task; i.e., to predict new facts in the domain of a KG based on existing, observed facts. While this approach has shown substantial power in many end-use cases, it remains incompletely characterised in terms of how KGEMs react differently to KG structure. This is of particular concern in light of recent studies showing that KG structure can be a significant source of bias as well as a partial determinant of overall KGEM performance. This paper seeks to address this gap in the state of the art. It provides, to the authors' knowledge, the first comprehensive survey of the established relationships between Knowledge Graph Embedding Models and graph structure in the literature. It is the hope of the authors that this work will inspire further studies in this area and contribute to a more holistic understanding of KGs, KGEMs, and the link prediction task.
Reviews: Poincaré Embeddings for Learning Hierarchical Representations
Summary: The paper proposes a link prediction model that embeds symbols in a hyperbolic space using Poincaré embeddings. In this space, tree structures can be represented more easily, since distances between points grow exponentially toward the boundary of the ball. The paper is well motivated and well written. Furthermore, the presented method is intriguing and I believe it will have a notable impact on link prediction research. My concerns are regarding the comparison to state-of-the-art link prediction and how the method performs if the assumption of a hierarchy in the data is dropped.
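The distance function behind the property the reviewer mentions is the Poincaré ball metric, d(u, v) = arcosh(1 + 2‖u − v‖² / ((1 − ‖u‖²)(1 − ‖v‖²))). A minimal sketch (the helper name is ours, and inputs are assumed to lie strictly inside the unit ball):

```python
import math

def sq_norm(w):
    """Squared Euclidean norm."""
    return sum(x * x for x in w)

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball. The (1 - ||.||^2) factors in the
    denominator make distances blow up near the boundary, which is what gives
    tree leaves exponentially more 'room' than an embedding near the origin."""
    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq_norm(u)) * (1.0 - sq_norm(v))
    return math.acosh(1.0 + 2.0 * diff / denom)
```

For a point at radius r from the origin, this reduces to 2 artanh(r), so a point at radius 0.9 is already about 2.94 units away, far more than its Euclidean norm suggests.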