f6a673f09493afcd8b129a0bcf1cd5bc-Supplemental.pdf

Neural Information Processing Systems

Then, we show that traditional methods satisfy the semiring assumption and therefore can be solved by the generalized Bellman-Ford algorithm. It should be stressed that the generalized Bellman-Ford algorithm for path problems was proved in [4], and is not a contribution of this paper. Due to the product definition of path representations, a path of length 0 is equal to the multiplication identity 1_q. Similarly, a summation of no path is equal to the summation identity 0_q. INDICATOR is called |V| times, and a single call to INDICATOR takes O(d) time.
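The semiring view above can be made concrete with a short sketch of the generalized Bellman-Ford recurrence: the empty path from the source gets the multiplicative identity, every other node starts at the additive identity, and each iteration sums (in the semiring) over relayed edge contributions. The function signature and the (plus, times) instantiation below are illustrative assumptions, not the paper's implementation.

```python
def generalized_bellman_ford(nodes, edges, source, plus, times, zero, one, iters):
    """edges: dict mapping (u, v) -> edge value in the semiring."""
    # Boundary condition: the length-0 path from the source to itself is the
    # multiplication identity; every other node starts at the summation identity.
    dist = {v: (one if v == source else zero) for v in nodes}
    for _ in range(iters):
        new = {}
        for v in nodes:
            # Summation over paths reaching v: keep the boundary term, then
            # accumulate contributions relayed through each in-edge (u, v).
            acc = one if v == source else zero
            for (u, w), val in edges.items():
                if w == v:
                    acc = plus(acc, times(dist[u], val))
            new[v] = acc
        dist = new
    return dist
```

Instantiating the semiring with (min, +), zero = infinity, and one = 0 recovers the classic shortest-path Bellman-Ford as a special case.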



Inductive Transfer Learning for Graph-Based Recommenders

Grötschla, Florian, Trachsel, Elia, Lanzendörfer, Luca A., Wattenhofer, Roger

arXiv.org Artificial Intelligence

Graph-based recommender systems are commonly trained in transductive settings, which limits their applicability to new users, items, or datasets. We propose NBF-Rec, a graph-based recommendation model that supports inductive transfer learning across datasets with disjoint user and item sets. Unlike conventional embedding-based methods that require retraining for each domain, NBF-Rec computes node embeddings dynamically at inference time. We evaluate the method on seven real-world datasets spanning movies, music, e-commerce, and location check-ins. NBF-Rec achieves competitive performance in zero-shot settings, where no target domain data is used for training, and demonstrates further improvements through lightweight fine-tuning. These results show that inductive transfer is feasible in graph-based recommendation and that interaction-level message passing supports generalization across datasets without requiring aligned users or items.
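The key inductive ingredient described above — computing node representations dynamically at inference time rather than looking them up in a trained embedding table — can be illustrated with a toy sketch. The mean aggregation and feature setup here are illustrative assumptions, not NBF-Rec's actual architecture; the point is only that an unseen user's vector is derived entirely from its observed interactions.

```python
def propagate(interactions, item_feats, user):
    """Compute a user vector purely from its observed interactions."""
    items = [i for (u, i) in interactions if u == user]
    dim = len(next(iter(item_feats.values())))
    if not items:
        # A user with no interactions gets a neutral representation,
        # rather than failing a lookup in a per-user embedding table.
        return [0.0] * dim
    # Aggregate (here: average) the feature vectors of interacted items.
    return [sum(item_feats[i][d] for i in items) / len(items) for d in range(dim)]
```

Because nothing is indexed by user identity, disjoint user sets across datasets pose no lookup problem — the transfer question reduces to whether the message-passing function generalizes.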



A Path Formulations for Traditional Methods

Neural Information Processing Systems

Mathematically, the summation satisfies the commutative property. Note that semirings differ from natural arithmetic operators in two aspects. We prove Lemma 1 by induction. In practice, for link prediction we find it only takes a very small number of iterations (e.g., ...). Here we prove the time complexity for NBFNet and other GNN frameworks.
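For reference, the standard semiring axioms over a summation $\oplus$ and multiplication $\otimes$ with identities $0$ and $1$ are textbook material (the excerpt's two noted differences from natural arithmetic are not spelled out here):

```latex
\begin{align*}
a \oplus b &= b \oplus a, \qquad (a \oplus b) \oplus c = a \oplus (b \oplus c), \qquad a \oplus 0 = a \\
(a \otimes b) \otimes c &= a \otimes (b \otimes c), \qquad a \otimes 1 = 1 \otimes a = a \\
a \otimes (b \oplus c) &= (a \otimes b) \oplus (a \otimes c), \qquad a \otimes 0 = 0 \otimes a = 0
\end{align*}
```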



A*Net: A Scalable Path-based Reasoning Approach for Knowledge Graphs

Zhu, Zhaocheng, Yuan, Xinyu, Galkin, Mikhail, Xhonneux, Sophie, Zhang, Ming, Gazeau, Maxime, Tang, Jian

arXiv.org Artificial Intelligence

Reasoning on large-scale knowledge graphs has long been dominated by embedding methods. While path-based methods possess the inductive capacity that embeddings lack, their scalability is limited by the exponential number of paths. Here we present A*Net, a scalable path-based method for knowledge graph reasoning. Inspired by the A* algorithm for shortest path problems, A*Net learns a priority function to select important nodes and edges at each iteration, reducing the time and memory footprint of both training and inference. The ratio of selected nodes and edges can be specified to trade off between performance and efficiency. Experiments on both transductive and inductive knowledge graph reasoning benchmarks show that A*Net achieves performance competitive with existing state-of-the-art path-based methods while visiting only 10% of the nodes and 10% of the edges at each iteration. On the million-scale dataset ogbl-wikikg2, A*Net not only achieves a new state-of-the-art result but also converges faster than embedding methods. A*Net is the first path-based method for knowledge graph reasoning at this scale.
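The pruning idea in the abstract — keep only a specified ratio of candidates, ranked by a priority score, before expanding each iteration — can be sketched as follows. The priority function here is a hand-written stand-in (A*Net learns it), and the graph layout is an illustrative assumption.

```python
def select_top_ratio(candidates, priority, ratio):
    """Keep the top `ratio` fraction of candidates by priority score."""
    k = max(1, int(len(candidates) * ratio))
    return sorted(candidates, key=priority, reverse=True)[:k]

def pruned_propagation(adj, source, priority, ratio, num_iters):
    """Breadth-first expansion that only relays through high-priority nodes."""
    visited = {source}
    frontier = {source}
    for _ in range(num_iters):
        # Expand only through the highest-priority frontier nodes; everything
        # reachable solely via pruned nodes is never visited.
        kept = select_top_ratio(list(frontier), priority, ratio)
        frontier = {v for u in kept for v in adj.get(u, [])} - visited
        visited |= frontier
    return visited
```

With ratio = 0.1, each iteration touches roughly 10% of the frontier, which is the source of the time and memory savings the abstract reports.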


Ensembling Textual and Structure-Based Models for Knowledge Graph Completion

Nandi, Ananjan, Kaur, Navdeep, Singla, Parag, Mausam, null

arXiv.org Artificial Intelligence

We consider two popular approaches to Knowledge Graph Completion (KGC): textual models that rely on textual entity descriptions, and structure-based models that exploit the connectivity structure of the Knowledge Graph (KG). Preliminary experiments show that these approaches have complementary strengths: structure-based models perform well when the gold answer is easily reachable from the query head in the KG, while textual models exploit descriptions to give good performance even when the gold answer is not reachable. In response, we explore ensembling as a way of combining the best of both approaches. We propose a novel method for learning query-dependent ensemble weights by using the distributions of scores assigned by individual models to all candidate entities. Our ensemble baseline achieves state-of-the-art results on three standard KGC datasets, with up to 6.8 pt MRR and 8.3 pt Hits@1 gains over best individual models.
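The core mechanism described above — deriving per-query ensemble weights from the shape of each model's score distribution over all candidate entities — can be illustrated with a toy sketch. The confidence statistic used here (softmax margin between the top two candidates) is an illustrative stand-in for the paper's learned weighting.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def ensemble(score_lists):
    """score_lists: one score-per-candidate list per model, same candidate order."""
    confs = []
    for scores in score_lists:
        p = sorted(softmax(scores), reverse=True)
        # A peaked distribution (large top-1 vs top-2 gap) signals confidence.
        confs.append(p[0] - p[1])
    total = sum(confs) or 1.0
    weights = [c / total for c in confs]
    # Query-dependent weighted combination of the models' probability vectors.
    n = len(score_lists[0])
    return [sum(w * softmax(s)[i] for w, s in zip(weights, score_lists))
            for i in range(n)]
```

A confident model thus dominates the combination for queries it is sure about, while queries where it is flat defer to the other model — matching the complementary-strengths motivation.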


Distance-Based Propagation for Efficient Knowledge Graph Reasoning

Shomer, Harry, Ma, Yao, Li, Juanhui, Wu, Bo, Aggarwal, Charu C., Tang, Jiliang

arXiv.org Artificial Intelligence

Knowledge graph completion (KGC) aims to predict unseen edges in knowledge graphs (KGs), resulting in the discovery of new facts. A new class of methods has been proposed to tackle this problem by aggregating path information. These methods have shown tremendous ability on the task of KGC; however, they are plagued by efficiency issues. Though there are a few recent attempts to address this through learnable path pruning, they often sacrifice performance to gain efficiency. In this work, we identify two intrinsic limitations of these methods that affect both efficiency and representation quality. To address these limitations, we introduce a new method, TAGNet, which efficiently propagates information by aggregating only the paths in a fixed window for each source-target pair. We demonstrate that the complexity of TAGNet is independent of the number of layers. Extensive experiments demonstrate that TAGNet can cut the number of propagated messages by as much as 90% while achieving competitive performance on multiple KG datasets. The code is available at https://github.com/HarryShomer/TAGNet.
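The fixed-window idea above can be sketched by counting messages: a node at BFS distance d from the source only participates in propagation iterations t in [d, d + delta], so messages outside its window are skipped. Counting messages instead of performing real GNN aggregation is an illustrative simplification, not TAGNet's implementation.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from the source."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def count_window_messages(adj, source, num_iters, delta):
    """Count messages sent when each node v only receives during its window."""
    dist = bfs_distances(adj, source)
    sent = 0
    for t in range(1, num_iters + 1):
        for u, nbrs in adj.items():
            for v in nbrs:
                # Only propagate u -> v when v's window covers iteration t.
                if v in dist and dist[v] <= t <= dist[v] + delta:
                    sent += 1
    return sent
```

With delta fixed, each edge fires in at most delta + 1 iterations regardless of the total number of layers, which is the intuition behind the layer-independent complexity claim.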