Review for NeurIPS paper: Fast geometric learning with symbolic matrices
–Neural Information Processing Systems
Relation to Prior Work: The authors both discuss implementation differences with, and compare the performance of their library against, strong baselines in many different application areas. Their results are impressive, especially given that some of the baselines are heavily optimized for specific problems. I wonder whether DGL, PyTorch-Geometric's main competitor, should be an additional comparison point for the geometric deep learning benchmarks; it is often faster in practice, although it may be too specialized for these architectures. I would also like to see more discussion of the similarities and differences between this implementation and deep learning compilers such as XLA and TVM. For instance, does the package perform just-in-time CUDA code generation/compilation or operator fusion?
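To make the comparison concrete, here is a minimal NumPy sketch (my own illustration, not code from the paper) contrasting a dense pairwise-distance reduction, which materializes the full M-by-N matrix, with a streamed version that fuses the distance computation and the argmin reduction so the matrix is never stored. Avoiding that materialization is precisely the kind of operator fusion the question above is about.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))   # M = 5 query points
y = rng.standard_normal((7, 3))   # N = 7 reference points

# Dense baseline: materializes the full (M, N) squared-distance matrix.
D = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
nn_dense = D.argmin(axis=1)

# Fused/streamed alternative: computes the same nearest-neighbor reduction
# one row at a time, never holding the M-by-N matrix in memory.
nn_streamed = np.array([((xi - y) ** 2).sum(-1).argmin() for xi in x])

assert np.array_equal(nn_dense, nn_streamed)
```

The two routes return identical indices; the difference is O(MN) versus O(N) peak memory, which is what makes fused symbolic kernels scale to large point clouds.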
Jan-27-2025, 07:52:27 GMT