Review for NeurIPS paper: Curvature Regularization to Prevent Distortion in Graph Embedding
Neural Information Processing Systems
Additional Feedback:
* One philosophical question that comes to mind when reading the three observations in the Introduction, and while going over the example in Figure 1, is the following: could it be that the exact reason the representation methods learn interesting patterns is that they are allowed to twist and curve the space as required, in order to bring nodes that are far apart in graph distance close together in Euclidean distance? It is not obvious to me that depriving the optimization algorithm of this ability can only have a positive outcome. There is a parallel to be drawn here with the kernel trick in Support Vector Machines, where we are allowed to embed the data in a higher-dimensional space in which the classes become linearly separable.
* This way, the sum could be defined over (q', q'') \in \Gamma_{i, j}.
* The sectional curvature could have been more thoroughly introduced. Unfortunately, for a paper that is heavily based on geometric notions, there is a clear shortage of pictures.
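For concreteness, the definition I would expect the paper to state (this is the standard Riemannian one, not taken from the submission itself): for a manifold with curvature tensor R and linearly independent tangent vectors u, v at a point,

```latex
K(u, v) = \frac{\langle R(u, v)v,\, u \rangle}{\langle u, u \rangle \langle v, v \rangle - \langle u, v \rangle^{2}}
```

Stating this explicitly, together with a picture of positively versus negatively curved neighborhoods, would make the regularizer far easier to follow for readers without a differential-geometry background.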
Feb-7-2025, 22:59:26 GMT