Appendix: Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning
Neural Information Processing Systems
Since we apply the row-wise softmax in Eq. (7), each row of the learned assignment matrix sums to one. Each self-attention layer was followed by a point-wise fully connected neural network with two layers (1024 hidden dimensions) and a residual connection. We set the graph embedding dimension to 64. We tried different weightings of the reconstruction loss and the permutation-matrix penalty loss to maximize the reconstruction accuracy with a discretized permutation matrix, while enabling stable training.

In Section 4.1 we describe how distances in the graph embedding space relate to graph similarity. One important property of the GED is its invariance to the node ordering of the graphs that are compared. As discussed in Section 2.2 (Key architectural properties), we carefully designed the model to be invariant to node ordering as well. This is exactly what we would expect.
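The block described above (a self-attention layer followed by a two-layer point-wise feed-forward network with 1024 hidden units and a residual connection, with 64-dimensional embeddings) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the number of attention heads, the normalization placement, and the choice of activation are assumptions not specified in the text.

```python
import torch
import torch.nn as nn


class AttentionBlock(nn.Module):
    """Sketch of one encoder block: self-attention followed by a
    point-wise two-layer feed-forward network with residual connections.
    Head count, LayerNorm placement, and ReLU are assumptions."""

    def __init__(self, embed_dim: int = 64, hidden_dim: int = 1024,
                 num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads,
                                          batch_first=True)
        # Point-wise FFN: applied to every node embedding independently.
        self.ffn = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )
        self.norm1 = nn.LayerNorm(embed_dim)
        self.norm2 = nn.LayerNorm(embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, embed_dim)
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)        # residual around attention
        x = self.norm2(x + self.ffn(x))     # residual around the FFN
        return x
```

Because the attention and the feed-forward network act on each node's embedding in the same way regardless of row order, such a block is equivariant to node permutations, which is the property the architecture relies on.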