Supplementary material for TopoSRL: Topology Preserving Self-Supervised Simplicial Representation Learning

Neural Information Processing Systems

Theorem 1 (minimizing the expected loss). Suppose we have T-dimensional features.

Anchor nodes serve as fixed reference points within a simplicial complex, anchoring its structure and providing stability; they can also represent important entities.

Figure S2: Comparison of t-SNE plots of representations learned by various encoders. CCA-SSG methods cannot capture higher-order information and show similar artifacts. For example, the two clusters at the bottom and one on the right (corresponding to classes 1, 2, 3) are students from the same year but in different divisions.




TopoSRL: Topology Preserving Self-Supervised Simplicial Representation Learning

Neural Information Processing Systems

This paper proposes an SSL method for simplicial complex data that preserves topological and geometric information while learning representations. Although no existing studies focus on SSL for simplicial complex data, the closely related field of SSL for graph data has been studied extensively.


TopoSRL: Topology Preserving Self-Supervised Simplicial Representation Learning

Neural Information Processing Systems

In this paper, we introduce TopoSRL, a novel self-supervised learning (SSL) method for simplicial complexes that effectively captures higher-order interactions and preserves topology in the learned representations. We propose a new simplicial augmentation technique that generates two views of the simplicial complex, enriching the representations while remaining efficient. Next, we propose a new simplicial contrastive loss function that contrasts the generated simplices to preserve the local and global information present in the simplicial complexes. Extensive experimental results demonstrate the superior performance of TopoSRL compared to state-of-the-art graph SSL techniques and supervised simplicial neural models across various datasets, corroborating the efficacy of TopoSRL in processing simplicial complex data in a self-supervised setting.
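The abstract describes contrasting simplices across two augmented views of the same complex. A minimal sketch of such a two-view contrastive objective, assuming a symmetric InfoNCE-style loss over matched simplex embeddings (the paper's actual simplicial contrastive loss, augmentation scheme, and any local/global weighting are not given here, so the function and parameter names below are purely illustrative):

```python
import numpy as np

def _logsumexp(a, axis):
    # numerically stable log-sum-exp along the given axis
    m = a.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))

def info_nce_loss(z1, z2, tau=0.5):
    """Symmetric InfoNCE-style loss between two augmented views.

    z1, z2 : (n, d) arrays of simplex embeddings; row i of z1 and
             row i of z2 form a positive pair, all other rows act
             as negatives.  (Illustrative stand-in, not the paper's
             simplicial contrastive loss.)
    tau    : temperature scaling the cosine-similarity logits.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                    # (n, n) similarity logits
    # cross-entropy with the diagonal as positives, in both directions
    log_p12 = sim - _logsumexp(sim, axis=1)
    log_p21 = sim - _logsumexp(sim, axis=0)
    n = z1.shape[0]
    return -(np.trace(log_p12) + np.trace(log_p21)) / (2 * n)
```

Minimizing this pulls each simplex's two view embeddings together while pushing apart embeddings of different simplices; preserving topology would additionally require loss terms informed by the simplicial structure itself.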