Certifying Robustness via Topological Representations

Agerberg, Jens, Guidolin, Andrea, Martinelli, Andrea, Hoefgeest, Pepijn Roos, Eklund, David, Scolamiero, Martina

arXiv.org Machine Learning 

In machine learning, the ability to obtain data representations that capture the underlying geometric and topological structure of data spaces is crucial. A common approach in Topological Data Analysis (TDA) to extracting multi-scale intrinsic geometric properties of data is persistent homology (PH) (Carlsson, 2009). As a rich descriptor of geometry, PH has been used in machine learning pipelines in areas such as bioinformatics, neuroscience, and materials science (Dindin et al., 2020; Colombo et al., 2022; Lee et al., 2017).

Perhaps the key difference between PH and other methods in Geometric Deep Learning is its emphasis on theoretical stability results: PH is a Lipschitz function, with known Lipschitz constants, with respect to appropriate metrics on the data and representation spaces (Cohen-Steiner et al., 2005; Skraba and Turner, 2020). However, composing the PH pipeline with a neural network poses challenges for the stability of the learned representations: if the PH representations are composed with a neural network that has a large Lipschitz constant, stability may be lost, or may become insignificant in practice. Moreover, the Lipschitz constant of the neural network may be difficult to compute or to control.

While robustness to noise of PH machine learning pipelines has been studied empirically (Turkeš et al., 2021), we formulate the problem in the framework of adversarial learning and propose a neural network that can learn stable and discriminative geometric representations from persistence. Our contributions may be summarized as follows: we propose the Stable Rank Network (SRN), a neural network architecture taking PH as input, whose learned representations enjoy a Lipschitz property w.r.t.
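The stability property referred to above can be illustrated in the simplest case: for a Vietoris-Rips filtration of a point cloud, the death times of 0-dimensional PH classes are exactly the edge weights of a minimum spanning tree, and perturbing each point by at most ε changes pairwise distances, and hence each sorted death time, by at most 2ε. The sketch below (not the paper's pipeline; the helper name `mst_edge_weights` is ours) checks this bound numerically:

```python
import numpy as np

def mst_edge_weights(points):
    """Sorted MST edge weights = 0-dim Vietoris-Rips PH death times."""
    n = len(points)
    D = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = D[0].copy()          # cheapest connection of each vertex to the tree
    weights = []
    for _ in range(n - 1):      # Prim's algorithm
        best[in_tree] = np.inf
        j = int(np.argmin(best))
        weights.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, D[j])
    return np.sort(np.array(weights))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
eps = 0.05
noise = rng.normal(size=X.shape)
noise = eps * noise / np.linalg.norm(noise, axis=1, keepdims=True)
Y = X + noise                   # each point moved by exactly eps

gap = np.max(np.abs(mst_edge_weights(X) - mst_edge_weights(Y)))
assert gap <= 2 * eps + 1e-9    # Lipschitz-type stability bound holds
```

The full stability theorems (Cohen-Steiner et al., 2005; Skraba and Turner, 2020) generalize this behavior to higher-dimensional PH and to bottleneck and Wasserstein metrics on persistence diagrams.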