Random matrix theory improved Fréchet mean of symmetric positive definite matrices
Bouchard, Florent, Mian, Ammar, Tiomoko, Malik, Ginolhac, Guillaume, Pascal, Frédéric
In this study, we consider the realm of covariance matrices in machine learning, particularly focusing on computing Fréchet means on the manifold of symmetric positive definite (SPD) matrices, commonly referred to as Karcher or geometric means. Such means are leveraged in numerous machine learning tasks. Relying on advanced statistical tools, we introduce a random matrix theory based method that estimates Fréchet means, which is particularly beneficial when dealing with low sample support and a high number of matrices to average.

This mean is used, for example, for nearest centroid classification (Tuzel et al., 2008), pooling in SPD deep learning networks (Brooks et al., 2019), and metric learning (Zadeh et al., 2016). The optimal solution is not available analytically, necessitating the use of iterative algorithms often based on deriving a Riemannian gradient (Boumal, 2023). These algorithms are grounded in Riemannian geometry, since the matrices belong to specific manifolds depending on their properties (full SPD, low rank, etc.) and the chosen metric. The geometry is often the classical one given for SPD matrices, but alternative geometries are available for this algorithm, such as Bures-Wasserstein (Han et al., 2021) and log-Euclidean (Utpala et al., …
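To make the iterative scheme above concrete: the Fréchet mean solves argmin_X sum_i delta^2(X, C_i) for a chosen Riemannian distance delta, and under the classical affine-invariant metric it is typically computed by Riemannian gradient descent. The sketch below illustrates only that baseline scheme, not the paper's random matrix theory estimator; the function name karcher_mean, the step size, and the stopping rule are illustrative assumptions.

```python
# Minimal sketch: classical Karcher (Frechet) mean of SPD matrices via
# Riemannian gradient descent under the affine-invariant metric.
# This is the baseline iterative scheme, NOT the paper's RMT-improved estimator.
import numpy as np
from scipy.linalg import expm, logm, sqrtm

def karcher_mean(mats, n_iter=50, step=1.0, tol=1e-10):
    """Iteratively compute the Frechet mean of a list of SPD matrices."""
    X = np.mean(mats, axis=0)           # Euclidean mean as the initial guess
    for _ in range(n_iter):
        X_half = np.real(sqrtm(X))      # X^{1/2}
        X_ihalf = np.linalg.inv(X_half) # X^{-1/2}
        # Riemannian gradient direction: average of the log maps at X
        G = np.real(np.mean([logm(X_ihalf @ C @ X_ihalf) for C in mats],
                            axis=0))
        if np.linalg.norm(G) < tol:     # convergence test in the tangent space
            break
        # Retraction: move along the geodesic from X in direction G
        X = X_half @ expm(step * G) @ X_half
    return X

# Usage: average M sample covariance matrices built from n samples each,
# the low-sample-support regime (small n relative to p) the abstract targets.
rng = np.random.default_rng(0)
p, n, M = 5, 20, 10
mats = []
for _ in range(M):
    Z = rng.standard_normal((p, n))
    mats.append(Z @ Z.T / n)            # sample covariance, SPD a.s. for n >= p
print(karcher_mean(mats))
```

The Euclidean mean is a common initialization choice here; any SPD starting point works, since the Karcher mean objective is geodesically convex under the affine-invariant metric.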
Jun-5-2024