Entropy Regularized Optimal Transport Independence Criterion
Lang Liu, Soumik Pal, Zaid Harchaoui
Statistical independence measures have been widely used in machine learning and statistics, ranging from independent component analysis (Bach and Jordan, 2002; Gretton et al., 2005) to causal inference (Pfister et al., 2018; Chakraborty and Zhang, 2019), and more recently in self-supervised learning (Li et al., 2021) and representation learning (Ozair et al., 2019). Classical dependence measures such as Pearson's correlation coefficient, Spearman's ρ, and Kendall's τ (Hoeffding, 1948; Kruskal, 1958; Lehmann, 1966) focus on one-dimensional real-valued random variables and are thus unsuitable for high-dimensional data; see also Schweizer and Wolff (1981) and Nikitin (1995). One popular independence measure in high dimensions is the Hilbert-Schmidt independence criterion (HSIC) (Gretton et al., 2005), which Gretton et al. (2007b) used to develop an independence test. Several extensions of HSIC are available, such as a relative dependency measure (Bounliphone et al., 2015) and a measure of joint independence among multiple random elements (Pfister et al., 2018). Another choice is the distance covariance (dCov) of Székely et al. (2007).
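To make the kind of dependence measure discussed above concrete, here is a minimal sketch of the standard biased (V-statistic) HSIC estimator with Gaussian kernels, HSIC = (1/n²)·tr(KHLH), where K and L are kernel matrices on the two samples and H is the centering matrix. This is a generic illustration of HSIC as introduced by Gretton et al. (2005), not the entropy-regularized optimal transport criterion proposed in the paper; the function names and the bandwidth choice are assumptions for the sketch.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic_biased(X, Y, sigma=1.0):
    # Biased V-statistic estimator: HSIC = (1/n^2) * tr(K H L H),
    # where H = I - (1/n) 11^T centers the kernel matrices.
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2
```

In practice the statistic is near zero for independent samples and larger under dependence, which is what an independence test built on HSIC thresholds against a null distribution.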
Dec-30-2021