Goto

Collaborating Authors







A.1 Proofs. Here we usually omit the $=k$ suffix in $A$

Neural Information Processing Systems

For instance, in the synthetic example for the E2ST model shown in Section 5.1, $\hat\beta_l$ can be estimated for $\beta_l$ since the edge, 2-star, and triangle statistics are specified.
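For context on the snippet above: assuming E2ST denotes the usual edge, two-star, and triangle exponential random graph model (a reading inferred from the statistics named, not stated in the excerpt), the density would take the standard exponential-family form

$$ p_{\beta}(x) \;\propto\; \exp\!\big(\beta_1\, S_{\mathrm{edge}}(x) + \beta_2\, S_{\mathrm{2star}}(x) + \beta_3\, S_{\mathrm{tri}}(x)\big), $$

with one coefficient $\beta_l$ per specified network statistic, which is why each $\hat\beta_l$ can be estimated once those three statistics are fixed.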




Optimizing Kernel Discrepancies via Subset Selection

Chen, Deyao, Clément, François, Doerr, Carola, Kirk, Nathan

arXiv.org Machine Learning

Kernel discrepancies are a powerful tool for analyzing worst-case errors in quasi-Monte Carlo (QMC) methods. Building on recent advances in optimizing such discrepancy measures, we extend the subset selection problem to the setting of kernel discrepancies, selecting an m-element subset from a large population of size $n \gg m$. We introduce a novel subset selection algorithm applicable to general kernel discrepancies that efficiently generates low-discrepancy samples both from the uniform distribution on the unit hypercube (the traditional setting of classical QMC) and from more general distributions $F$ with known density functions, by employing the kernel Stein discrepancy. We also explore the relationship between the classical $L_2$ star discrepancy and its $L_\infty$ counterpart.
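As a reading aid (not taken from the paper itself), the squared kernel discrepancy that such subset selection optimizes has the standard closed form

$$ D_k^2(\{x_1,\dots,x_m\};F) \;=\; \iint k(x,y)\,\mathrm{d}F(x)\,\mathrm{d}F(y) \;-\; \frac{2}{m}\sum_{i=1}^{m}\int k(x_i,y)\,\mathrm{d}F(y) \;+\; \frac{1}{m^2}\sum_{i,j=1}^{m} k(x_i,x_j), $$

so the subset selection problem amounts to choosing the $m$ of $n$ candidate points that minimize this quantity. With the kernel Stein discrepancy the first two terms vanish by construction of the Stein kernel, leaving only the double sum over the selected points.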


The Minimax Lower Bound of Kernel Stein Discrepancy Estimation

Cribeiro-Ramallo, Jose, Aich, Agnideep, Kalinke, Florian, Aich, Ashit Baran, Szabó, Zoltán

arXiv.org Machine Learning

Kernel Stein discrepancies (KSDs) have emerged as a powerful tool for quantifying goodness-of-fit over the last decade, featuring numerous successful applications. To the best of our knowledge, all existing KSD estimators with known rate achieve $\sqrt n$-convergence. In this work, we present two complementary results (with different proof strategies), establishing that the minimax lower bound of KSD estimation is $n^{-1/2}$ and settling the optimality of these estimators. Our first result focuses on KSD estimation on $\mathbb R^d$ with the Langevin-Stein operator; our explicit constant for the Gaussian kernel indicates that the difficulty of KSD estimation may increase exponentially with the dimensionality $d$. Our second result settles the minimax lower bound for KSD estimation on general domains.
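To make the estimated object concrete, here is a minimal NumPy sketch of a standard $\sqrt n$-rate V-statistic estimator of the squared KSD with a Gaussian kernel and the Langevin-Stein operator, the setting of the first result above. The function name, the bandwidth default, and the standard-normal usage example are illustrative assumptions, not code from the paper.

```python
import numpy as np

def ksd_v_statistic(X, score, sigma=1.0):
    """V-statistic estimator of the squared kernel Stein discrepancy (KSD)
    with a Gaussian kernel and the Langevin-Stein operator.

    X     : (n, d) array, sample from the candidate distribution q.
    score : callable mapping an (n, d) array to grad log p, row-wise.
    sigma : Gaussian kernel bandwidth (illustrative default).
    """
    n, d = X.shape
    S = score(X)                          # (n, d) score matrix, grad log p(x_i)
    diff = X[:, None, :] - X[None, :, :]  # (n, n, d) pairwise differences x_i - x_j
    sqd = np.sum(diff ** 2, axis=-1)      # (n, n) squared Euclidean distances
    K = np.exp(-sqd / (2 * sigma ** 2))   # Gaussian kernel matrix k(x_i, x_j)

    # Stein kernel u_p(x, y) = s(x)'s(y) k + s(x)' grad_y k + s(y)' grad_x k
    #                          + trace(grad_x grad_y k), with s = grad log p.
    term1 = (S @ S.T) * K
    term2 = np.einsum('id,ijd->ij', S, diff) / sigma ** 2 * K    # s(x_i)' grad_y k
    term3 = -np.einsum('jd,ijd->ij', S, diff) / sigma ** 2 * K   # s(x_j)' grad_x k
    term4 = (d / sigma ** 2 - sqd / sigma ** 4) * K              # trace term
    return np.mean(term1 + term2 + term3 + term4)

# Usage: test a sample against a standard normal target, where grad log p(x) = -x.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
print(ksd_v_statistic(X, score=lambda x: -x))  # close to 0 for a well-fitting sample
```

The paper's minimax result concerns the fundamental limit of any such estimator: no procedure can beat the $n^{-1/2}$ rate that this kind of estimator already attains.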