Proof of Theorem

Neural Information Processing Systems

To prove Theorem 1, we interpret graphon convolutions as generative models for graph convolutions. It is also possible to define graphon convolutions induced by graph convolutions. Theorem 2 follows directly from Theorem 1 via the triangle inequality. Proof of Theorem 2: by the triangle inequality, the distance between the two graph convolution outputs is bounded by the sum of their distances to the common graphon convolution output, and each summand is bounded by Theorem 1. Error bars in the figures have been scaled by 1.5. The problem setup is as follows.
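As a toy illustration of this generative view (a minimal sketch under our own assumptions: the graphon W(u, v) = exp(-|u - v|), the cosine signal, the 1/n normalization, and the single-tap filter Ax are hypothetical choices, not objects from the paper), one can sample latent node positions, instantiate the induced weighted graph, and apply a graph convolution:

```python
import numpy as np

def sampled_graph_convolution(W, x_fn, n, rng):
    """Sample an n-node graph convolution induced by a graphon W.

    W    : symmetric kernel [0, 1]^2 -> [0, 1] (the graphon)
    x_fn : graphon signal, a function [0, 1] -> R
    n    : number of nodes to sample
    """
    u = np.sort(rng.uniform(0.0, 1.0, size=n))   # latent node positions
    A = W(u[:, None], u[None, :]) / n            # induced weighted adjacency, normalized
    np.fill_diagonal(A, 0.0)
    x = x_fn(u)                                  # induced graph signal
    return A @ x                                 # one-tap graph convolution A x

# Hypothetical graphon and signal (illustrative, not from the paper):
W = lambda u, v: np.exp(-np.abs(u - v))
x_fn = lambda u: np.cos(2.0 * np.pi * u)
rng = np.random.default_rng(0)
y_n = sampled_graph_convolution(W, x_fn, n=200, rng=rng)
# As n grows, (A x)_i Monte-Carlo-approximates the graphon convolution
# (W x)(u_i) = integral of W(u_i, v) x(v) dv over [0, 1].
```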


Metric-Free Individual Fairness in Online Learning

Neural Information Processing Systems

Our results resolve an open question by Gillen et al. (2018) by showing that online learning under an unknown individual fairness constraint is possible even without assuming a strong parametric form of the underlying similarity metric.


Appendix

Neural Information Processing Systems

Section A provides a proof that isometry preserves angles. Section D lists the grid considered for hyper-parameters. $T$ is an isometry iff it preserves inner products. Suppose $T$ is an isometry; then the polarization identity, together with $\|T(x)\| = \|x\|$ for all $x$, shows that $T$ preserves inner products. Conversely, if $T$ preserves inner products, then $\langle T(v - w), T(v - w) \rangle = \langle v - w, v - w \rangle$, which implies $\|T(v - w)\| = \|v - w\|$, and since $T$ is linear, $\|T(v) - T(w)\| = \|v - w\|$.
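For completeness, the polarization step in the forward direction can be written out as follows (a standard identity for real inner-product spaces; this display is our addition, not taken from the appendix):

```latex
% A linear isometry T preserves norms, so the real polarization
% identity recovers inner products from norms.
\begin{aligned}
\langle T(v), T(w) \rangle
  &= \tfrac{1}{4}\bigl( \|T(v) + T(w)\|^2 - \|T(v) - T(w)\|^2 \bigr) \\
  &= \tfrac{1}{4}\bigl( \|T(v + w)\|^2 - \|T(v - w)\|^2 \bigr)
     && \text{(linearity of } T\text{)} \\
  &= \tfrac{1}{4}\bigl( \|v + w\|^2 - \|v - w\|^2 \bigr)
     && \text{(}T\text{ preserves norms)} \\
  &= \langle v, w \rangle.
\end{aligned}
```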


Guaranteed Noisy CP Tensor Recovery via Riemannian Optimization on the Segre Manifold

Xu, Ke, Han, Yuefeng

arXiv.org Machine Learning

Recovering a low-CP-rank tensor from noisy linear measurements is a central challenge in high-dimensional data analysis, with applications spanning tensor PCA, tensor regression, and beyond. We exploit the intrinsic geometry of rank-one tensors by casting the recovery task as an optimization problem over the Segre manifold, the smooth Riemannian manifold of rank-one tensors. This geometric viewpoint yields two powerful algorithms: Riemannian Gradient Descent (RGD) and Riemannian Gauss-Newton (RGN), each of which preserves feasibility at every iteration. Under mild noise assumptions, we prove that RGD converges at a local linear rate, while RGN exhibits an initial local quadratic convergence phase that transitions to a linear rate as the iterates approach the statistical noise floor. Extensive synthetic experiments validate these convergence guarantees and demonstrate the practical effectiveness of our methods.
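To make the geometric recipe concrete, here is a toy Python sketch in the spirit of the abstract (our own construction, not the authors' code: the Gaussian measurement map, dimensions, step size, and noise level are all hypothetical, and in place of the paper's tangent-space Riemannian step we use a full Euclidean gradient step followed by a rank-one retraction computed by higher-order power iteration):

```python
import numpy as np

def rank_one_retract(T, n_iter=20):
    """Best rank-one approximation of a 3-way tensor via higher-order
    power iteration; acts as a retraction onto the Segre manifold."""
    d1, d2, d3 = T.shape
    # Initialize factors from leading singular vectors of mode unfoldings.
    b = np.linalg.svd(T.transpose(1, 0, 2).reshape(d2, -1))[0][:, 0]
    c = np.linalg.svd(T.transpose(2, 0, 1).reshape(d3, -1))[0][:, 0]
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c)
        a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c)
        b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)
        lam = np.linalg.norm(c)
        c /= lam
    return lam * np.einsum('i,j,k->ijk', a, b, c)

rng = np.random.default_rng(0)
dims = (8, 8, 8)
T_star = np.einsum('i,j,k->ijk', *(rng.standard_normal(d) for d in dims))

m = 600                                                   # number of measurements
A = rng.standard_normal((m, T_star.size)) / np.sqrt(m)    # E[A^T A] = I
y = A @ T_star.ravel() + 0.01 * rng.standard_normal(m)    # noisy linear measurements

# Retracted gradient descent for f(T) = 0.5 * ||A vec(T) - y||^2.
T = rank_one_retract((A.T @ y).reshape(dims))             # spectral-style init
eta = 1.0                                                 # step size; A is near-isometric
for _ in range(50):
    grad = (A.T @ (A @ T.ravel() - y)).reshape(dims)      # Euclidean gradient
    T = rank_one_retract(T - eta * grad)                  # step, then retract to rank one

rel_err = np.linalg.norm(T - T_star) / np.linalg.norm(T_star)
print(f"relative recovery error: {rel_err:.2e}")
```

The retraction returns an exactly rank-one tensor at every step, mirroring the feasibility-at-every-iteration property the abstract highlights for RGD and RGN.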