
Projection Direction


Supplementary Material Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation Yingyi Chen

Neural Information Processing Systems

Comments on Theorem 3.2: with the primal problem in (6) in the paper, Theorem 3.2 provides the corresponding dual. Additionally, [27] presents the optimization w.r.t. a single projection direction; our KSVD is therefore more general in the data setups. In Remark 3.3, we show that the values can be regarded as playing the role of the dual variables. Using data-dependent projection weights does not affect the derivation of the shifted eigenvalue problem in the dual. With the derivations of the primal-dual optimization problems above, the primal-dual model representation of our KSVD problem can be provided correspondingly. Lemma 4.2 evaluates the objective value. Moreover, as in the proof of Theorem 3.2, we note the role of the regularization coefficient. This section provides the implementation details of all experiments included in the paper; these are illustrated in detail in the following. Algorithm 1 (Learning with Primal-Attention) Require: X := [ x. UEA Time Series: the UEA time series benchmark [31] consists of 30 datasets; following the setup in [11], we select 10 datasets for evaluation.
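The shifted eigenvalue problem mentioned above is the standard route to an SVD of an asymmetric matrix: the singular pairs of K are eigenpairs of the augmented symmetric matrix [[0, K], [K^T, 0]], i.e. K v = s u and K^T u = s v. A minimal NumPy check of that relation, using a random stand-in for the asymmetric kernel/attention matrix (the actual KSVD objective and feature maps from the paper are not reproduced here):

```python
import numpy as np

# Stand-in for an asymmetric kernel matrix; in Primal-Attention this would
# be built from the (asymmetric) attention similarities.
rng = np.random.default_rng(0)
K = rng.normal(size=(5, 3))

U, s, Vt = np.linalg.svd(K, full_matrices=False)
u, v, sigma = U[:, 0], Vt[0], s[0]

# Shifted eigenvalue relations satisfied by each singular pair:
assert np.allclose(K @ v, sigma * u)
assert np.allclose(K.T @ u, sigma * v)
```

This is why data-dependent projection weights leave the dual derivation intact: the sorting of singular pairs and the augmented-matrix eigenstructure do not depend on how the projections were parameterized.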



Large-scale optimal transport map estimation using projection pursuit

Neural Information Processing Systems

This paper studies the estimation of large-scale optimal transport maps (OTM), a well-known challenging problem owing to the curse of dimensionality. Existing literature approximates the large-scale OTM by a series of one-dimensional OTM problems through iterative random projection. Such methods, however, suffer from slow or no convergence in practice due to the nature of randomly selected projection directions. Instead, we propose an estimation method for large-scale OTM that combines the ideas of projection pursuit regression and sufficient dimension reduction. The proposed method, named projection pursuit Monge map (PPMM), adaptively selects the most informative projection direction in each iteration.
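The iterative-projection scheme the abstract describes reduces each step to a one-dimensional OT map, which for equal-size empirical measures is just quantile (sorted-sample) matching along the chosen direction. A minimal sketch of the baseline random-projection iteration, assuming equal sample sizes; the `adaptive` branch uses a crude mean-difference direction as a hypothetical stand-in for PPMM's sufficient-dimension-reduction step, which is not reproduced here:

```python
import numpy as np

def one_dim_otm_step(X, Y, theta):
    """Move X toward Y along unit direction theta via the 1D optimal transport
    map: sorted source projections are matched to sorted target projections."""
    px, py = X @ theta, Y @ theta
    ix, iy = np.argsort(px), np.argsort(py)
    shift = np.empty_like(px)
    shift[ix] = py[iy] - px[ix]            # displacement for each source point
    return X + shift[:, None] * theta

def iterative_projection_otm(X, Y, n_iter=200, seed=0, adaptive=False):
    """Iterate 1D OT steps. adaptive=True swaps the random direction for a
    mean-difference proxy (an illustrative simplification, not PPMM itself)."""
    rng = np.random.default_rng(seed)
    Z = X.copy()
    for _ in range(n_iter):
        theta = Y.mean(0) - Z.mean(0) if adaptive else rng.normal(size=X.shape[1])
        if np.linalg.norm(theta) < 1e-12:
            theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)
        Z = one_dim_otm_step(Z, Y, theta)
    return Z
```

With randomly drawn directions the iteration mirrors the slow baseline criticized above; choosing informative directions is exactly where PPMM improves on this loop.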


Minimax-Optimal Two-Sample Test with Sliced Wasserstein

Tran, Binh Thuan, Schreuder, Nicolas

arXiv.org Machine Learning

We study the problem of nonparametric two-sample testing using the sliced Wasserstein (SW) distance. While prior theoretical and empirical work indicates that the SW distance offers a promising balance between strong statistical guarantees and computational efficiency, its theoretical foundations for hypothesis testing remain limited. We address this gap by proposing a permutation-based SW test and analyzing its performance. The test inherits finite-sample Type I error control from the permutation principle. Moreover, we establish non-asymptotic power bounds and show that the procedure achieves the minimax separation rate $n^{-1/2}$ over multinomial and bounded-support alternatives, matching the optimal guarantees of kernel-based tests while building on the geometric foundations of Wasserstein distances. Our analysis further quantifies the trade-off between the number of projections and statistical power. Finally, numerical experiments demonstrate that the test combines finite-sample validity with competitive power and scalability; unlike kernel-based tests, which require careful kernel tuning, it performs consistently well across all scenarios we consider.
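The two ingredients above are both simple to sketch: the SW distance is a Monte Carlo average of 1D Wasserstein distances over random projection directions, and the permutation test recomputes that statistic on shuffled pooled samples. A minimal NumPy version, assuming equal sample sizes (the paper's exact test statistic and projection schedule may differ):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=50, rng=None):
    """Monte Carlo sliced 1-Wasserstein distance between equal-size samples.
    Each 1D distance is the mean absolute gap between sorted projections."""
    rng = np.random.default_rng(rng)
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    px, py = X @ thetas.T, Y @ thetas.T        # (n, n_proj) projections
    return np.mean(np.abs(np.sort(px, axis=0) - np.sort(py, axis=0)))

def permutation_sw_test(X, Y, n_perm=200, n_proj=50, seed=0):
    """Permutation p-value for H0: X and Y share the same distribution.
    The +1 correction makes the p-value valid at finite sample size."""
    rng = np.random.default_rng(seed)
    stat = sliced_wasserstein(X, Y, n_proj, rng)
    pooled = np.vstack([X, Y])
    n, count = len(X), 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        s = sliced_wasserstein(pooled[perm[:n]], pooled[perm[n:]], n_proj, rng)
        count += s >= stat
    return (1 + count) / (1 + n_perm)
```

The `n_proj` parameter is exactly the projection budget whose trade-off against power the paper quantifies.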






Topology-Inspired Morphological Descriptor for Soft Continuum Robots

Wu, Zhiwei, Wei, Siyi, Luo, Jiahao, Zhang, Jinhui

arXiv.org Artificial Intelligence

This paper presents a topology-inspired morphological descriptor for soft continuum robots by combining a pseudo-rigid-body (PRB) model with Morse theory to achieve a quantitative characterization of robot morphologies. By counting critical points of directional projections, the proposed descriptor enables a discrete representation of multimodal configurations and facilitates morphological classification. Furthermore, we apply the descriptor to morphology control by formulating the target configuration as an optimization problem to compute actuation parameters that generate equilibrium shapes with desired topological features. The proposed framework provides a unified methodology for quantitative morphology description, classification, and control of soft continuum robots, with the potential to enhance their precision and adaptability in medical applications such as minimally invasive surgery and endovascular interventions.
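The core discrete operation, counting critical points of a directional projection, is easy to state on a polyline backbone: project every point onto the direction and count sign changes in the discrete derivative of that height function. A small illustrative sketch (the paper's PRB model and exact direction set are not reproduced; the planar direction sweep is an assumption):

```python
import numpy as np

def critical_point_count(curve, direction):
    """Count interior critical points of the height function <p, direction>
    along a polyline: places where the discrete derivative changes sign."""
    h = curve @ direction
    dh = np.diff(h)
    dh = dh[dh != 0]                       # skip flat segments
    return int(np.sum(np.sign(dh[:-1]) != np.sign(dh[1:])))

def morphology_descriptor(curve, n_dirs=8):
    """Descriptor: critical-point counts over evenly spaced planar directions."""
    angles = np.linspace(0, np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return [critical_point_count(curve, d) for d in dirs]
```

A straight backbone yields zeros in every direction, while an S-shaped one picks up nonzero counts, which is what makes the vector of counts usable for classifying multimodal configurations.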


Towards Better Spherical Sliced-Wasserstein Distance Learning with Data-Adaptive Discriminative Projection Direction

Zhang, Hongliang, Chen, Shuo, Luo, Lei, Yang, Jian

arXiv.org Artificial Intelligence

Spherical Sliced-Wasserstein (SSW) has recently been proposed to measure the discrepancy between spherical data distributions in various fields, such as geology, medical domains, computer vision, and deep representation learning. However, in the original SSW, all projection directions are treated equally, which is too idealistic and cannot accurately reflect the importance of different projection directions for various data distributions. To address this issue, we propose a novel data-adaptive Discriminative Spherical Sliced-Wasserstein (DSSW) distance, which utilizes a projected energy function to determine the discriminative projection direction for SSW. In our new DSSW, we introduce two types of projected energy functions to generate the weights for projection directions with complete theoretical guarantees. The first type employs a non-parametric deterministic function that transforms the projected Wasserstein distance into its corresponding weight in each projection direction. This improves the performance of the original SSW distance with negligible additional computational overhead. The second type utilizes a neural network-induced function that learns the projection direction weight through a parameterized neural network based on data projections. This further enhances the performance of the original SSW distance with less extra computational overhead. Finally, we evaluate the performance of our proposed DSSW by comparing it with several state-of-the-art methods across a variety of machine learning tasks, including gradient flows, density estimation on real earth data, and self-supervised learning.
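The first (non-parametric) weighting scheme described above can be sketched directly: compute the projected Wasserstein distance per direction, then turn those distances into weights so that more discriminative directions contribute more. A minimal Euclidean sketch, using a softmax as the deterministic weight function; the paper's spherical projections and exact energy functions are not reproduced here:

```python
import numpy as np

def w1_1d(a, b):
    """1D Wasserstein-1 between equal-size samples via sorted matching."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def discriminative_sw(X, Y, n_proj=100, temp=1.0, seed=0):
    """Sliced-Wasserstein with data-adaptive direction weights: each
    direction's weight is a softmax of its own projected distance."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    dists = np.array([w1_1d(X @ th, Y @ th) for th in thetas])
    w = np.exp(temp * (dists - dists.max()))   # stabilized softmax weights
    w /= w.sum()
    return float(w @ dists)
```

Because the weights increase with the per-direction distance, the weighted average is never below the uniform one, which is the intuition behind DSSW improving on plain SSW at negligible extra cost; the neural variant replaces the softmax with a learned function of the projections.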

