

Improved Guarantees for Fully Dynamic k -Center Clustering with Outliers in General Metric Spaces

Neural Information Processing Systems

The metric k-center clustering problem with z outliers, also known as (k,z)-center clustering, involves clustering a given point set P in a metric space (M,d) using at most k balls, minimizing the maximum ball radius while excluding up to z points from the clustering. This problem holds fundamental significance in various domains such as machine learning, data mining, and database systems. This paper addresses the fully dynamic version of the problem, where the point set undergoes continuous updates (insertions and deletions) over time. The objective is to maintain an approximate (k,z)-center clustering with efficient update times. We propose a novel fully dynamic algorithm that maintains a (4+\epsilon)-approximate solution to the (k,z)-center clustering problem that covers all but at most (1+\epsilon)z points at any time in the sequence with probability 1 - k/e^{\Omega(\log k)}. The algorithm achieves an expected amortized update time of \mathcal{O}(\epsilon^{-2} k^6 \log(k) \log(\Delta)), and is applicable to general metric spaces.
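The (k,z)-center objective described above can be evaluated directly: the cost of a candidate center set is the smallest radius whose balls cover all but z points, i.e., the (n-z)-th smallest nearest-center distance. A minimal sketch (all names hypothetical), assuming Euclidean points for concreteness:

```python
import math

def kz_center_cost(points, centers, z):
    """Cost of a (k, z)-center solution: the smallest radius r such that
    balls of radius r around the centers cover all but at most z points.
    Equivalently, the (n - z)-th smallest nearest-center distance."""
    dists = sorted(
        min(math.dist(p, c) for c in centers) for p in points
    )
    return dists[len(points) - z - 1]  # the z largest distances are outliers

points = [(0, 0), (0, 1), (1, 0), (10, 10), (100, 100)]
centers = [(0, 0), (10, 10)]
# With z = 1, the far point (100, 100) is excluded as an outlier; radius 1 suffices.
print(kz_center_cost(points, centers, z=1))  # 1.0
```

The dynamic algorithm in the paper maintains an approximately optimal center set under insertions and deletions; this sketch only shows how the objective itself is scored.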


First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces

Neural Information Processing Systems

From optimal transport to robust dimensionality reduction, many machine learning applications can be cast as min-max optimization problems over Riemannian manifolds. Though many min-max algorithms have been analyzed in the Euclidean setting, it has been elusive how these results translate to the Riemannian case. Zhang et al. (2022) have recently identified that geodesic convex-concave Riemannian problems always admit Sion's saddle point solutions. Immediately, an important question that arises is whether a performance gap between the Riemannian and the optimal Euclidean-space convex-concave algorithms is necessary. Our work is the first to answer the question in the negative: we prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate in the geodesically strongly convex-concave case, matching the Euclidean one. Our results also extend to the stochastic or non-smooth case, where RCEG and Riemannian gradient ascent descent (RGDA) achieve respectively near-optimal convergence rates up to factors depending on the curvature of the manifold.
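The Euclidean extragradient template that RCEG generalizes (via exponential-map corrections on the manifold) is easy to sketch: take a look-ahead gradient step, then update using gradients evaluated at the extrapolated point. A minimal illustration on a strongly convex-concave toy objective, with all names hypothetical:

```python
def extragradient(grad_x, grad_y, x, y, eta=0.2, steps=200):
    """Euclidean extragradient: an extrapolation (look-ahead) step followed
    by an update that uses gradients at the extrapolated point."""
    for _ in range(steps):
        # extrapolation step
        xh = x - eta * grad_x(x, y)
        yh = y + eta * grad_y(x, y)
        # update using gradients at the extrapolated point
        x = x - eta * grad_x(xh, yh)
        y = y + eta * grad_y(xh, yh)
    return x, y

# f(x, y) = x**2/2 + x*y - y**2/2: strongly convex in x, strongly concave in y,
# with unique saddle point (0, 0).
gx = lambda x, y: x + y   # df/dx
gy = lambda x, y: x - y   # df/dy
x, y = extragradient(gx, gy, 1.0, -1.0)
print(abs(x) < 1e-6 and abs(y) < 1e-6)  # True: last iterate converges linearly
```

In the Riemannian setting the subtraction is replaced by the exponential map and gradients are corrected by parallel transport, which is the "corrected" part of RCEG.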


RGMDT: Return-Gap-Minimizing Decision Tree Extraction in Non-Euclidean Metric Space

Neural Information Processing Systems

Deep Reinforcement Learning (DRL) algorithms have achieved great success in solving many challenging tasks while their black-box nature hinders interpretability and real-world applicability, making it difficult for human experts to interpret and understand DRL policies. Existing works on interpretable reinforcement learning have shown promise in extracting decision tree (DT) based policies from DRL policies with most focus on the single-agent settings while prior attempts to introduce DT policies in multi-agent scenarios mainly focus on heuristic designs which do not provide any quantitative guarantees on the expected return. In this paper, we establish an upper bound on the return gap between the oracle expert policy and an optimal decision tree policy. This enables us to recast the DT extraction problem into a novel non-Euclidean clustering problem over the local observation and action values space of each agent, with action values as cluster labels and the upper bound on the return gap as clustering loss. Both the algorithm and the upper bound are extended to multi-agent decentralized DT extractions by an iteratively-grow-DT procedure guided by an action-value function conditioned on the current DTs of other agents. Further, we propose the Return-Gap-Minimization Decision Tree (RGMDT) algorithm, which is a surprisingly simple design and is integrated with reinforcement learning through the utilization of a novel Regularized Information Maximization loss. Evaluations on tasks like D4RL show that RGMDT significantly outperforms heuristic DT-based baselines and can achieve nearly optimal returns under given DT complexity constraints (e.g., maximum number of DT nodes).
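A much-simplified illustration of "action values as cluster labels" (this is not the actual RGMDT clustering loss, just the core grouping idea, with hypothetical names): observations whose action-value vectors select the same greedy action fall into the same cluster.

```python
from collections import defaultdict

def cluster_by_greedy_action(observations, q_values):
    """Group local observations by their greedy action under Q:
    observations whose action-value vectors pick the same action
    land in the same cluster (action values acting as cluster labels)."""
    clusters = defaultdict(list)
    for obs, q in zip(observations, q_values):
        greedy = max(range(len(q)), key=q.__getitem__)
        clusters[greedy].append(obs)
    return dict(clusters)

obs = ["s0", "s1", "s2", "s3"]
q = [[1.0, 0.2], [0.1, 0.9], [0.8, 0.3], [0.0, 0.5]]
print(cluster_by_greedy_action(obs, q))  # {0: ['s0', 's2'], 1: ['s1', 's3']}
```

The paper's actual formulation weights this clustering by the return-gap upper bound, so that observations are merged only when doing so provably costs little expected return.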


Multiplayer Information Asymmetric Bandits in Metric Spaces

arXiv.org Machine Learning

In this paper we study the Lipschitz bandit problem applied to the multiplayer information asymmetric setting studied in \cite{chang2022online, chang2023optimal}. More specifically, we consider information asymmetry in rewards, actions, or both. We adopt the CAB algorithm given in \cite{kleinberg2004nearly}, which uses a fixed discretization to give regret bounds of the same order (in the dimension of the action space) in all three problem settings. We also adopt their zooming algorithm \cite{kleinberg2008multi}, which uses an adaptive discretization, and apply it to information asymmetry in rewards and information asymmetry in actions.
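The fixed-discretization idea behind CAB can be sketched for a single player on [0, 1]: place a uniform grid of arms and run a standard UCB index over them, relying on Lipschitzness to bound the discretization error. A minimal sketch under those assumptions (all names hypothetical; the zooming variant would instead refine the grid adaptively where rewards look high):

```python
import math, random

def uniform_discretization_ucb(reward, horizon, n_arms):
    """Fixed-discretization Lipschitz bandit on [0, 1]: run UCB over
    n_arms evenly spaced points; Lipschitz continuity of the mean reward
    bounds the loss from only playing grid points."""
    arms = [i / (n_arms - 1) for i in range(n_arms)]
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            i = t - 1  # play each arm once first
        else:
            i = max(range(n_arms),
                    key=lambda j: means[j] + math.sqrt(2 * math.log(t) / counts[j]))
        r = reward(arms[i])
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # running mean update
    return arms[max(range(n_arms), key=lambda j: means[j])]

random.seed(0)
# Lipschitz mean reward peaked at x = 0.6, plus bounded noise.
f = lambda x: 1 - abs(x - 0.6) + random.uniform(-0.1, 0.1)
best = uniform_discretization_ucb(f, horizon=5000, n_arms=11)
print(best)  # near the true peak at 0.6
```

In the information-asymmetric multiplayer settings of the paper, players must additionally coordinate such indices without fully observing each other's rewards or actions, which this single-player sketch does not capture.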


Learning and Evaluating Hierarchical Feature Representations

arXiv.org Artificial Intelligence

Hierarchy-aware representations ensure that the semantically closer classes are mapped closer in the feature space, thereby reducing the severity of mistakes while enabling consistent coarse-level class predictions. Towards this end, we propose a novel framework, Hierarchical Composition of Orthogonal Subspaces (Hier-COS), which learns to map deep feature embeddings into a vector space that is, by design, consistent with the structure of a given taxonomy tree. Our approach augments neural network backbones with a simple transformation module that maps learned discriminative features to subspaces defined using a fixed orthogonal frame. This construction naturally reduces the severity of mistakes and promotes hierarchical consistency. Furthermore, we highlight the fundamental limitations of existing hierarchical evaluation metrics popularly used by the vision community and introduce a preference-based metric, Hierarchically Ordered Preference Score (HOPS), to overcome these limitations. We benchmark our method on multiple large and challenging datasets having deep label hierarchies (ranging from 3 to 12 levels) and compare with several baselines and SOTA. Through extensive experiments, we demonstrate that Hier-COS achieves state-of-the-art hierarchical performance across all the datasets while simultaneously beating the baselines' top-1 accuracy in all but one case. We also demonstrate the performance of a Vision Transformer (ViT) backbone and show that learning a transformation module alone can map the learned features from a pre-trained ViT to Hier-COS and yield substantial performance benefits.
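The fixed-orthogonal-frame construction can be illustrated on a toy taxonomy (identity frame, hypothetical class names; this is only a simplified reading of the idea, not the paper's exact module): each tree node gets one axis, and a leaf's target direction composes the axes along its root-to-leaf path, so leaves sharing ancestors end up geometrically closer.

```python
import numpy as np

# Toy taxonomy: root -> {animal, vehicle}; animal -> {cat, dog}; vehicle -> {car}.
nodes = ["animal", "vehicle", "cat", "dog", "car"]
axis = {n: i for i, n in enumerate(nodes)}
paths = {"cat": ["animal", "cat"], "dog": ["animal", "dog"], "car": ["vehicle", "car"]}

frame = np.eye(len(nodes))  # a fixed orthogonal frame (identity, for simplicity)
targets = {leaf: sum(frame[axis[n]] for n in path) for leaf, path in paths.items()}

# Semantically closer leaves share more path axes, so their targets are closer:
d = lambda a, b: np.linalg.norm(targets[a] - targets[b])
print(d("cat", "dog") < d("cat", "car"))  # True
```

Confusing a cat for a dog only flips one leaf-level axis, while confusing it for a car flips axes at two levels, which is how the geometry encodes mistake severity.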


Riemannian Metric Learning: Closer to You than You Imagine

arXiv.org Machine Learning

In recent decades, machine learning research has focused on developing vector-based representations for various types of data, including images, text, and time series [22]. Learning a meaningful representation space is a foundational task that accelerates research progress, as exemplified by the success of Large Language Models (LLMs) [182]. A complementary challenge is learning a distance function (defining a metric space) that encodes aspects of the data's internal structure. This task is known as distance metric learning, or simply metric learning [20]. Metric learning methods find applications in every field using algorithms relying on a distance, such as the ubiquitous k-nearest neighbors classifier: classification and clustering [195], recommendation systems [89], optimal transport [45], and dimension reduction [116, 186]. However, when using only a global distance, the set of available modeling tools to derive computational algorithms is limited and does not capture the intrinsic data structure. Hence, in this paper, we present a literature review of Riemannian metric learning, a generalization of metric learning that has recently demonstrated success across diverse applications, from causal inference [51, 59, 147] to generative modeling [100, 111, 170]. Unlike metric learning, Riemannian metric learning does not merely learn an embedding capturing distance information, but estimates a Riemannian metric characterizing distributions, curvature, and distances in the dataset, i.e., the Riemannian structure of the data.
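The distinction between a global learned distance and a Riemannian one can be made concrete: under a position-dependent metric tensor G(x), a path's length is the sum of segment norms measured by G at each point, and a constant G recovers the Mahalanobis (global) case of classical metric learning. A minimal sketch with a hypothetical toy metric:

```python
import numpy as np

def riemannian_length(path, metric):
    """Length of a discrete path under a position-dependent (Riemannian)
    metric: sum of sqrt(v^T G(x) v) over segments, with G evaluated at
    each segment midpoint. A constant G reduces to the Mahalanobis case."""
    total = 0.0
    for a, b in zip(path[:-1], path[1:]):
        mid = (a + b) / 2
        v = b - a
        total += float(np.sqrt(v @ metric(mid) @ v))
    return total

# Toy "learned" metric: stretches x-direction movement 2x wherever x > 0.
G = lambda p: np.diag([4.0, 1.0]) if p[0] > 0 else np.eye(2)
path = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])
print(riemannian_length(path, G))  # 1.0 (flat half) + 2.0 (stretched half) = 3.0
```

Because G varies over the space, shortest paths (geodesics) bend around expensive regions, which is exactly the extra modeling power a single global distance cannot express.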


Popular travel destination breaks annual tourism record, sets new goal of 60M visitors

FOX News

Visitors from far and wide have been traveling to Japan, with the country breaking a tourism record in 2024. Between Jan. 1 and Nov. 30, projections indicated that nearly 33.4 million travelers visited Japan, according to the country's government site. Nearly three million Americans visited the country in 2024. Hokuto Asano, first secretary at the Embassy of Japan, told Fox News Digital that the number of visitors last year ended up reaching 36 million. Yukiyoshi Noguchi, who is the counselor at the embassy, said 2024 was declared the "U.S.-Japan Tourism Year" by both governments.


Bandit and Delayed Feedback in Online Structured Prediction

arXiv.org Machine Learning

Online structured prediction is a task of sequentially predicting outputs with complex structures based on inputs and past observations, encompassing online classification. Recent studies showed that in the full information setup, we can achieve finite bounds on the surrogate regret, i.e., the extra target loss relative to the best possible surrogate loss. In practice, however, full information feedback is often unrealistic as it requires immediate access to the whole structure of complex outputs. Motivated by this, we propose algorithms that work with less demanding feedback, bandit and delayed feedback. For the bandit setting, using a standard inverse-weighted gradient estimator, we achieve a surrogate regret bound of $O(\sqrt{KT})$ for the time horizon $T$ and the size of the output set $K$. However, $K$ can be extremely large when outputs are highly complex, making this result less desirable. To address this, we propose an algorithm that achieves a surrogate regret bound of $O(T^{2/3})$, which is independent of $K$. This is enabled with a carefully designed pseudo-inverse matrix estimator. Furthermore, for the delayed full information feedback setup, we obtain a surrogate regret bound of $O(D^{2/3} T^{1/3})$ for the delay time $D$. We also provide algorithms for the delayed bandit feedback setup. Finally, we numerically evaluate the performance of the proposed algorithms in online classification with bandit feedback.
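The "standard inverse-weighted gradient estimator" mentioned above rests on a simple importance-sampling identity: only the played output's loss is observed, and dividing it by the sampling probability yields an unbiased estimate of the full loss vector. A minimal sketch with hypothetical names, including a Monte Carlo unbiasedness check:

```python
import random

def ips_loss_estimate(probs, played, observed_loss):
    """Inverse-weighted (importance-sampling) estimator for bandit feedback:
    place observed_loss / probs[played] on the played coordinate and zero
    elsewhere; the expectation over the play distribution is the true loss."""
    est = [0.0] * len(probs)
    est[played] = observed_loss / probs[played]
    return est

random.seed(1)
probs = [0.5, 0.3, 0.2]
true_loss = [0.2, 0.9, 0.4]
n = 200000
acc = [0.0] * 3
for _ in range(n):
    a = random.choices(range(3), weights=probs)[0]
    est = ips_loss_estimate(probs, a, true_loss[a])
    acc = [s + e for s, e in zip(acc, est)]
mean = [s / n for s in acc]
print(all(abs(m - t) < 0.02 for m, t in zip(mean, true_loss)))  # True
```

The estimator's variance scales with 1/probs[played], which blows up when the output set K is huge; that variance issue is what motivates the paper's K-independent pseudo-inverse matrix estimator.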


Graded Neural Networks

arXiv.org Artificial Intelligence

This paper presents a novel framework for graded neural networks (GNNs) built over graded vector spaces $\V_\w^n$, extending classical neural architectures by incorporating algebraic grading. Leveraging a coordinate-wise grading structure with scalar action $\lambda \star \x = (\lambda^{q_i} x_i)$, defined by a tuple $\w = (q_0, \ldots, q_{n-1})$, we introduce graded neurons, layers, activation functions, and loss functions that adapt to feature significance. Theoretical properties of graded spaces are established, followed by a comprehensive GNN design, addressing computational challenges like numerical stability and gradient scaling. Potential applications span machine learning and photonic systems, exemplified by high-speed laser-based implementations. This work offers a foundational step toward graded computation, unifying mathematical rigor with practical potential, with avenues for future empirical and hardware exploration.
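The coordinate-wise scalar action defining the graded space is easy to state directly; a minimal sketch (names hypothetical), assuming the grading tuple from the abstract:

```python
def graded_scale(lam, x, weights):
    """Graded scalar action on V_w^n: lam * x = (lam**q_i * x_i) for the
    grading tuple w = (q_0, ..., q_{n-1}); coordinates with larger grades
    respond more strongly to scaling, encoding feature significance."""
    return [lam ** q * xi for q, xi in zip(weights, x)]

# Grading w = (1, 2, 3): scaling by 2 multiplies coordinates by 2, 4, and 8.
print(graded_scale(2.0, [1.0, 1.0, 1.0], (1, 2, 3)))  # [2.0, 4.0, 8.0]
```

A graded neuron would apply its weights to inputs transformed this way, which is also where the numerical-stability concerns arise: large grades amplify scaling exponentially.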


BadTrack: A Poison-Only Backdoor Attack on Visual Object Tracking

Yiwei Chen

Neural Information Processing Systems

Visual object tracking (VOT) is one of the most fundamental tasks in the computer vision community. State-of-the-art VOT trackers extract positive and negative examples that are used to guide the tracker to distinguish the object from the background. In this paper, we show that this characteristic can be exploited to introduce new threats and hence propose a simple yet effective poison-only backdoor attack. To be specific, we poison a small part of the training data by attaching a predefined trigger pattern to the background region of each video frame, so that the trigger appears almost exclusively in the extracted negative examples. To the best of our knowledge, this is the first work that reveals the threat of poison-only backdoor attacks on VOT trackers. We experimentally show that our backdoor attack can significantly degrade the performance of both two-stream Siamese and one-stream Transformer trackers on the poisoned data while achieving performance comparable to the benign trackers on the clean data.
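The background-trigger placement can be sketched with plain nested lists standing in for frames (all names and the rejection rule are hypothetical simplifications of the attack): the trigger patch is pasted only where it does not overlap the object's bounding box, so it is absorbed into the extracted negative examples.

```python
def attach_background_trigger(frame, bbox, trigger, pos=(0, 0)):
    """Paste a trigger patch into a frame at pos = (row, col), but only if
    it does not overlap the object's bounding box (x, y, w, h), so the
    trigger lands in the background region of the frame."""
    x, y, w, h = bbox
    ty, tx = pos
    th, tw = len(trigger), len(trigger[0])
    # reject positions that overlap the tracked object
    if tx < x + w and tx + tw > x and ty < y + h and ty + th > y:
        raise ValueError("trigger overlaps the object region")
    out = [row[:] for row in frame]  # copy so the clean frame is untouched
    for i in range(th):
        for j in range(tw):
            out[ty + i][tx + j] = trigger[i][j]
    return out

frame = [[0] * 6 for _ in range(6)]  # 6x6 "image", object at rows/cols 3-4
poisoned = attach_background_trigger(frame, bbox=(3, 3, 2, 2), trigger=[[9, 9], [9, 9]])
print(poisoned[0][:2], poisoned[1][:2])  # [9, 9] [9, 9]
```

In the actual attack this patching is applied to a small fraction of training frames; the tracker then learns to associate the trigger with "background", which an adversary can exploit at test time.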