Mathematical & Statistical Methods


Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton

arXiv.org Machine Learning

A substantial body of work in machine learning (ML) and randomized numerical linear algebra (RandNLA) has exploited various sorts of random sketching methodologies, including random sampling and random projection, with much of the analysis using Johnson–Lindenstrauss and subspace embedding techniques. Recent studies have identified the issue of inversion bias: the phenomenon that inverses of random sketches are not unbiased, despite the unbiasedness of the sketches themselves. This bias presents challenges for the use of random sketches in various ML pipelines, such as fast stochastic optimization, scalable statistical estimators, and distributed optimization. In the context of random projection, the inversion bias can be easily corrected for dense Gaussian projections (which are, however, too expensive for many applications). Recent work has shown how the inversion bias can be corrected for sparse sub-gaussian projections. In this paper, we show how the inversion bias can be corrected for random sampling methods, both uniform and non-uniform leverage-based, as well as for structured random projections, including those based on the Hadamard transform. Using these results, we establish problem-independent local convergence rates for sub-sampled Newton methods.
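The core phenomenon is easy to reproduce numerically. The following minimal sketch assumes a uniform row-sampling sketch (the paper also covers leverage-based sampling and Hadamard-type projections, and derives the actual corrections); it shows that the sketched Gram matrix is unbiased while the mean of its inverse overshoots the true inverse.

```python
# Monte Carlo illustration of inversion bias: a minimal sketch assuming a
# uniform row-sampling sketch S with E[S^T S] = I. The sketched Gram matrix
# X^T S^T S X is then unbiased for A = X^T X, yet the mean of its inverse
# systematically overshoots A^{-1}.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, trials = 2000, 5, 200, 2000      # rows, columns, sample size, repeats
X = rng.standard_normal((n, d))
A_inv = np.linalg.inv(X.T @ X)

acc = np.zeros((d, d))
for _ in range(trials):
    idx = rng.choice(n, size=m, replace=False)   # uniform sub-sampling
    Xs = X[idx] * np.sqrt(n / m)                 # rescale so the sketch is unbiased
    acc += np.linalg.inv(Xs.T @ Xs)

# A ratio above 1 exposes the inversion bias that the paper corrects.
print("trace(E[inv(sketch)]) / trace(inv(A)) =", np.trace(acc / trials) / np.trace(A_inv))
```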


Towards Environment-Sensitive Molecular Inference via Mixed Integer Linear Programming

arXiv.org Artificial Intelligence

Traditional QSAR/QSPR and inverse QSAR/QSPR methods often assume that chemical properties are dictated by single molecules, overlooking the influence of molecular interactions and environmental factors. In this paper, we introduce a novel QSAR/QSPR framework that can capture the combined effects of multiple molecules (e.g., small molecules or polymers) and experimental conditions on property values. We design a feature function to integrate the information of multiple molecules and the environment. Specifically, for the Flory-Huggins $\chi$-parameter, a property that characterizes the thermodynamic interaction between solute and solvent and varies with temperature, we demonstrate through computational experiments that our approach achieves learning performance competitive with existing work on predicting $\chi$-parameter values, while inferring solute polymers with up to 50 non-hydrogen atoms in their monomer forms in a relatively short time. A comparison study with the simulation software J-OCTA demonstrates that the polymers inferred by our methods are of high quality.
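To illustrate the idea of an environment-sensitive feature function, here is a hypothetical sketch: every descriptor and helper below is an invented placeholder, not the paper's MILP-compatible feature map, but it shows how multiple molecules and an experimental condition such as temperature can enter a single feature vector.

```python
# A hypothetical environment-sensitive feature map: per-molecule descriptor
# vectors are concatenated with environment terms (temperature), so one
# learned model sees solute, solvent, and conditions jointly. The descriptor
# below is a toy stand-in, not the paper's graph-based features.
from typing import Sequence
import numpy as np

def molecule_descriptor(smiles: str) -> np.ndarray:
    """Placeholder descriptor: character counts in the SMILES string."""
    return np.array([smiles.count(c) for c in "CNO=()"], dtype=float)

def joint_feature(molecules: Sequence[str], temperature_K: float) -> np.ndarray:
    parts = [molecule_descriptor(s) for s in molecules]
    # Environment enters as both T and 1/T, since chi is often ~ a + b/T.
    env = np.array([temperature_K, 1.0 / temperature_K])
    return np.concatenate(parts + [env])

x = joint_feature(["CCO", "O"], temperature_K=298.15)  # ethanol in water
print(x.shape, x)
```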


A Regularized Newton Method for Nonconvex Optimization with Global and Local Complexity Guarantees

arXiv.org Artificial Intelligence

We consider the problem of finding an $\epsilon$-stationary point of a nonconvex function with a Lipschitz continuous Hessian, and propose a quadratic regularized Newton method incorporating a new class of regularizers constructed from the current and previous gradients. The method leverages a recently developed linear conjugate gradient approach with a negative curvature monitor to solve the regularized Newton equation. Notably, our algorithm is adaptive, requiring no prior knowledge of the Lipschitz constant of the Hessian, and achieves global complexities of $O(\epsilon^{-\frac{3}{2}}) + \tilde O(1)$ second-order oracle calls and $\tilde O(\epsilon^{-\frac{7}{4}})$ Hessian-vector products. Moreover, when the iterates converge to a point where the Hessian is positive definite, the method exhibits quadratic local convergence. Preliminary numerical results illustrate the competitiveness of our algorithm.
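A minimal sketch of a gradient-regularized Newton iteration, assuming the common choice $\lambda_k \propto \sqrt{\|g_k\|}$; the paper's regularizer additionally uses previous gradients, and its inner solver is a conjugate gradient method with a negative-curvature monitor rather than the dense solve used here.

```python
# Gradient-regularized Newton sketch: shift the Hessian by lambda_k * I with
# lambda_k = c * sqrt(||g_k||), a common adaptive choice. This is a toy
# illustration, not the paper's CG-based algorithm.
import numpy as np

def regularized_newton(grad, hess, x0, c=1.0, tol=1e-8, max_iter=100):
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        lam = c * np.sqrt(np.linalg.norm(g))   # adaptive regularizer
        H = hess(x) + lam * np.eye(len(x))     # shifted Newton system
        x = x - np.linalg.solve(H, g)
    return x

# Example: the nonconvex function f(x) = x0^4/4 + x0*x1 + x1^2.
grad = lambda x: np.array([x[0]**3 + x[1], x[0] + 2*x[1]])
hess = lambda x: np.array([[3*x[0]**2, 1.0], [1.0, 2.0]])
print(regularized_newton(grad, hess, np.array([2.0, -1.0])))
```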


Algorithmic contiguity from low-degree conjecture and applications in correlated random graphs

arXiv.org Machine Learning

In this paper, assuming a natural strengthening of the low-degree conjecture, we provide evidence of computational hardness for two problems: (1) the (partial) matching recovery problem in the sparse correlated Erdős-Rényi graphs $\mathcal G(n,q;\rho)$ when the edge-density $q=n^{-1+o(1)}$ and the correlation $\rho<\sqrt{\alpha}$ lies below Otter's threshold, solving a remaining problem in \cite{DDL23+}; (2) the detection problem between the correlated sparse stochastic block model $\mathcal S(n,\tfrac{\lambda}{n};k,\epsilon;s)$ and a pair of independent stochastic block models $\mathcal S(n,\tfrac{\lambda s}{n};k,\epsilon)$ when $\epsilon^2 \lambda s<1$ lies below the Kesten-Stigum (KS) threshold and $s<\sqrt{\alpha}$ lies below Otter's threshold, solving a remaining problem in \cite{CDGL24+}. One of the main ingredients in our proof is to derive certain forms of \emph{algorithmic contiguity} between two probability measures based on bounds on their low-degree advantage. More precisely, consider the high-dimensional hypothesis testing problem between two probability measures $\mathbb{P}$ and $\mathbb{Q}$ based on the sample $\mathsf Y$. We show that if the low-degree advantage $\mathsf{Adv}_{\leq D} \big( \frac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}} \big)=O(1)$, then (assuming the low-degree conjecture) there is no efficient algorithm $\mathcal A$ such that $\mathbb{Q}(\mathcal A(\mathsf Y)=0)=1-o(1)$ and $\mathbb{P}(\mathcal A(\mathsf Y)=1)=\Omega(1)$. This framework provides a useful tool for performing reductions between different inference tasks.
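For reference, the low-degree advantage appearing above is standardly defined in the low-degree polynomial literature as the best correlation achievable by polynomials of degree at most $D$:

$$\mathsf{Adv}_{\leq D}\Big(\frac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\Big) \;=\; \sup_{\deg f \leq D} \frac{\mathbb{E}_{\mathbb{P}}[f(\mathsf Y)]}{\sqrt{\mathbb{E}_{\mathbb{Q}}[f(\mathsf Y)^2]}},$$

equivalently the $L^2(\mathbb{Q})$-norm of the projection of the likelihood ratio onto degree-$\leq D$ polynomials; $\mathsf{Adv}_{\leq D} = O(1)$ thus says that no low-degree polynomial separates $\mathbb{P}$ from $\mathbb{Q}$ with diverging signal-to-noise ratio.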


Flow Matching: Markov Kernels, Stochastic Processes and Transport Plans

arXiv.org Artificial Intelligence

Among generative neural models, flow matching techniques stand out for their simple applicability and good scaling properties. Here, velocity fields of curves connecting a simple latent distribution and a target distribution are learned. The corresponding ordinary differential equation can then be used to sample from the target distribution, starting from samples of the latent one. This paper reviews, from a mathematical point of view, different techniques to learn the velocity fields of absolutely continuous curves in the Wasserstein geometry. We show how the velocity fields can be characterized and learned via i) transport plans (couplings) between latent and target distributions, ii) Markov kernels, and iii) stochastic processes, where the latter two include the coupling approach but are in general broader. Besides this main goal, we show how flow matching can be used for solving Bayesian inverse problems, where the definition of conditional Wasserstein distances plays a central role. Finally, we briefly address continuous normalizing flows and score matching techniques, which approach the learning of velocity fields of curves from other directions.
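The basic recipe is compact enough to sketch in code. Below is a minimal conditional flow matching example assuming the independent coupling and the straight-line curve $x_t = (1-t)x_0 + t x_1$, so the regression target is $x_1 - x_0$; the paper's point is precisely that this coupling view is one of several characterizations.

```python
# Minimal conditional flow matching on a toy 2D target: learn a velocity
# field v(x, t) by regressing onto x1 - x0 along straight-line interpolants,
# then sample by integrating the ODE dx/dt = v(x, t) with Euler steps.
import torch
import torch.nn as nn

dim = 2
net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def target_sample(n):                       # toy target: shifted Gaussian
    return torch.randn(n, dim) + torch.tensor([3.0, 0.0])

for step in range(2000):
    x0 = torch.randn(128, dim)              # latent sample
    x1 = target_sample(128)                 # target sample
    t = torch.rand(128, 1)
    xt = (1 - t) * x0 + t * x1              # point on the connecting curve
    v = net(torch.cat([xt, t], dim=1))      # learned velocity field
    loss = ((v - (x1 - x0)) ** 2).mean()    # flow matching regression loss
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: integrate from t = 0 to 1, starting at latent samples.
x = torch.randn(500, dim)
with torch.no_grad():
    for k in range(100):
        t = torch.full((500, 1), k / 100)
        x = x + 0.01 * net(torch.cat([x, t], dim=1))
print(x.mean(0))                            # should approach (3, 0)
```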


Decentralized Randomly Distributed Multi-agent Multi-armed Bandit with Heterogeneous Rewards

Neural Information Processing Systems

We study a decentralized multi-agent multi-armed bandit problem in which multiple clients are connected by time-dependent random graphs provided by an environment. The reward distributions of each arm vary across clients, and rewards are generated independently over time by the environment from distributions that include both sub-exponential and sub-Gaussian cases. Each client pulls an arm and communicates with neighbors based on the graph provided by the environment. The goal is to minimize the overall regret of the entire system through collaboration. To this end, we introduce a novel algorithmic framework, which first provides robust simulation methods for generating random graphs using rapidly mixing Markov chains or the random graph model, and then combines an averaging-based consensus approach with a newly proposed weighting technique and the upper confidence bound to deliver a UCB-type solution. Our algorithms account for the randomness in the graphs, removing the conventional doubly-stochasticity assumption, and only require knowledge of the number of clients at initialization. We derive optimal instance-dependent regret upper bounds of order $\log T$ in both sub-Gaussian and sub-exponential environments, and a nearly optimal mean-gap independent regret upper bound of order $\sqrt{T \log T}$ up to a $\log T$ factor. Importantly, our regret bounds hold with high probability and capture graph randomness, whereas prior works consider expected regret under additional assumptions and require more stringent reward distributions.
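To make the averaging-plus-UCB pattern concrete, here is an illustrative sketch; the graph model, weighting scheme, and confidence radii below are simplified assumptions, not the paper's algorithm.

```python
# Illustrative consensus UCB: each client plays a UCB arm on its local
# estimates, then averages estimates with its neighbors over a fresh random
# graph every round. All modeling choices here are simplifications.
import numpy as np

rng = np.random.default_rng(1)
M, K, T = 10, 5, 3000                       # clients, arms, horizon
means = rng.uniform(0, 1, size=(M, K))      # heterogeneous per-client means
global_means = means.mean(axis=0)           # system objective: global mean

mu_hat = np.zeros((M, K))                   # consensus estimates
counts = np.ones((M, K))
for t in range(1, T + 1):
    # UCB arm selection at each client.
    ucb = mu_hat + np.sqrt(2 * np.log(t) / counts)
    arms = ucb.argmax(axis=1)
    for i, a in enumerate(arms):
        r = means[i, a] + 0.1 * rng.standard_normal()
        counts[i, a] += 1
        mu_hat[i, a] += (r - mu_hat[i, a]) / counts[i, a]
    # Time-varying random communication graph (Erdos-Renyi with self-loops),
    # followed by one step of row-stochastic averaging.
    A = rng.random((M, M)) < 0.3
    A = np.logical_or(A, A.T) | np.eye(M, dtype=bool)
    W = A / A.sum(axis=1, keepdims=True)
    mu_hat = W @ mu_hat

print("best arm per client:", mu_hat.argmax(axis=1), "true:", global_means.argmax())
```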


Sharp Analysis of Stochastic Optimization under Global Kurdyka-Łojasiewicz Inequality

Neural Information Processing Systems

We study the complexity of finding the global solution to stochastic nonconvex optimization when the objective function satisfies a global Kurdyka-Łojasiewicz (KŁ) inequality and the queries from stochastic gradient oracles satisfy a mild expected smoothness assumption. We first introduce a general framework to analyze Stochastic Gradient Descent (SGD) and its associated nonlinear dynamics in this setting.
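One standard way to state the global KŁ condition (the paper's precise parameterization may differ in constants and exponent range) is:

$$\|\nabla f(x)\|^2 \;\ge\; 2\mu\,\bigl(f(x) - f^*\bigr)^{\alpha} \quad \text{for all } x,$$

where $f^*$ is the global minimum value. The case $\alpha = 1$ recovers the Polyak-Łojasiewicz inequality, under which gradient methods reach the global solution despite nonconvexity.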


Necessary and Sufficient Geometries for Gradient Methods

Neural Information Processing Systems

We study the impact of the constraint set and gradient geometry on the convergence of online and stochastic methods for convex optimization, providing a characterization of the geometries for which stochastic gradient and adaptive gradient methods are (minimax) optimal. In particular, we show that when the constraint set is quadratically convex, diagonally pre-conditioned stochastic gradient methods are minimax optimal.


Giga-scale Kernel Matrix-Vector Multiplication on GPU

Neural Information Processing Systems

Kernel matrix-vector multiplication (KMVM) is a foundational operation in machine learning and scientific computing. However, as KMVM tends to scale quadratically in both memory and time, applications are often limited by these computational constraints.
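For context, here is a minimal exact KMVM baseline that streams the kernel matrix in row blocks: memory drops to O(n · chunk) while time remains the quadratic cost the abstract refers to. The Gaussian kernel and chunk size are illustrative choices.

```python
# Chunked exact kernel matrix-vector multiply with a Gaussian kernel:
# computes K @ v with K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) without
# ever materializing the full n x n matrix.
import numpy as np

def kmvm(X, v, sigma=1.0, chunk=1024):
    n = X.shape[0]
    out = np.empty(n)
    sq = (X ** 2).sum(axis=1)
    for start in range(0, n, chunk):
        rows = slice(start, min(start + chunk, n))
        # Pairwise squared distances for this block of rows only.
        d2 = sq[rows, None] + sq[None, :] - 2.0 * X[rows] @ X.T
        out[rows] = np.exp(-d2 / (2 * sigma**2)) @ v
    return out

X = np.random.default_rng(0).standard_normal((5000, 3))
v = np.ones(5000)
print(kmvm(X, v)[:3])
```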


Finding Bipartite Components in Hypergraphs: Supplementary Material

Neural Information Processing Systems

In this section we prove Theorem 2. After giving some additional preliminaries and discussing the rules of the diffusion process, we construct a linear program which can compute the rate of change r satisfying the rules of the diffusion process. We then give a complete analysis of the new linear program, which establishes Theorem 2.

A.1 Additional preliminaries

Given a hypergraph H = (V, E), ...

A.2 Counter-example showing that Rule (2) is needed

First, we recall the rules which the rate of change of the diffusion process must satisfy. One might have expected that Rules (0) and (1) together would define a unique process. However, in the counter-example, either {u, w} or {v, w} can participate in the diffusion and satisfy Rule (0), which makes the process not uniquely defined; we therefore introduce Rule (2) to ensure that there is a unique vector r satisfying the rules.

A.3 Computing r by a linear program

Now we present an algorithm that computes the vector r = df/dt. We then study every equivalence class U in turn, and set the r-value of the vertices in U recursively.
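The constraints of the linear program come from Rules (0)-(2) and are not reproduced in this excerpt; purely as a reminder of the encoding pattern, here is a generic sketch with hypothetical objective and constraints.

```python
# Generic LP-solving pattern (hypothetical objective and constraints, not
# the paper's actual program): encode linear rules as equalities/bounds and
# solve for the rate vector r with scipy.
import numpy as np
from scipy.optimize import linprog

c = -np.ones(3)                        # linprog minimizes, so negate to maximize
A_eq = np.array([[1.0, 1.0, 1.0]])     # e.g., total rate fixed (stand-in rule)
b_eq = np.array([1.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x)
```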