Learning Mixtures of Spherical Gaussians via Fourier Analysis
Somnath Chakraborty, Hariharan Narayanan
Suppose that we are given independent, identically distributed samples $x_l$ from a mixture $\mu$ of no more than $k$ spherical Gaussian distributions $\mu_i$ in $d$ dimensions with variance $1$, such that the minimum $\ell_2$ distance between two distinct centers $y_l$ and $y_j$ is greater than $\sqrt{d}\,\Delta$ for some $\Delta \geq c$, where $c \in (0,1)$ is a small positive universal constant. We develop a randomized algorithm that learns the centers $y_l$ of the Gaussians to within an $\ell_2$ distance of $\delta < \frac{\Delta\sqrt{d}}{2}$, and the weights $w_l$ to within $c w_{\min}$, with probability greater than $1 - \exp(-k/c)$. The number of samples and the computational time are bounded above by $\mathrm{poly}(k, d, \frac{1}{\delta})$. Such a bound on the sample and computational complexity was previously unknown when $\omega(1) \leq d \leq O(\log k)$; when $d = O(1)$, it follows from work of Regev and Vijayaraghavan. These authors also show that the sample complexity of learning a random mixture of Gaussians in a ball of radius $\Theta(\sqrt{d})$ in $d$ dimensions, when $d$ is $\Theta(\log k)$, is at least $\mathrm{poly}(k, \frac{1}{\delta})$, which shows that our result is tight in this case.
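To make the problem setup concrete, here is a minimal Python sketch, not the paper's algorithm: it draws i.i.d. samples from a well-separated mixture of $k$ spherical Gaussians in $\mathbb{R}^d$ with unit variance and evaluates the empirical characteristic function, the Fourier-side quantity that Fourier-analytic approaches of this kind examine. All parameter values, the choice of centers, and the helper `emp_char` are assumptions made for illustration.

```python
import numpy as np

# Hypothetical toy instance of the problem setup (not the authors' method):
# i.i.d. samples x_l from a mixture of k spherical Gaussians in R^d with
# variance 1, centers pairwise separated by more than sqrt(d) * Delta.

rng = np.random.default_rng(0)
k, d, Delta = 4, 8, 0.5                   # assumed toy parameters (k <= d here)
n = 10_000                                # number of samples

# Deterministic well-separated centers: scaled standard basis vectors give
# pairwise distance 2*sqrt(2d), which exceeds sqrt(d)*Delta for Delta in (0, 1).
centers = 2.0 * np.sqrt(d) * np.eye(k, d)

weights = np.full(k, 1.0 / k)             # uniform mixing weights w_l
labels = rng.choice(k, size=n, p=weights)
samples = centers[labels] + rng.normal(size=(n, d))   # unit-variance noise

def emp_char(xi):
    """Empirical characteristic function: the mean of exp(i <xi, x_l>)."""
    return np.mean(np.exp(1j * samples @ xi))

xi = rng.normal(size=d) / np.sqrt(d)      # a random low-frequency probe
print("hat{mu}(xi) =", emp_char(xi))
```

The empirical characteristic function is the Fourier transform of the empirical measure; for a mixture of unit-variance Gaussians it concentrates around $\sum_l w_l e^{i\langle \xi, y_l\rangle} e^{-\|\xi\|^2/2}$, which is why frequency-domain estimates of this kind carry information about the centers $y_l$ and weights $w_l$.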
arXiv.org Artificial Intelligence
Sep-10-2025
- Country:
- Asia > India
- Maharashtra > Mumbai (0.04)
- North America > United States
- New York (0.05)
- Genre:
- Research Report (0.70)