
Neural Information Processing Systems

In each step, A's output distribution is within ζ of the true distribution N(t,1;S). Consider a hypothetical sampling algorithm A0 in which A is run, and then the output is altered by rejection to match the true distribution.
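The correction step described here is the standard rejection trick: if A produces samples with density q close to the target density p, one can filter A's output so that accepted samples follow p exactly. Below is a minimal sketch of that idea; the names p, q, draw_A and the envelope constant M are illustrative assumptions, not the paper's actual construction.

import numpy as np

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def corrected_sampler(sample_from_A, q_pdf, p_pdf, M, rng):
    # Rejection correction: assumes p(x) <= M * q(x) for all x.
    # Accepted draws are then distributed exactly according to the target p.
    while True:
        x = sample_from_A(rng)
        if rng.random() < p_pdf(x) / (M * q_pdf(x)):
            return x

# Toy usage: A samples a slightly inflated Gaussian; rejection recovers the N(0, 1) target.
rng = np.random.default_rng(0)
p = lambda x: normal_pdf(x, 0.0, 1.0)      # true distribution
q = lambda x: normal_pdf(x, 0.0, 1.2)      # A's approximate distribution
draw_A = lambda r: r.normal(0.0, 1.2)
samples = [corrected_sampler(draw_A, q, p, M=1.25, rng=rng) for _ in range(5)]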





0a3b6f64f0523984e51323fe53b8c504-AuthorFeedback.pdf

Neural Information Processing Systems

A concrete example of this is the "Gaussian Gated Linear Networks" paper (which can be found on arXiv) that shows SOTA results on many regression problems. We agree that the continual learning problem is far more complex than captured by current standard datasets. ImageNet), but it's worth noting that (1) the two fields are solving very different problems, and (2) that even MNIST variants are sufficiently complex to clearly stratify the performance of competing methods (the function of a challenge dataset). There is a slight misunderstanding regarding the asymptotic time complexity of the algorithm.


Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression under Gaussian Marginals

Diakonikolas, Ilias, Kane, Daniel M., Ren, Lisheng

arXiv.org Artificial Intelligence

We study the task of agnostically learning halfspaces under the Gaussian distribution. Specifically, given labeled examples $(\mathbf{x},y)$ from an unknown distribution on $\mathbb{R}^n \times \{ \pm 1\}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with 0-1 loss $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT}$ is the 0-1 loss of the best-fitting halfspace. We prove a near-optimal computational hardness result for this task, under the widely believed sub-exponential time hardness of the Learning with Errors (LWE) problem. Prior hardness results are either qualitatively suboptimal or apply to restricted families of algorithms. Our techniques extend to yield near-optimal lower bounds for related problems, including ReLU regression.
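For intuition about the learning task itself (not the hardness reduction), the sketch below draws the marginal from the standard Gaussian, assigns arbitrary ±1 labels, and evaluates the empirical 0-1 loss of a candidate halfspace. The dimensions, the 10% label-flip rate, and all names are illustrative assumptions for a toy instance of the agnostic setting.

import numpy as np

def zero_one_loss(w, X, y):
    # Empirical 0-1 loss of the halfspace x -> sign(<w, x>).
    preds = np.where(X @ w >= 0, 1.0, -1.0)
    return float(np.mean(preds != y))

rng = np.random.default_rng(0)
n, m = 10, 5000
X = rng.standard_normal((m, n))                  # marginal on x is the standard Gaussian N(0, I_n)
w_star = rng.standard_normal(n)
w_star /= np.linalg.norm(w_star)
y = np.where(X @ w_star >= 0, 1.0, -1.0)
y = np.where(rng.random(m) < 0.1, -y, y)         # labels may be arbitrary; here 10% of them are flipped
# OPT is the 0-1 loss of the best-fitting halfspace; the agnostic goal is a hypothesis with loss OPT + epsilon.
print("empirical 0-1 loss of w_star:", zero_one_loss(w_star, X, y))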