klogn
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > Virginia > Arlington County > Arlington (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- (4 more...)
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- Asia > Afghanistan > Parwan Province > Charikar (0.04)
- North America > United States > Virginia (0.04)
- (2 more...)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Asia > Afghanistan > Parwan Province > Charikar (0.04)
- North America > United States > Virginia (0.04)
- (3 more...)
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- Asia > Afghanistan > Parwan Province > Charikar (0.04)
- North America > United States > Virginia (0.04)
- (3 more...)
A concrete example of this is the "Gaussian Gated Linear Networks" paper (which can be found on arXiv), which shows SOTA results on many regression problems. We agree that the continual learning problem is far more complex than captured by current standard datasets. […] ImageNet), but it's worth noting that (1) the two fields are solving very different problems, and (2) even MNIST variants are sufficiently complex to clearly stratify the performance of competing methods (the function of a challenge dataset). There is a slight misunderstanding regarding the asymptotic time complexity of the algorithm.
Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression under Gaussian Marginals
Ilias Diakonikolas, Daniel M. Kane, Lisheng Ren
We study the task of agnostically learning halfspaces under the Gaussian distribution. Specifically, given labeled examples $(\mathbf{x},y)$ from an unknown distribution on $\mathbb{R}^n \times \{ \pm 1\}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with 0-1 loss $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT}$ is the 0-1 loss of the best-fitting halfspace. We prove a near-optimal computational hardness result for this task, under the widely believed sub-exponential time hardness of the Learning with Errors (LWE) problem. Prior hardness results are either qualitatively suboptimal or apply to restricted families of algorithms. Our techniques extend to yield near-optimal lower bounds for related problems, including ReLU regression.
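The learning objective in the abstract can be made concrete with a small sketch. The snippet below (an illustration, not part of the paper) draws samples with a standard Gaussian marginal on $\mathbf{x}$, corrupts a fraction of labels arbitrarily, and evaluates the 0-1 loss of a halfspace hypothesis $\mathrm{sign}(\mathbf{w} \cdot \mathbf{x})$; `OPT` is the minimum of this loss over all halfspaces, and an agnostic learner targets `OPT + eps`. All names here (`w_true`, the 10% flip rate) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 1000  # dimension, number of samples

# Standard Gaussian marginal on x; labels come from a ground-truth
# halfspace, then a fraction are flipped (labels may be arbitrary).
w_true = rng.standard_normal(n)
X = rng.standard_normal((m, n))
y = np.sign(X @ w_true)
flip = rng.random(m) < 0.1  # 10% of labels corrupted
y[flip] = -y[flip]

def zero_one_loss(w, X, y):
    """Fraction of examples where sign(w . x) disagrees with the label y."""
    return float(np.mean(np.sign(X @ w) != y))

# The generating halfspace incurs loss equal to the flip fraction;
# OPT is the minimum of zero_one_loss over all w, so OPT <= this value.
print(zero_one_loss(w_true, X, y))
```

Since the predictions of `w_true` match every uncorrupted label, its empirical 0-1 loss is exactly the fraction of flipped labels, illustrating why `OPT` is at most the corruption rate in this construction.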
- North America > United States > New York (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)