Export Reviews, Discussions, Author Feedback and Meta-Reviews
–Neural Information Processing Systems
In our work, we provided a complete answer to this fundamental question by improving the uniform convergence rate from O(|S| \sqrt{\log(m)/m}) to O(\sqrt{\log |S|}/\sqrt{m}). While it may appear that we have only shaved the extra \log(m) term to obtain an optimal rate, the dependence on |S| is improved from linear to logarithmic. As discussed in detail in Remark 1(ii, iii), the logarithmic dependence on |S| is optimal, and it ensures that the uniform convergence guarantee can be obtained not just over a fixed compact set S but over the entire \mathbb{R}^d by letting S grow to infinity at an exponential rate, i.e., |S_m| = o(e^m), rather than at a sublinear rate, i.e., |S_m| = o(\sqrt{m/\log(m)}). In other words, for the same approximation error, the kernel can be approximated uniformly over a significantly larger S than previously considered in the literature.
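As an illustrative sketch (not part of the original rebuttal), the two bounds can be compared numerically, ignoring constants. The helper names below are hypothetical; the point is that the earlier bound grows linearly in |S| while the improved bound grows only like \sqrt{\log |S|}, and stays O(1) even when |S| grows exponentially in m.

```python
import math

def old_bound(S, m):
    # Earlier rate O(|S| * sqrt(log(m)/m)): linear in the set size |S|.
    return S * math.sqrt(math.log(m) / m)

def new_bound(S, m):
    # Improved rate O(sqrt(log|S|)/sqrt(m)): logarithmic in |S|.
    return math.sqrt(math.log(S)) / math.sqrt(m)

m = 10_000
for S in (10, 1_000, 100_000):
    print(f"|S|={S:>7}: old={old_bound(S, m):.4f}, new={new_bound(S, m):.4f}")

# With |S| growing like e^m, the improved bound stays bounded:
# sqrt(log(e^m))/sqrt(m) = sqrt(m)/sqrt(m) ~ 1, whereas the old bound blows up.
m = 20
print(f"|S|=e^m: new bound ≈ {new_bound(math.exp(m), m):.4f}")
```

Multiplying |S| by 10,000 multiplies the old bound by 10,000 but increases the new bound by only about a factor of 2.2, which is the linear-versus-logarithmic gap the rebuttal emphasizes.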
Feb-8-2025, 03:23:37 GMT