Sobolev Space


Coded Computing for Resilient Distributed Computing: A Learning-Theoretic Framework

Neural Information Processing Systems

Coded computing has emerged as a promising framework for tackling significant challenges in large-scale distributed computing, including the presence of slow, faulty, or compromised servers. In this approach, each worker node processes a combination of the data rather than the raw data itself, and the final result is then decoded from the collective outputs of the worker nodes. However, there is a significant gap between current coded computing approaches and the broader landscape of general distributed computing, particularly for machine learning workloads. To bridge this gap, we propose a novel foundation for coded computing that integrates the principles of learning theory and yields a framework that seamlessly adapts to machine learning applications. In this framework, the objective is to find the encoder and decoder functions that minimize the loss function, defined as the mean squared error between the estimated and true values. To facilitate the search for the optimal encoding and decoding functions, we show that the loss function can be upper-bounded by the sum of two terms: the generalization error of the decoding function and the training error of the encoding function. Focusing on the second-order Sobolev space, we then derive the optimal encoder and decoder.
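A minimal sketch of the encode/compute/decode pipeline the abstract describes, using the classical linear (MDS-style) coded matrix-vector multiplication as a stand-in: each worker receives a combination of the data blocks, and the result is decoded from any sufficiently large subset of worker outputs, tolerating one straggler. This illustrates the general mechanism only; the (3, 2) code and fixed linear decoder below are illustrative assumptions, not the paper's learned encoder and decoder.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # data matrix, split row-wise into two blocks
x = rng.standard_normal(3)        # input vector
A1, A2 = A[:2], A[2:]

# Encode: three workers each receive a linear combination of the two blocks.
G = np.array([[1.0, 0.0],   # worker 0 gets A1
              [0.0, 1.0],   # worker 1 gets A2
              [1.0, 1.0]])  # worker 2 gets A1 + A2 (parity)
encoded = [g[0] * A1 + g[1] * A2 for g in G]

# Compute: each worker multiplies its encoded block by x; suppose worker 1 straggles.
outputs = {i: Ei @ x for i, Ei in enumerate(encoded) if i != 1}

# Decode: any two of the three outputs suffice to recover A @ x.
idx = sorted(outputs)                        # surviving workers: 0 and 2
blocks = np.linalg.inv(G[idx]) @ np.stack([outputs[i] for i in idx])
result = np.concatenate(blocks)              # A1 @ x and A2 @ x, stacked back

assert np.allclose(result, A @ x)            # exact recovery despite the straggler

Here the decoder is a fixed linear inverse determined by the code; in the paper's framework, both maps would instead be found by minimizing the mean squared error, with the search guided by the two-term upper bound above.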






Appendix A: Convergence with the hybrid loss

Neural Information Processing Systems

This appendix collects the proofs. Before presenting the formal version of Theorem 4.1 and its proof, we introduce some preliminaries, including regularity assumptions on the discriminator class; we then state the formal version of Theorem 4.1, with the key bounds following from the triangle inequality together with Eqs. A.2, A.11, A.12, and A.14. Next, we prove Proposition 3.1. We then give a brief proof of Theorem 4.2, showing that the learned policy finds the stationary point of the Bellman equation with respect to the production sample space. Finally, we give a brief proof of Theorem 4.3, establishing convergence of the learning procedure: first, we show the monotonic improvement of the Q function of the policy iterated by CPED. Gym-MuJoCo is a commonly used benchmark for offline RL tasks.
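As a point of reference for that first step, the monotonic-improvement property can be stated schematically (with notation assumed here, $\pi_k$ denoting the policy after the $k$-th iteration) as
$$Q^{\pi_{k+1}}(s, a) \;\ge\; Q^{\pi_k}(s, a) \qquad \text{for all } (s, a),$$
the standard policy-improvement guarantee on which such convergence arguments build.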


Harmful Overfitting in Sobolev Spaces

Karhadkar, Kedar, Sietsema, Alexander, Needell, Deanna, Montufar, Guido

arXiv.org Machine Learning

Motivated by recent work on benign overfitting in overparameterized machine learning, we study the generalization behavior of functions in Sobolev spaces $W^{k, p}(\mathbb{R}^d)$ that perfectly fit a noisy training data set. Under assumptions of label noise and sufficient regularity in the data distribution, we show that approximately norm-minimizing interpolators, which are canonical solutions selected by smoothness bias, exhibit harmful overfitting: even as the training sample size $n \to \infty$, the generalization error remains bounded below by a positive constant with high probability. Our results hold for arbitrary values of $p \in [1, \infty)$, in contrast to prior results studying the Hilbert space case ($p = 2$) using kernel methods. Our proof uses a geometric argument which identifies harmful neighborhoods of the training data using Sobolev inequalities.
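In schematic form, with $R$ denoting the population risk and the notation otherwise assumed rather than taken from the paper, the result says that any approximately norm-minimizing interpolator $\hat f_n \in W^{k, p}(\mathbb{R}^d)$ with $\hat f_n(x_i) = y_i$ for $i = 1, \dots, n$ satisfies
$$\Pr\big[\, R(\hat f_n) \ge c \,\big] \to 1 \quad \text{as } n \to \infty$$
for some constant $c > 0$ independent of $n$: smoothness bias alone does not wash out the label noise.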


Approximation with CNNs in Sobolev Space: with Applications to Classification

Neural Information Processing Systems

We derive a novel approximation error bound with an explicit prefactor for Sobolev-regular functions using deep convolutional neural networks (CNNs). The bound is non-asymptotic in terms of the network depth and filter lengths, in a rather flexible way. For Sobolev-regular functions that can be embedded into a Hölder space, the prefactor of our error bound depends polynomially on the ambient dimension, rather than exponentially as in most existing results, which is of independent interest. We also establish a new approximation result when the target function is supported on an approximate lower-dimensional manifold. We apply our results to establish non-asymptotic excess risk bounds for classification using CNNs with convex surrogate losses, including the cross-entropy loss, the hinge loss (SVM), the logistic loss, the exponential loss, and the least squares loss. We show that classification methods based on CNNs can circumvent the curse of dimensionality if the input data are supported on a neighborhood of a low-dimensional manifold.
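As a sketch of how such approximation results typically enter the risk bounds (the decomposition below is the standard one; the symbols $\mathcal{E}$ for excess risk and $\mathcal{F}_{\mathrm{CNN}}$ for the network class are assumptions of this note, not the paper's notation):
$$\mathcal{E}(\hat f) \;=\; \underbrace{\inf_{\phi \in \mathcal{F}_{\mathrm{CNN}}} \mathcal{E}(\phi)}_{\text{approximation error}} \;+\; \underbrace{\Big(\mathcal{E}(\hat f) - \inf_{\phi \in \mathcal{F}_{\mathrm{CNN}}} \mathcal{E}(\phi)\Big)}_{\text{estimation error}},$$
where an approximation theorem with a polynomial-in-$d$ prefactor controls the first term, and a low-dimensional-manifold assumption keeps the overall trade-off from suffering the curse of dimensionality.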