Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Neural Information Processing Systems
Understanding the fundamental principles behind the success of deep neural networks is one of the most important open questions in the current literature. To this end, we study the training problem of deep neural networks and introduce an analytic approach to unveil hidden convexity in the optimization landscape. We consider a deep parallel ReLU network architecture, which includes standard deep networks and ResNets as special cases. We then show that the path-regularized training problem can be represented as an exact convex optimization problem. We further prove that the equivalent convex problem is regularized via a group sparsity inducing norm.
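To make the setup concrete, the following is a minimal sketch of a parallel ReLU network (a sum of independent two-layer ReLU branches) trained with a path-norm-style penalty that multiplies the magnitudes of weights along each input-to-output path. The penalty shown is an illustrative path regularizer of this general form, not necessarily the exact regularizer analyzed in the paper; all variable names and dimensions here are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Parallel ReLU architecture: the output is the sum of K independent
# two-layer ReLU branches. Branch k has hidden weights W1[k] and
# output weights w2[k]. (K, d, m are arbitrary example sizes.)
K, d, m = 4, 3, 5
W1 = rng.standard_normal((K, m, d))
w2 = rng.standard_normal((K, m))

def forward(x):
    # f(x) = sum_k  w2[k]^T relu(W1[k] x)
    return sum(w2[k] @ relu(W1[k] @ x) for k in range(K))

def path_penalty(W1, w2):
    # Illustrative path-norm-style regularizer: for each hidden unit,
    # penalize |outgoing weight| times the norm of its incoming weight
    # row, summed over all branches and units.
    return sum(
        np.abs(w2[k, j]) * np.linalg.norm(W1[k, j])
        for k in range(K) for j in range(m)
    )

# Path-regularized squared loss on a single example (x, y).
x = rng.standard_normal(d)
y = 1.0
lam = 0.1
loss = (forward(x) - y) ** 2 + lam * path_penalty(W1, w2)
```

The paper's result is that minimizing an objective of this path-regularized form over the network weights is equivalent to an exact convex program whose regularizer is a group-sparsity-inducing norm, so the non-convex weight-space problem inherits a convex structure.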