Complexity of One-Dimensional ReLU DNNs
Jonathan Kogan, Hayden Jananthan, Jeremy Kepner
Abstract--We study the expressivity of one-dimensional (1D) ReLU deep neural networks through the lens of their linear regions. We also propose a function-adaptive notion of sparsity that compares the expected number of regions a network uses to the minimal number needed to approximate a target function within a fixed tolerance. Deep Neural Networks (DNNs) with Rectified Linear Unit (ReLU) activation functions are piecewise-linear functions, so their expressive power can be studied via the number of linear regions they create [1]-[3]. However, approximating a complicated target function to a given accuracy typically requires many regions, and hence substantial computational resources. The Lottery Ticket Hypothesis holds that many connections can often be removed from a trained network while maintaining similar performance, motivating the study of sparse DNNs [7].
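As a concrete illustration of the linear-region viewpoint, the sketch below (ours, not from the paper; the layer widths, the interval [-1, 1], and the grid size are arbitrary assumptions) builds a random 1D ReLU network in NumPy and estimates its region count by tracking where the joint hidden-unit activation pattern changes along a dense input grid.

import numpy as np

rng = np.random.default_rng(0)

def init_mlp(widths):
    # Random weights/biases for a 1D-input, 1D-output ReLU MLP.
    # Hidden widths are illustrative assumptions, not the paper's setup.
    params = []
    dims = [1] + widths + [1]
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        params.append((rng.normal(size=(d_out, d_in)),
                       rng.normal(size=(d_out,))))
    return params

def count_regions(params, lo=-1.0, hi=1.0, n=200_000):
    # Estimate the number of linear regions on [lo, hi] by sampling a
    # dense grid and counting points where the joint ReLU activation
    # pattern of all hidden units changes.
    x = np.linspace(lo, hi, n)
    h = x[:, None]                       # shape (n, 1)
    pattern = np.zeros((n, 0), dtype=bool)
    for W, b in params[:-1]:             # hidden layers only
        z = h @ W.T + b
        pattern = np.hstack([pattern, z > 0])
        h = np.maximum(z, 0.0)           # ReLU
    # A region boundary lies between consecutive grid points whose
    # activation patterns differ.
    changes = np.any(pattern[1:] != pattern[:-1], axis=1)
    return 1 + int(changes.sum())

params = init_mlp([8, 8])                # two hidden layers of width 8
print("estimated linear regions on [-1, 1]:", count_regions(params))

Since distinct activation patterns can occasionally realize the same affine piece, this counts activation regions, giving an upper bound on the number of linear regions at the chosen grid resolution.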
Dec-10-2025