An in-depth look at approximation via deep and narrow neural networks
Dommel, Joris, Wegner, Sven A.
arXiv.org Artificial Intelligence
In 2017, Hanin and Sellke showed that the class of arbitrarily deep, real-valued, feed-forward, ReLU-activated networks of width w forms a dense subset of the space of continuous functions on R^n, with respect to the topology of uniform convergence on compact sets, if and only if w > n. To show necessity, a concrete counterexample function f: R^n -> R was used. In this note we approximate this very f by neural networks in the two cases w = n and w = n+1, i.e., on either side of the aforementioned threshold. We study how the approximation quality behaves as the depth varies, and what effect (spoiler alert: dying neurons) causes that behavior.
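The dying-neuron effect mentioned above can be illustrated with a minimal NumPy sketch (all names and the random initialization scheme here are assumptions for illustration, not the authors' setup): a deep, narrow ReLU network where a neuron counts as "dead" if its ReLU output is zero on every sampled input.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def make_network(n_in, width, depth, scale=1.0):
    """Randomly initialized deep, narrow ReLU network R^n_in -> R.
    (Hypothetical Gaussian initialization, for illustration only.)"""
    layers = []
    d_prev = n_in
    for _ in range(depth):
        W = rng.normal(0.0, scale / np.sqrt(d_prev), size=(width, d_prev))
        b = rng.normal(0.0, scale, size=width)
        layers.append((W, b))
        d_prev = width
    W_out = rng.normal(0.0, scale / np.sqrt(d_prev), size=(1, d_prev))
    return layers, W_out

def forward_count_dead(layers, W_out, X):
    """Forward pass over a batch X; a hidden neuron is 'dead' if its
    ReLU activation is zero for every input in the batch."""
    n_dead = 0
    H = X
    for W, b in layers:
        H = relu(H @ W.T + b)
        n_dead += int(np.sum(H.max(axis=0) == 0.0))
    return H @ W_out.T, n_dead

# n = 2 inputs, width w = n + 1 = 3, depth 40 (parameters chosen arbitrarily)
X = rng.uniform(-1.0, 1.0, size=(512, 2))
layers, W_out = make_network(2, 3, 40)
_, n_dead = forward_count_dead(layers, W_out, X)
print(n_dead)  # neurons that never activate on the sample
```

At width w = n + 1 the network is barely above the density threshold, so even a few dead neurons in a layer can collapse the representation; counting them along the depth, as above, is one simple way to observe the behavior the note studies.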
Oct-9-2025