Universal approximation results for neural networks with non-polynomial activation function over non-compact domains
Neufeld, Ariel, Schmocker, Philipp
More precisely, by assuming that the activation function is non-polynomial, we derive universal approximation results for neural networks within function spaces over non-compact subsets of a Euclidean space, e.g., weighted spaces and $L^p$-spaces. Furthermore, we provide dimension-independent rates for approximating a function with sufficiently regular and integrable Fourier transform by neural networks with non-polynomial activation function.

Inspired by the functionality of the human brain, (artificial) neural networks were introduced in the seminal work of McCulloch and Pitts (see [32]). Fundamentally, a neural network consists of nodes arranged in hierarchical layers, where the connections between adjacent layers transmit the data through the network and the nodes transform this information. In mathematical terms, a neural network can therefore be described as a concatenation of affine and non-affine functions. Nowadays, neural networks are successfully applied in fields such as image classification (see e.g.
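The description of a network as a concatenation of affine and non-affine maps can be made concrete with a one-hidden-layer network $\varphi(x) = \sum_{j=1}^{n} a_j\,\sigma(w_j \cdot x + b_j)$, where $\sigma$ is a non-polynomial activation such as $\tanh$. The following minimal sketch (not from the paper; the random-feature fitting procedure and all parameter choices are illustrative assumptions) approximates $\sin$ on $[-\pi, \pi]$ by drawing the hidden affine map at random and fitting the outer affine map by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                  # number of hidden neurons (assumed)
W = rng.normal(size=(n, 1))             # hidden-layer weights (first affine map)
b = rng.normal(size=n)                  # hidden-layer biases

# Evaluation grid and hidden-layer features: affine map followed by
# the non-polynomial activation tanh.
xs = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
H = np.tanh(xs @ W.T + b)               # shape (200, n)

# Second affine map: fit the outer weights a_j by least squares
# against the target function f(x) = sin(x).
a, *_ = np.linalg.lstsq(H, np.sin(xs).ravel(), rcond=None)

err = np.max(np.abs(H @ a - np.sin(xs).ravel()))
print(f"max abs error: {err:.2e}")
```

Because $\tanh$ is non-polynomial, such shallow networks are dense in the relevant function spaces; in this sketch even a modest number of randomly drawn neurons yields a small uniform error on the interval.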
Oct-23-2024