Linear Independence of Generalized Neurons and Related Functions
The linear independence of neurons plays a significant role in the theoretical analysis of neural networks. Specifically, given neurons $H_1, \dots, H_n: \mathbb{R}^N \times \mathbb{R}^d \to \mathbb{R}$, we are interested in the following question: when are $\{H_1(\theta_1, \cdot), \dots, H_n(\theta_n, \cdot)\}$ linearly independent as the parameters $\theta_1, \dots, \theta_n$ of these functions vary over $\mathbb{R}^N$? Previous works give a complete characterization for two-layer neurons without bias, for generic smooth activation functions. In this paper, we study the problem for neurons with arbitrary layers and widths, giving a simple but complete characterization for generic analytic activation functions.
arXiv.org Artificial Intelligence
Sep-22-2024
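The question in the abstract has a concrete numerical counterpart: a family of functions is linearly independent exactly when, for some finite set of evaluation points, the matrix of their values has full row rank. The following is a minimal Python sketch, not from the paper, illustrating this for two-layer neurons $x \mapsto \tanh(\langle w, x \rangle)$ without bias; the tanh activation, the random-sampling rank check, and the helper names `neuron` and `independence_rank` are all assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch (assumption, not the paper's method): probe linear
# independence of two-layer neurons x -> tanh(<w_i, x>) by evaluating them
# at random points and checking the rank of the evaluation matrix.
# Full row rank certifies linear independence; a rank deficit at sampled
# points only suggests (does not prove) linear dependence.

def neuron(w, x):
    """Two-layer neuron without bias: H(w, x) = tanh(<w, x>)."""
    return np.tanh(x @ w)

def independence_rank(weights, d, num_samples=200, seed=0):
    """Rank of the n x num_samples matrix [H(w_i, x_j)]."""
    rng = np.random.default_rng(seed)
    xs = rng.standard_normal((num_samples, d))       # random evaluation points
    M = np.stack([neuron(w, xs) for w in weights])   # shape (n, num_samples)
    return np.linalg.matrix_rank(M)

d = 3
# Non-proportional weight vectors: expect full rank, i.e. independence.
ws = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0]),
      np.array([1.0, 1.0, 0.0])]
print(independence_rank(ws, d))      # 3 -> linearly independent

# tanh is odd, so tanh(-<w, x>) = -tanh(<w, x>): expect a rank drop.
ws_dep = [np.array([1.0, 0.0, 0.0]),
          np.array([-1.0, 0.0, 0.0])]
print(independence_rank(ws_dep, d))  # 1 -> linearly dependent
```

The second example shows why the parameterization matters: for an odd activation such as tanh, opposite weight vectors give linearly dependent neurons, which is the kind of degenerate configuration a complete characterization has to exclude.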