Half-Layered Neural Networks
arXiv.org Artificial Intelligence
We propose a "half" layer of hidden units in which some weights are randomly set and some are trained. A half unit is composed of two stages: first, it takes a weighted sum of its inputs with fixed random weights; second, the total activation is multiplied and then translated using two modifiable weights, before the result is passed through a nonlinearity. The number of modifiable weights per hidden unit is thus two and does not depend on the fan-in. We show how such half units can be used in the first or any later layer of a deep network, possibly following convolutional layers. Our experiments on the MNIST and FashionMNIST data sets indicate the promise of half layers: we can achieve reasonable accuracy with a reduced number of parameters, owing to the regularizing effect of the randomized connections.
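The two-stage half unit described above can be sketched as follows. This is a minimal NumPy illustration based only on the abstract, not the authors' code; the class and parameter names (`W_fixed`, `a`, `b`) are our own, and tanh is assumed as the nonlinearity for concreteness.

```python
import numpy as np

class HalfLayer:
    """Sketch of a "half" layer: fixed random projection followed by a
    per-unit scale-and-translate with only two trainable weights per unit.
    (Names and the choice of tanh are illustrative assumptions.)"""

    def __init__(self, n_in, n_hidden, seed=None):
        rng = np.random.default_rng(seed)
        # Stage 1: fixed random weights, never updated during training.
        self.W_fixed = rng.standard_normal((n_in, n_hidden)) / np.sqrt(n_in)
        # Stage 2: two modifiable weights per hidden unit, independent of fan-in.
        self.a = np.ones(n_hidden)   # multiplicative (scale)
        self.b = np.zeros(n_hidden)  # additive (translation)

    def trainable_parameter_count(self):
        # Exactly 2 * n_hidden, regardless of the number of inputs.
        return self.a.size + self.b.size

    def forward(self, x):
        z = x @ self.W_fixed                  # weighted sum with fixed random weights
        return np.tanh(self.a * z + self.b)   # scale, translate, then nonlinearity
```

For example, a half layer with 784 inputs (flattened MNIST) and 128 hidden units has only 256 trainable weights, versus 100,352 for a fully trained dense layer of the same width.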
Jun-6-2025