Neural Computing with Small Weights
Siu, Kai-Yeung, Bruck, Jehoshua
An important issue in neural computation is the dynamic range of the weights in neural networks. Many experimental results on learning indicate that the weights in the networks can grow prohibitively large with the size of the inputs. Here we address this issue by studying the tradeoffs between the depth and the size of weights in polynomial-size networks of linear threshold elements (LTEs). We show that there is an efficient way of simulating a network of LTEs with large weights by a network of LTEs with small weights. To prove these results, we use tools from harmonic analysis of Boolean functions. Our technique is quite general; it provides insights into some other problems as well.
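As a point of reference, the following is a minimal sketch (illustrative, not taken from the paper) of a single LTE under its standard definition: Boolean inputs encoded as -1/+1, and an output of +1 exactly when the weighted sum reaches the threshold. The function name and the specific weights are assumptions chosen for the example.

```python
# Illustrative sketch of a linear threshold element (LTE):
# output +1 if the weighted sum of the inputs reaches the threshold t,
# and -1 otherwise. Weights and threshold are toy values.

def lte(x, w, t):
    """Linear threshold element: sign of (w . x - t)."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= t else -1

# Majority of three inputs needs only unit weights and threshold 0:
print(lte([1, 1, -1], w=[1, 1, 1], t=0))  # -> 1 (two of three inputs are +1)

# In contrast, a single LTE comparing two n-bit numbers is naturally
# written with weights 2^0, 2^1, ..., 2^(n-1), which grow exponentially
# in n -- the kind of dynamic-range growth the abstract refers to.
```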
On the Power of Neural Networks for Solving Hard Problems
Bruck, Jehoshua, Goodman, Joseph W.
The neural network model is a discrete-time system that can be represented by a weighted and undirected graph. There is a weight attached to each edge of the graph and a threshold value attached to each node (neuron) of the graph. The order of the network is the number of nodes in the corresponding graph.
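To make the model concrete, here is a minimal sketch (illustrative, not from the paper) of one synchronous update step in such a network: node states are taken in {-1, +1}, the weight matrix is symmetric because the graph is undirected, and each node outputs +1 exactly when its weighted input sum reaches its threshold. The sign-of-weighted-sum rule is the standard update for this class of networks; an asynchronous variant, where one node updates at a time, is a common alternative.

```python
import numpy as np

# Illustrative sketch of one synchronous update step in the model above:
# a weighted undirected graph (symmetric weight matrix W) with a
# threshold t[i] at each node, and node states in {-1, +1}.

def step(x, W, t):
    """Update every node at once: +1 if its weighted input sum
    meets its threshold, else -1."""
    s = W @ x - t
    return np.where(s >= 0, 1, -1)

# A network of order 3 (three nodes), with zero thresholds.
W = np.array([[ 0, 1, -1],
              [ 1, 0,  1],
              [-1, 1,  0]])  # symmetric: the graph is undirected
t = np.zeros(3)
x = np.array([1, -1, 1])
print(step(x, W, t))  # -> [-1  1 -1]
```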