The Capacity of a Bump
–Neural Information Processing Systems
Recently, several researchers have reported encouraging experimental results when using Gaussian or bump-like activation functions in multilayer perceptrons. Networks of this type usually require fewer hidden layers and units and often learn much faster than typical sigmoidal networks. To explain these results we consider a hyper-ridge network, which is a simple perceptron with no hidden units and a ridge activation function. If we are interested in partitioning p points in d dimensions into two classes, then in the limit as d approaches infinity the capacity of a hyper-ridge and a perceptron is identical.
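The capacity comparison in the abstract is measured against the classical perceptron baseline. As a minimal sketch (not the paper's hyper-ridge derivation), Cover's 1965 counting function gives the number of dichotomies of p points in general position that a simple perceptron can realize, from which the well-known capacity of 2 points per dimension follows:

```python
from math import comb

def cover_dichotomies(p: int, d: int) -> int:
    """Number of linearly separable dichotomies of p points in
    general position in d dimensions (Cover's function)."""
    return 2 * sum(comb(p - 1, k) for k in range(d))

# At p = 2d points, exactly half of all 2**p dichotomies are
# linearly separable, which is why the perceptron's capacity
# is said to be 2d points.
d = 2
p = 2 * d
fraction = cover_dichotomies(p, d) / 2 ** p
print(fraction)  # 0.5
```

The paper's claim is that, as d grows, the corresponding counting function for a hyper-ridge unit converges to this same perceptron capacity.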
Dec-31-1996