The Ni1000: High Speed Parallel VLSI for Implementing Multilayer Perceptrons
In this paper we present a new version of the standard multilayer perceptron (MLP) algorithm for the state of the art in neural network VLSI implementations: the Intel Ni1000. This new version of the MLP exploits a fundamental property of high-dimensional spaces that allows the l2-norm to be accurately approximated by the l1-norm. This approach enables the standard MLP to utilize the parallel architecture of the Ni1000 to achieve on the order of 40,000 256-dimensional classifications per second.

The Nestor/Intel radial basis function neural chip (Ni1000) contains the equivalent of 1024 256-dimensional artificial digital neurons and can perform at least 40,000 classifications per second [Sullivan, 1993]. To attain this speed, the Ni1000 was designed to calculate "city block" distances (i.e. the l1-norm) and thus to avoid the large number of multiplication units that would be required to calculate Euclidean dot products in parallel. The Ni1000 is therefore ideally suited to perform both the RCE [Reilly et al., 1982] and PRCE [Scofield et al., 1987] algorithms, or any of the other commonly used radial basis function (RBF) algorithms.
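The high-dimensional property the abstract relies on can be checked numerically: for vectors with i.i.d. components, the l1-norm concentrates around a constant multiple of the l2-norm as the dimension grows. The sketch below assumes standard Gaussian inputs, for which the scaling constant is sqrt(pi/(2d)); it is an illustration of the norm-approximation idea, not the Ni1000's actual calibration.

```python
import numpy as np

# For x with i.i.d. N(0, 1) components, E|x_i| = sqrt(2/pi), so
# ||x||_1 ≈ d * sqrt(2/pi) while ||x||_2 ≈ sqrt(d).  Rescaling the
# l1-norm by sqrt(pi/(2d)) should therefore recover the l2-norm
# up to fluctuations that shrink as the dimension d grows.
rng = np.random.default_rng(0)
d = 256                                  # the Ni1000's input dimensionality
x = rng.standard_normal((10_000, d))     # 10,000 random 256-dim vectors

l1 = np.abs(x).sum(axis=1)
l2 = np.sqrt((x ** 2).sum(axis=1))
ratio = l1 * np.sqrt(np.pi / (2 * d)) / l2

# The ratio clusters tightly around 1: the rescaled l1-norm is an
# accurate stand-in for the l2-norm in 256 dimensions.
print(ratio.mean(), ratio.std())
```

At d = 256 the relative spread is only a few percent, which is why replacing Euclidean distances with city-block distances costs so little accuracy while removing the need for parallel multipliers.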
A Self-Driving Car Company Bets on Mall Shuttles and Monster Trucks
Like early mammals scuttering between the legs of tyrannosaurs, a lot of little companies are trying to weave around--and maybe even outlast--the big boys of self-driving technology. One such example is Perrone Robotics, a small Virginia company that has developed a self-driving package that it says can be quickly adapted to any vehicle. This Swiss Army knife of an AI can give smarts to an existing car, shuttle bus, or truck--even the gargantuan trucks used in mining. Tiny shuttles and behemoth trucks sell in small numbers, and equipping them to drive themselves is beneath the dignity of major players, like Alphabet's Waymo and General Motors' Cruise Automation. "What we're doing, certainly Waymo and GM Cruise could do, but they are focused on their own agenda. This is our niche, and we are going where we can add real value," says David Hofert, the chief marketing officer at Perrone Robotics.
- Oceania > Australia (0.05)
- North America > United States > Virginia > Albemarle County (0.05)
- North America > United States > Michigan (0.05)
- (4 more...)
- Transportation > Ground > Road (1.00)
- Materials > Metals & Mining (0.74)
- Information Technology > Robotics & Automation (0.73)
- (2 more...)
Competition Among Networks Improves Committee Performance
Munro, Paul W., Parmanto, Bambang
ABSTRACT The separation of generalization error into two types, bias and variance (Geman, Bienenstock, Doursat, 1992), leads to the notion of error reduction by averaging over a "committee" of classifiers (Perrone, 1993). Committee performance degrades both with the average error of the constituent classifiers and with the degree to which their misclassifications are correlated across the committee. Here, a method for reducing those correlations is introduced that uses a winner-take-all procedure, similar to competitive learning, to drive the individual networks to different minima in weight space with respect to the training set, such that correlations in generalization performance are reduced, thereby reducing committee error.

1 INTRODUCTION The problem of constructing a predictor can generally be viewed as finding the right combination of bias and variance (Geman, Bienenstock, Doursat, 1992) to reduce the expected error. Since a neural network predictor inherently has an excessive number of parameters, reducing the prediction error is usually done by reducing variance. Methods for reducing neural network complexity can be viewed as a regularization technique to reduce this variance. Examples of such methods are Optimal Brain Damage (Le Cun et al.).
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Asia > Middle East > Jordan (0.04)
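The abstract's claim that committee error grows with the correlation of members' mistakes can be illustrated with a toy simulation (an assumed setup, not the paper's experiments): averaging N unbiased predictors cuts the error variance by a factor of N when the errors are independent, but most of that benefit disappears as the errors become correlated.

```python
import numpy as np

# Toy committee of n_members unbiased predictors, each with error
# variance 1.0.  The committee's prediction error is the mean of the
# members' errors, so its MSE is var/N for independent errors but
# rho*var + (1-rho)*var/N when the pairwise error correlation is rho.
rng = np.random.default_rng(1)
n_trials, n_members = 100_000, 10

# Case 1: uncorrelated errors -- each member errs independently.
indep = rng.normal(0.0, 1.0, (n_trials, n_members))
mse_indep = (indep.mean(axis=1) ** 2).mean()

# Case 2: correlated errors -- a shared error component plus a small
# individual one, giving pairwise correlation rho between members.
rho = 0.8
shared = rng.normal(0.0, 1.0, (n_trials, 1))
own = rng.normal(0.0, 1.0, (n_trials, n_members))
corr = np.sqrt(rho) * shared + np.sqrt(1 - rho) * own
mse_corr = (corr.mean(axis=1) ** 2).mean()

print(mse_indep)   # ≈ 1/10 = 0.1
print(mse_corr)    # ≈ 0.8 + 0.2/10 = 0.82
```

With rho = 0.8 the ten-member committee retains most of a single member's error, which is exactly why the paper's competitive procedure for decorrelating the networks pays off.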
Putting It All Together: Methods for Combining Neural Networks
The past several years have seen a tremendous growth in the complexity of the recognition, estimation and control tasks expected of neural networks. In solving these tasks, one is faced with a large variety of learning algorithms and a vast selection of possible network architectures. After all the training, how does one know which is the best network? This decision is further complicated by the fact that standard techniques can be severely limited by problems such as over-fitting, data sparsity and local optima. The usual solution to these problems is a winner-take-all cross-validatory model selection. However, recent experimental and theoretical work indicates that we can improve performance by considering methods for combining neural networks.
- North America > United States > Rhode Island > Providence County > Providence (0.05)
- North America > United States > California > Alameda County > Berkeley (0.05)
- Asia > Middle East > Jordan (0.05)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.97)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.34)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.34)