Generalization Abilities of Cascade Network Architecture
In [5], a new incremental cascade network architecture has been presented. This paper discusses the properties of such cascade networks and investigates their generalization abilities under the particular constraint of small data sets. The evaluation is done for cascade networks consisting of local linear maps, using the Mackey-Glass time series prediction task as a benchmark. Our results indicate that, to bring the potential of large networks to bear on the problem of extracting information from small data sets without running the risk of overfitting, deeply cascaded network architectures are more favorable than shallow broad architectures that contain the same number of nodes. 1 Introduction For many real-world applications, a major constraint for successful learning from examples is the limited number of examples available. Thus, methods are required that can learn from small data sets.
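For readers who want to reproduce the benchmark setting, the following is a minimal sketch of the Mackey-Glass series the abstract refers to, generated by Euler integration of the standard delay differential equation. The parameter values and the prediction setup in the final comment are the commonly used benchmark conventions, not taken from the paper itself.

```python
import numpy as np

def mackey_glass(n_samples=1000, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0):
    """Generate the Mackey-Glass delay series by Euler integration of
    dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)."""
    history = int(tau / dt)
    x = np.full(history + n_samples, 1.2)  # constant initial history
    for t in range(history, history + n_samples - 1):
        x_tau = x[t - history]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x[t])
    return x[history:]

series = mackey_glass()
# Common benchmark setup: predict x(t+85) from (x(t), x(t-6), x(t-12), x(t-18)).
```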
Computation of Heading Direction from Optic Flow in Visual Cortex
Lappe, Markus, Rauschecker, Josef P.
We have designed a neural network which detects the direction of egomotion from optic flow in the presence of eye movements (Lappe and Rauschecker, 1993). The performance of the network is consistent with human psychophysical data, and its output neurons show great similarity to "triple component" cells in area MSTd of monkey visual cortex. We now show that by using assumptions about the kind of eye movements that the observer is likely to perform, our model can generate various other cell types found in MSTd as well.
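The optic flow input such a model operates on can be written down explicitly. Below is a minimal sketch assuming the standard Longuet-Higgins and Prazdny pinhole-camera flow equations (not code from the paper; sign conventions vary): each image point's motion is the sum of a depth-dependent translational component, which encodes the heading, and a depth-independent rotational component caused by eye movements.

```python
import numpy as np

def flow_field(points, Z, T, omega, f=1.0):
    """Image motion of points at depths Z under observer translation T
    and eye rotation omega (Longuet-Higgins/Prazdny pinhole geometry)."""
    x, y = points[:, 0], points[:, 1]
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (-f * Tx + x * Tz) / Z + (x * y * wx / f - (f + x**2 / f) * wy + y * wz)
    v = (-f * Ty + y * Tz) / Z + ((f + y**2 / f) * wx - x * y * wy / f - x * wz)
    return np.stack([u, v], axis=1)

# Random scene: the network's task is to recover the heading T/|T| from (u, v).
pts = np.random.uniform(-0.5, 0.5, (100, 2))
depths = np.random.uniform(2.0, 10.0, 100)
flow = flow_field(pts, depths, T=(0.1, 0.0, 1.0), omega=(0.0, 0.02, 0.0))
```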
Using hippocampal 'place cells' for navigation, exploiting phase coding
Burgess, Neil, O'Keefe, John, Recce, Michael
The firing of CA1 place cells is simulated as the (artificial) rat moves in an environment. This is the input for a neuronal network whose output, at each theta (θ) cycle, is the next direction of travel for the rat. Cells are characterised by the number of spikes fired and the time of firing with respect to the hippocampal θ rhythm. 'Learning' occurs in 'on-off' synapses that are switched on by simultaneous pre- and post-synaptic activity. The results are compared with single unit recordings and behavioural data.
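As an illustration of the phase coding the abstract exploits, here is a toy sketch; all parameters and the linear precession rule are assumptions, not the paper's model. A simulated place cell fires at a rate set by position and at a theta phase that advances as the rat crosses the field.

```python
import numpy as np

def place_cell_spikes(positions, centre, width, theta_freq=8.0, dt=0.001):
    """Toy place cell: Gaussian rate bump over position, with spike phase
    precessing through the theta cycle as the field is crossed (O'Keefe &
    Recce style phase coding; all numbers illustrative)."""
    spikes = []
    for i, x in enumerate(positions):
        t = i * dt
        rate = 20.0 * np.exp(-((x - centre) ** 2) / (2 * width**2))  # Hz
        # preferred phase shifts from 2*pi (field entry) to 0 (field exit)
        frac = np.clip((x - (centre - width)) / (2 * width), 0.0, 1.0)
        pref_phase = 2 * np.pi * (1.0 - frac)
        theta_phase = (2 * np.pi * theta_freq * t) % (2 * np.pi)
        # spiking is most likely when theta phase matches the preferred phase
        p = rate * dt * (1 + np.cos(theta_phase - pref_phase)) / 2
        if np.random.rand() < p:
            spikes.append((t, theta_phase))
    return spikes

spikes = place_cell_spikes(np.linspace(0.0, 1.0, 2000), centre=0.5, width=0.1)
```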
Learning Spatio-Temporal Planning from a Dynamic Programming Teacher: Feed-Forward Neurocontrol for Moving Obstacle Avoidance
Fahner, Gerald, Eckmiller, Rolf
The action network is embedded in a sensory-motoric system architecture that contains a separate world model. It is continuously fed with short-term predicted spatio-temporal obstacle trajectories, and receives robot state feedback. The action net allows for external switching between alternative planning tasks. It generates goal-directed motor actions, subject to the robot's kinematic and dynamic constraints, such that collisions with moving obstacles are avoided. Using supervised learning, we distribute examples of the optimal planner mapping over a structure-level adapted, parsimonious higher-order network. The training database is generated by a Dynamic Programming algorithm. Extensive simulations reveal that the local planner mapping is highly nonlinear, but can be effectively and sparsely represented by the chosen powerful net model. Excellent generalization occurs for unseen obstacle configurations. We also discuss the limitations of feed-forward neurocontrol for growing planning horizons.
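The Dynamic Programming teacher can be pictured as follows. This sketch runs value iteration on a 2-D grid to label states with optimal costs-to-go, a generic stand-in for the paper's planner; the grid abstraction, cost model, and function names are illustrative assumptions.

```python
import numpy as np

def dp_teacher(cost, goal, n_iters=200):
    """Value iteration on a 2-D grid. The resulting value function labels
    each state with its cost-to-go; the optimal action at a state is the
    move into the lowest-valued neighbour, and (state, action) pairs of
    this kind would form the supervised training database."""
    H, W = cost.shape
    V = np.full((H, W), np.inf)
    V[goal] = 0.0
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(n_iters):
        for r in range(H):
            for c in range(W):
                if (r, c) == goal:
                    continue
                best = min(
                    V[r + dr, c + dc] + cost[r, c]
                    for dr, dc in moves
                    if 0 <= r + dr < H and 0 <= c + dc < W
                )
                V[r, c] = min(V[r, c], best)
    return V

grid_cost = np.ones((10, 10))  # unit step cost; obstacles could be np.inf
values = dp_teacher(grid_cost, goal=(9, 9))
```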
Transient Signal Detection with Neural Networks: The Search for the Desired Signal
Príncipe, José Carlos, Zahalka, Abir
Matched filtering has been one of the most powerful techniques employed for transient detection. Here we will show that a dynamic neural network outperforms the conventional approach. When the artificial neural network (ANN) is trained with supervised learning schemes, there is a need to supply the desired signal for all time, although we are only interested in detecting the transient. In this paper we also show the effects of different strategies to construct the desired signal on detection performance. The extension of the Bayes decision rule (0/1 desired signal), optimal in static classification, performs worse than desired signals constructed by random noise or prediction during the background. 1 INTRODUCTION Detection of poorly defined waveshapes in a nonstationary high-noise background is an important and difficult problem in signal processing.
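The compared strategies amount to different constructions of the target d(t) during the background portion of the record. A hedged sketch of two of them is below; the amplitudes are assumptions, and the third, prediction-based variant (noted in a comment) is not reproduced here.

```python
import numpy as np

def desired_signals(n, onset, offset):
    """Two ways to build the desired signal d(t) for supervised training;
    the transient occupies samples [onset, offset). A third strategy from
    the paper replaces the background target with a prediction of the
    background signal itself."""
    d01 = np.zeros(n)                    # 0/1 (Bayes-rule style) target
    d01[onset:offset] = 1.0

    d_noise = 0.1 * np.random.randn(n)   # random noise during background
    d_noise[onset:offset] = 1.0

    return d01, d_noise

d01, d_noise = desired_signals(n=1000, onset=400, offset=430)
```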
A Method for Learning From Hints
We address the problem of learning an unknown function by putting together several pieces of information (hints) that we know about the function. We introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated for new types of hints. All the hints are represented to the learning process by examples, and examples of the function are treated on equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a given schedule. We present two types of schedules: fixed schedules that specify the relative emphasis of each hint, and adaptive schedules that are based on how well each hint has been learned so far. Our learning method is compatible with any descent technique that we may choose to use.
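A fixed schedule can be read simply as a sampling distribution over hints, and an adaptive schedule as reweighting by current per-hint error. The sketch below illustrates that reading; the hint names and the error-proportional weighting rule are illustrative assumptions, not the paper's exact schedules.

```python
import random

def pick_hint(schedule, errors=None):
    """Select which hint supplies the next training example.
    'schedule' maps hint name -> fixed weight; if per-hint error estimates
    are supplied, weight by error instead (a crude adaptive schedule)."""
    weights = errors if errors is not None else schedule
    names = list(weights)
    total = sum(weights[h] for h in names)
    return random.choices(names, [weights[h] / total for h in names])[0]

schedule = {"function_examples": 0.5, "invariance_hint": 0.3, "monotonicity_hint": 0.2}
hint = pick_hint(schedule)                                  # fixed schedule
hint = pick_hint(schedule, errors={"function_examples": 0.4,
                                   "invariance_hint": 0.1,
                                   "monotonicity_hint": 0.05})  # adaptive
```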
Learning Curves, Model Selection and Complexity of Neural Networks
Murata, Noboru, Yoshizawa, Shuji, Amari, Shun-ichi
Learning curves show how a neural network improves as the number of training examples increases, and how this improvement is related to the network complexity. The present paper clarifies the asymptotic properties of, and the relation between, two learning curves: one concerning the predictive (generalization) loss and the other the training loss. The result gives a natural definition of the complexity of a neural network. Moreover, it provides a new criterion for model selection.
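In the paper's asymptotic regime the two curves take a simple, symmetric form; the notation below is simplified, and the precise definition of the effective complexity m* in terms of Fisher-information-like matrices is given in the paper.

```latex
% t = number of training examples, L_0 = loss of the best network in the
% model class, m^* = effective complexity of the network.
\begin{align*}
  \langle L_{\mathrm{pred}}(t) \rangle  &\approx L_0 + \frac{m^*}{2t}, \\
  \langle L_{\mathrm{train}}(t) \rangle &\approx L_0 - \frac{m^*}{2t}.
\end{align*}
% The gap between the curves is m^*/t, which motivates an AIC-like
% model-selection rule: choose the network minimizing
%   \mathrm{NIC} = L_{\mathrm{train}} + m^*/t .
```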