Time Warping Invariant Neural Networks

Neural Information Processing Systems

We propose a model of Time Warping Invariant Neural Networks (TWINN) to handle time-warped continuous signals. Although TWINN is a simple modification of the well-known recurrent neural network, analysis has shown that TWINN completely removes time warping and is able to handle difficult classification problems. It is also shown that TWINN has certain advantages over the currently available sequential processing schemes: Dynamic Programming (DP) [1], Hidden Markov Models (HMM) [2], Time Delayed Neural Networks (TDNN) [3] and Neural Network Finite Automata (NNFA) [4]. We also analyzed the time continuity employed in TWINN and pointed out that this kind of structure can memorize a longer input history than NNFA. This may help to explain the well-accepted fact that, for learning grammatical inference with NNFA, one has to start with very short strings in the training set. The numerical example we used is a trajectory classification problem. This problem, featuring variable sampling rates, internal states, continuous dynamics, heavily time-warped data and deformed phase-space trajectories, is shown to be difficult for the other schemes. With TWINN this problem was learned in 100 iterations. As a benchmark we also trained TDNN on the exact same problem, and it failed completely, as expected.
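The abstract does not spell out the update rule, but the warp-invariance idea can be sketched concretely: let the recurrent state advance in proportion to the distance traversed in input space rather than per time step, so that re-sampling the signal leaves the state trajectory unchanged. A minimal NumPy sketch, assuming a tanh recurrent cell; the function and weight names are illustrative, not the paper's:

import numpy as np

def twinn_step(x, u, u_next, W_x, W_u, b):
    # Input-space increment: a time-warped (re-sampled) signal changes
    # how many steps are taken, but not the summed increments d.
    d = np.linalg.norm(u_next - u)
    f = np.tanh(W_x @ x + W_u @ u + b)   # ordinary recurrent update
    return x + d * (f - x)               # state moves by arc length, not time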


Holographic Recurrent Networks

Neural Information Processing Systems

Holographic Recurrent Networks (HRNs) are recurrent networks which incorporate associative memory techniques for storing sequential structure. HRNs can be easily and quickly trained using gradient descent techniques to generate sequences of discrete outputs and trajectories through continuous space. The performance of HRNs is found to be superior to that of ordinary recurrent networks on these sequence generation tasks. 1 INTRODUCTION The representation and processing of data with complex structure in neural networks remains a challenge. In a previous paper [Plate, 1991b] I described Holographic Reduced Representations (HRRs), which use circular convolution associative memory to embody sequential and recursive structure in fixed-width distributed representations. This paper introduces Holographic Recurrent Networks (HRNs), which are recurrent nets that incorporate these techniques for generating sequences of symbols or trajectories through continuous space.
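Circular convolution binding, the associative-memory operation behind HRRs, is compact enough to show directly. A minimal NumPy sketch, assuming random vectors with per-dimension variance 1/n; the vector length and names are illustrative:

import numpy as np

def cconv(a, b):
    # circular convolution: binds two vectors into one of the same width
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # circular correlation: approximate inverse of binding
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

n = 512
key, item = np.random.randn(2, n) / np.sqrt(n)
trace = cconv(key, item)          # store the pair in a fixed-width trace
retrieved = ccorr(key, trace)     # noisy reconstruction of item
print(np.dot(retrieved, item))    # close to 1: retrieval succeeded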


On-Line Estimation of the Optimal Value Function: HJB-Estimators

Neural Information Processing Systems

In this paper, we discuss on-line estimation strategies that model the optimal value function of a typical optimal control problem. We present a general strategy that uses local corridor solutions obtained via dynamic programming to provide local optimal control sequence training data for a neural architecture model of the optimal value function.
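The abstract leaves the estimator itself abstract; the supervised loop it implies can be sketched as regressing a value model onto cost-to-go targets computed by dynamic programming along the local corridors. A minimal sketch with a linear model standing in for the neural architecture; all names are illustrative:

import numpy as np

def fit_value_function(states, dp_cost_to_go, lr=0.01, epochs=200):
    # states: (m, d) visited states; dp_cost_to_go: (m,) DP targets
    w = np.zeros(states.shape[1])        # linear stand-in: V(s) = w . s
    for _ in range(epochs):
        residual = states @ w - dp_cost_to_go
        w -= lr * states.T @ residual / len(states)   # squared-error step
    return w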



Generalization Abilities of Cascade Network Architectures

Neural Information Processing Systems

In [5], a new incremental cascade network architecture has been presented. This paper discusses the properties of such cascade networks and investigates their generalization abilities under the particular constraint of small data sets. The evaluation is done for cascade networks consisting of local linear maps, using the Mackey-Glass time series prediction task as a benchmark. Our results indicate that, to bring the potential of large networks to bear on the problem of extracting information from small data sets without running the risk of overfitting, deeply cascaded network architectures are more favorable than shallow broad architectures that contain the same number of nodes. 1 Introduction For many real-world applications, a major constraint on successful learning from examples is the limited number of examples available. Thus, methods are required that can learn from small data sets.
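The Mackey-Glass benchmark named above is straightforward to reproduce. A minimal Euler-integration sketch with the standard chaotic parameters (tau = 17); the step size and constant initial history are illustrative choices:

import numpy as np

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10, dt=1.0, x0=1.2):
    # Integrates dx/dt = beta*x(t-tau)/(1 + x(t-tau)**p) - gamma*x(t)
    x = np.full(n + tau, x0)             # constant history for t < 0
    for t in range(tau, n + tau - 1):
        x_tau = x[t - tau]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau**p) - gamma * x[t])
    return x[tau:]

series = mackey_glass(1000)              # chaotic series to predict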




Using hippocampal 'place cells' for navigation, exploiting phase coding

Neural Information Processing Systems

These are compared with single-unit recordings and behavioural data. The firing of CA1 place cells is simulated as the (artificial) rat moves in an environment. This is the input for a neuronal network whose output, at each theta (θ) cycle, is the next direction of travel for the rat. Cells are characterised by the number of spikes fired and the time of firing with respect to the hippocampal θ rhythm. 'Learning' occurs in 'on-off' synapses that are switched on by simultaneous pre- and post-synaptic activity.
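The 'on-off' rule is a binary Hebbian switch and can be stated in a few lines. A minimal sketch, assuming one update per theta cycle; the array shapes and names are illustrative:

import numpy as np

def update_on_off(weights, pre_active, post_active):
    # A synapse switches on when its pre- and post-synaptic cells are
    # active in the same cycle; once on, it stays on.
    coincident = np.outer(post_active, pre_active).astype(bool)
    return np.logical_or(weights, coincident)

w = np.zeros((4, 6), dtype=bool)                     # post x pre synapses
w = update_on_off(w, pre_active=np.array([1, 0, 1, 0, 0, 1]),
                  post_active=np.array([0, 1, 1, 0]))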


Learning Spatio-Temporal Planning from a Dynamic Programming Teacher: Feed-Forward Neurocontrol for Moving Obstacle Avoidance

Neural Information Processing Systems

The action network is embedded in a sensorimotor system architecture that contains a separate world model. It is continuously fed with short-term predicted spatio-temporal obstacle trajectories, and receives robot state feedback. The action net allows for external switching between alternative planning tasks. It generates goal-directed motor actions, subject to the robot's kinematic and dynamic constraints, such that collisions with moving obstacles are avoided. Using supervised learning, we distribute examples of the optimal planner mapping over a structure-level adapted parsimonious higher-order network. The training database is generated by a Dynamic Programming algorithm. Extensive simulations reveal that the local planner mapping is highly nonlinear, but can be effectively and sparsely represented by the chosen powerful net model. Excellent generalization occurs for unseen obstacle configurations. We also discuss the limitations of feed-forward neurocontrol for growing planning horizons.
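The training scheme itself, fitting a feed-forward net to state-action examples labelled by a Dynamic Programming planner, can be sketched compactly. A minimal sketch with a plain one-hidden-layer net standing in for the paper's higher-order model; all names and sizes are illustrative:

import numpy as np

def train_action_net(states, dp_actions, hidden=32, lr=0.05, epochs=500):
    # states: (m, d) inputs; dp_actions: (m, k) targets from the DP teacher
    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 0.1, (states.shape[1], hidden))
    W2 = rng.normal(0.0, 0.1, (hidden, dp_actions.shape[1]))
    for _ in range(epochs):
        h = np.tanh(states @ W1)                   # hidden activations
        err = h @ W2 - dp_actions                  # output error
        grad_h = (err @ W2.T) * (1.0 - h**2)       # backprop through tanh
        W2 -= lr * h.T @ err / len(states)
        W1 -= lr * states.T @ grad_h / len(states)
    return W1, W2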


Transient Signal Detection with Neural Networks: The Search for the Desired Signal

Neural Information Processing Systems

Matched filtering has been one of the most powerful techniques employed for transient detection. Here we will show that a dynamic neural network outperforms the conventional approach. When the artificial neural network (ANN) is trained with supervised learning schemes there is a need to supply the desired signal for all time, although we are only interested in detecting the transient. In this paper we also show the effects of different strategies for constructing the desired signal on detection performance. The extension of the Bayes decision rule (0/1 desired signal), optimal in static classification, performs worse than desired signals constructed by random noise or prediction during the background. 1 INTRODUCTION Detection of poorly defined waveshapes in a nonstationary high-noise background is an important and difficult problem in signal processing.
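The conventional baseline named in the first sentence is easy to state: correlate the observation with a known template and threshold the output. A minimal matched-filter sketch; the threshold and names are illustrative:

import numpy as np

def matched_filter(observation, template, threshold):
    # Matched filtering = convolution with the time-reversed,
    # energy-normalized template, i.e. sliding correlation.
    h = template[::-1] / np.linalg.norm(template)
    out = np.convolve(observation, h, mode='valid')
    return np.flatnonzero(out > threshold)     # candidate transient onsets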