




CORNN: Convex optimization of recurrent neural networks for rapid inference of neural dynamics

Neural Information Processing Systems

Advances in optical and electrophysiological recording technologies have made it possible to record the dynamics of thousands of neurons, opening up new possibilities for interpreting and controlling large neural populations in behaving animals. A promising way to extract computational principles from these large datasets is to train data-constrained recurrent neural networks (dRNNs). Performing this training in real time could open doors for research techniques and medical applications that model and control interventions at single-cell resolution and drive desired forms of animal behavior. However, existing training algorithms for dRNNs are inefficient and have limited scalability, making it a challenge to analyze large neural recordings even in offline scenarios. To address these issues, we introduce a training method termed Convex Optimization of Recurrent Neural Networks (CORNN). In studies of simulated recordings, CORNN attained training speeds ~100-fold faster than traditional optimization approaches while maintaining or enhancing modeling accuracy. We further validated CORNN on simulations with thousands of cells that performed simple computations, such as those of a 3-bit flip-flop or the execution of a timed response. Finally, we showed that CORNN can robustly reproduce network dynamics and underlying attractor structures despite mismatches between generator and inference models, severe subsampling of observed neurons, or mismatches in neural time-scales. Overall, by training dRNNs with millions of parameters in sub-minute processing times on a standard computer, CORNN constitutes a first step towards real-time network reproduction constrained by large-scale neural recordings and a powerful computational tool for advancing the understanding of neural computation.
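Since the abstract does not spell out how convexity is obtained, the following is a minimal NumPy sketch of the general idea behind convex dRNN fitting: assuming leaky-tanh rate dynamics, inverting the nonlinearity on the observed rates turns the weight fit into ridge regression, which is convex and solvable in closed form. The dynamics model, regularizer, and function names here are illustrative assumptions; the published CORNN objective and solver differ in detail.

```python
import numpy as np

def fit_drnn_convex(R, alpha=0.1, lam=1e-3):
    """Fit recurrent weights W so that observed rates R (T x N) satisfy
    r[t+1] ~= (1 - alpha) * r[t] + alpha * tanh(W @ r[t]).
    Inverting tanh turns the fit into ridge regression (convex).
    Leaky-tanh dynamics are an assumption for this sketch."""
    X = R[:-1]                              # regressors: rates at time t, (T-1, N)
    D = (R[1:] - (1 - alpha) * X) / alpha   # implied tanh outputs at each step
    D = np.clip(D, -1 + 1e-6, 1 - 1e-6)     # keep arctanh finite
    Y = np.arctanh(D)                       # linearized targets, (T-1, N)
    # One shared Gram matrix serves all N per-neuron regressions:
    G = X.T @ X + lam * np.eye(X.shape[1])
    W = np.linalg.solve(G, X.T @ Y).T       # (N, N); row i drives neuron i
    return W

# Example: recover weights from a simulated recording.
rng = np.random.default_rng(0)
N, T = 50, 2000
W_true = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
R = np.empty((T, N))
R[0] = rng.normal(scale=0.1, size=N)
for t in range(T - 1):
    R[t + 1] = 0.9 * R[t] + 0.1 * np.tanh(W_true @ R[t])
W_hat = fit_drnn_convex(R, alpha=0.1)
```

Because each row of W is an independent least-squares problem with a shared Gram matrix, the fit scales to thousands of neurons with one matrix factorization, which is consistent with the sub-minute training times reported above.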


Deep Multi-State Dynamic Recurrent Neural Networks Operating on Wavelet Based Neural Features for Robust Brain Machine Interfaces

Neural Information Processing Systems

We present a new deep multi-state Dynamic Recurrent Neural Network (DRNN) architecture for Brain Machine Interface (BMI) applications. Our DRNN is used to predict the Cartesian kinematics of computer cursor movements from open-loop neural data recorded from the posterior parietal cortex (PPC) of a human subject in a BMI system. We design the algorithm to achieve a reasonable trade-off between performance and robustness, and we constrain memory usage in favor of future hardware implementation. We feed the predictions of the network back to the input to improve prediction performance and robustness. We apply a scheduled sampling approach to the model in order to solve a statistical distribution mismatch between the ground truth and predictions. Additionally, we configure a small DRNN to operate with a short history of input, reducing the required buffering of input data and the number of memory accesses. This configuration lowers the expected power consumption in a neural network accelerator. Operating on wavelet-based neural features, we show that the average performance of the DRNN surpasses other state-of-the-art methods in the literature on both single- and multi-day data recorded over 43 days. Results show that the multi-state DRNN has the potential to model the nonlinear relationships between the neural data and kinematics for robust BMIs.
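As a concrete illustration of the feedback-plus-scheduled-sampling loop described in the abstract, here is a minimal PyTorch sketch assuming a GRU-based decoder. The layer sizes, sampling schedule, and names (FeedbackDecoder, p_teacher) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FeedbackDecoder(nn.Module):
    """Recurrent kinematics decoder that feeds its previous prediction
    back into the input (sizes here are illustrative assumptions)."""
    def __init__(self, n_features, n_kin=2, n_hidden=64):
        super().__init__()
        self.cell = nn.GRUCell(n_features + n_kin, n_hidden)
        self.readout = nn.Linear(n_hidden, n_kin)

    def forward(self, feats, kin_true, p_teacher):
        # feats: (T, B, n_features) neural features; kin_true: (T, B, n_kin)
        T, B, _ = feats.shape
        h = feats.new_zeros(B, self.cell.hidden_size)
        kin_prev = kin_true.new_zeros(B, kin_true.shape[-1])
        preds = []
        for t in range(T):
            h = self.cell(torch.cat([feats[t], kin_prev], dim=-1), h)
            kin_pred = self.readout(h)
            preds.append(kin_pred)
            # Scheduled sampling: with probability p_teacher, feed back the
            # ground-truth kinematics; otherwise feed back the prediction.
            use_truth = (torch.rand(B, 1, device=h.device) < p_teacher).float()
            kin_prev = use_truth * kin_true[t] + (1 - use_truth) * kin_pred.detach()
        return torch.stack(preds)

# Usage: 100 time steps, batch of 8, 96 neural features, 2-D cursor kinematics.
decoder = FeedbackDecoder(n_features=96)
out = decoder(torch.randn(100, 8, 96), torch.randn(100, 8, 2), p_teacher=0.75)
```

During training, p_teacher would typically be annealed from 1 toward 0 so the network gradually learns to consume its own (possibly imperfect) feedback, closing the train/test distribution gap that scheduled sampling is designed to address.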



Author Response (excerpt)

Neural Information Processing Systems

We are currently contemplating better experiments to gauge the level of abstraction of each layer. One potential approach is to sample optimal text sequences for individual node activations at different layers and check whether these exhibit higher levels of abstraction at deeper layers of the DRNN, but this is non-trivial due to the discrete nature of the input. Due to page restrictions (and time limitations), we will not be able to include this in the paper.



Author Response: Deep Multi-State Dynamic Recurrent Neural Networks for Robust BMIs (excerpt)

Neural Information Processing Systems

ASIC implementations could offer substantial power and mobility benefits. Wavelet features show superior results for all the decoders (e.g., Rebuttal Figure 1c). We will add the following to Sec. 2: integrating both state and neural information in this way leads to smoother predictions (Figure 4a) (Zhang 2017).


Reviews: Deep Multi-State Dynamic Recurrent Neural Networks Operating on Wavelet Based Neural Features for Robust Brain Machine Interfaces

Neural Information Processing Systems

In this paper, the authors present a multi-state Dynamic Recurrent Neural Network architecture and training framework for Brain Machine Interfaces (BMIs), incorporating scheduled sampling and testing diverse neural features as input. The authors thoroughly analyze this model against prior modeling frameworks on activity recorded from human posterior parietal cortex (PPC). This paper is of impressive quality, containing rigorous and methodical analyses that show clear and significant improvements from their model. The authors compare against twelve baseline models and investigate many aspects of the modeling framework, including single-day versus multi-day performance, generalization of single-day training to other days, the dependence on the amount of training data, the optimal preprocessing of neural feature inputs, and generalization of the models over time with different styles of retraining. The paper is very well written, with most choices and details clearly explained.