Enabling hyperparameter optimization in sequential autoencoders for spiking neural data

Neural Information Processing Systems

Continuing advances in neural interfaces have enabled simultaneous monitoring of spiking activity from hundreds to thousands of neurons. To interpret these large-scale data, several methods have been proposed to infer latent dynamic structure from high-dimensional datasets. One recent line of work uses recurrent neural networks in a sequential autoencoder (SAE) framework to uncover dynamics. SAEs are an appealing option for modeling nonlinear dynamical systems, and enable a precise link between neural activity and behavior on a single-trial basis. However, the very large parameter count and complexity of SAEs relative to other models has caused concern that SAEs may only perform well on very large training sets. We hypothesized that with a method to systematically optimize hyperparameters (HPs), SAEs might perform well even in cases of limited training data.
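
The key requirement for such systematic HP optimization is a validation metric that cannot be gamed by overfitting. Assuming such a metric exists (the reviews below discuss two ways to obtain one), the outer search loop itself can be as simple as random search. The sketch below is illustrative only; train_and_score, the HP names, and their ranges are hypothetical assumptions, not the authors' actual method or search space.

import random

# Hypothetical HP search space for an SAE; the names and ranges are
# illustrative assumptions, not the paper's actual search space.
SPACE = {
    "dropout":       lambda: random.uniform(0.0, 0.6),
    "l2_scale":      lambda: 10 ** random.uniform(-6, -2),
    "kl_scale":      lambda: 10 ** random.uniform(-6, -2),
    "learning_rate": lambda: 10 ** random.uniform(-4, -2),
}

def random_search(train_and_score, n_trials=50, seed=0):
    # train_and_score(hps) -> float is assumed to train an SAE with
    # the given HPs and return a cross-validated reconstruction loss.
    random.seed(seed)
    best_hps, best_loss = None, float("inf")
    for _ in range(n_trials):
        hps = {name: draw() for name, draw in SPACE.items()}
        loss = train_and_score(hps)
        if loss < best_loss:
            best_hps, best_loss = hps, loss
    return best_hps, best_loss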


Reviews: Enabling hyperparameter optimization in sequential autoencoders for spiking neural data

Neural Information Processing Systems

The authors provide novel approaches to computing a cross-validated reconstruction loss using one of two proposed solutions: sample validation and coordinated dropout, described above. The ideas are first described with the help of a synthetically generated dataset, followed by experimental results on the "Monkey J Maze" dataset. This paper would be much stronger if the ideas were demonstrated on multiple real datasets. In the current organization, the ideas are first demonstrated on synthetically generated data. It is not clear why the "Monkey J Maze" dataset is not used right from the beginning, instead of spending a significant portion of the paper describing the synthetic data generation process. Synthetic data is unconvincing, especially in an unsupervised learning setting.
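
A minimal sketch of the two masking schemes named in this review: sample validation withholds random elements of the spike matrix from training entirely and scores reconstruction on them, while coordinated dropout resamples complementary input/output masks each step so that no data element is ever used to predict itself. The function names and NumPy implementation below are assumptions for illustration, not the authors' code.

import numpy as np

def sample_validation_mask(shape, holdout_frac=0.2, rng=None):
    # Elements where the mask is 1 are withheld from training entirely
    # and used only to compute a cross-validated reconstruction loss.
    rng = rng or np.random.default_rng(0)
    return (rng.random(shape) < holdout_frac).astype(np.float32)

def coordinated_dropout_masks(shape, keep_prob=0.7, rng=None):
    # Complementary masks, resampled every training step: the encoder
    # sees only elements where in_mask is 1, and the reconstruction
    # loss is evaluated only where out_mask is 1, so the network cannot
    # learn to pass any observed spike count through to the output.
    rng = rng or np.random.default_rng(0)
    in_mask = (rng.random(shape) < keep_prob).astype(np.float32)
    return in_mask, 1.0 - in_mask

# Hypothetical usage on a (trials, time, neurons) spike tensor x,
# with sae and poisson_nll standing in for a model and loss function:
#   in_mask, out_mask = coordinated_dropout_masks(x.shape)
#   rates = sae(x * in_mask)
#   loss = (poisson_nll(rates, x) * out_mask).sum() / out_mask.sum()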


Reviews: Enabling hyperparameter optimization in sequential autoencoders for spiking neural data

Neural Information Processing Systems

The paper demonstrates that the sequential autoencoders becoming popular in neuroscience are prone to overfitting, and proposes solutions to address this overfitting. It is overall a good paper.

