BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos
A fundamental goal of systems neuroscience is to understand the relationship between neural activity and behavior. Behavior has traditionally been characterized by low-dimensional, task-related variables such as movement speed or response times. More recently, there has been a growing interest in automated analysis of high-dimensional video data collected during experiments. Here we introduce a probabilistic framework for the analysis of behavioral video and neural activity. This framework provides tools for compression, segmentation, generation, and decoding of behavioral videos.
Conformalized Generative Bayesian Imaging: An Uncertainty Quantification Framework for Computational Imaging
Ekmekci, Canberk, Cetin, Mujdat
Uncertainty quantification plays an important role in achieving trustworthy and reliable learning-based computational imaging. Recent advances in generative modeling and Bayesian neural networks have enabled the development of uncertainty-aware image reconstruction methods. Current generative model-based methods seek to quantify the inherent (aleatoric) uncertainty on the underlying image for given measurements by learning to sample from the posterior distribution of the underlying image. On the other hand, Bayesian neural network-based approaches aim to quantify the model (epistemic) uncertainty on the parameters of a deep neural network-based reconstruction method by approximating the posterior distribution of those parameters. Unfortunately, the need for an inversion method that can jointly quantify complex aleatoric and epistemic uncertainty patterns persists. In this paper, we present a scalable framework that can quantify both aleatoric and epistemic uncertainties. The proposed framework accepts an existing generative model-based posterior sampling method as input and introduces an epistemic uncertainty quantification capability through Bayesian neural networks with latent variables and deep ensembling. Furthermore, by leveraging the conformal prediction methodology, the proposed framework can easily be calibrated to ensure rigorous uncertainty quantification. We evaluated the proposed framework on magnetic resonance imaging, computed tomography, and image inpainting problems and showed that the epistemic and aleatoric uncertainty estimates it produces display the characteristic features of true epistemic and aleatoric uncertainties. Furthermore, our results demonstrate that applying conformal prediction on top of the proposed framework yields marginal coverage guarantees consistent with frequentist principles.
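The conformal calibration step described above can be illustrated with a minimal split-conformal sketch. This is a generic stand-in, not the authors' implementation: posterior samples from any sampler define a heuristic interval, and a finite-sample-corrected quantile of calibration nonconformity scores rescales it to guarantee marginal coverage. The synthetic data and the score function below are illustrative assumptions.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(scores)
    # ceil((n + 1) * (1 - alpha)) / n correction gives marginal coverage >= 1 - alpha
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def calibrated_interval(posterior_samples, q):
    """Interval around the posterior mean, widened by the conformal quantile q."""
    mean = posterior_samples.mean(axis=0)
    spread = posterior_samples.std(axis=0)
    return mean - q * spread, mean + q * spread

# Calibration: nonconformity score = |truth - mean| / std over held-out pairs.
rng = np.random.default_rng(0)
truth = rng.normal(size=500)
samples = truth + rng.normal(scale=0.3, size=(100, 500))  # crude stand-in for posterior draws
scores = np.abs(truth - samples.mean(axis=0)) / samples.std(axis=0)
q = conformal_quantile(scores, alpha=0.1)
lo, hi = calibrated_interval(samples, q)
coverage = np.mean((truth >= lo) & (truth <= hi))
```

In a real split-conformal setup the scores would come from a calibration set disjoint from the test set; the quantile correction is what turns a heuristic uncertainty estimate into one with a frequentist coverage guarantee.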
Reviews: BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos
This paper proposes a probabilistic framework that combines nonlinear (convolutional) autoencoders with ARHMMs to model videos from neuroscience experiments. The authors then use these representations to build Bayesian decoders that can produce full-resolution frames based only on the neural recordings. I find the motivation of this paper, building tools to study the relationship between neural activity and behavior from a less reductionist approach, extremely valuable. I have, however, the following concerns: This work is closely related to Wiltschko et al., the main difference being the use of nonlinear autoencoders instead of PCA. However, the difference between the linear and nonlinear AE reconstructions shown in the supplemental videos is not very noticeable. What are the units of MSE in Figure 2? How large, in pixels, is the improvement in decoding videos from neural data when using the CAE as opposed to PCA?
Reviews: BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos
I thank the authors for their submission. The paper presents a framework for analyzing behavioral videos by combining nonlinear autoencoders and ARHMMs. I also thank the reviewers for their detailed and thoughtful comments and suggestions. The reviewers agree that the paper is well motivated and well written; however, they also raise serious concerns about the quality and interpretability of the results. The reviewers make the following suggestions: 1. Please explain in the paper why using a nonlinear autoencoder is important or beneficial, given that it qualitatively does not seem to make much of a difference in terms of performance.
Revisiting Bayesian Autoencoders with MCMC
Chandra, Rohitash, Jain, Mahir, Maharana, Manavendra, Krivitsky, Pavel N.
Autoencoders are a family of unsupervised learning methods that use neural network architectures and learning algorithms to learn a lower-dimensional representation (encoding) of the data, which can then be used to reconstruct a representation close to the original input. They thus facilitate dimensionality reduction for prediction and classification [1, 2], and have been successfully applied to image classification [3, 4], face recognition [5, 6], geoscience and remote sensing [7], speech-based …
Bayes' theorem is used as the foundation for inference in Bayesian neural networks, and Markov chain Monte Carlo (MCMC) sampling methods [25] are used for constructing the posterior distribution. Variational inference [26] is another way to approximate the posterior distribution, which approximates an intractable posterior distribution by a tractable one. This makes it particularly suited to large data sets and models, and so it has been popular for autoencoders and neural networks [13, 27].
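As a toy illustration of MCMC-based inference over autoencoder weights, the sketch below runs random-walk Metropolis on a tiny linear autoencoder with a Gaussian likelihood and a Gaussian prior. This is a simplified stand-in, not the paper's sampler; the data, dimensions, and proposal scale are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny linear autoencoder: encode 4-D data to 2-D and decode back.
X = rng.normal(size=(50, 4))

def unpack(theta):
    W_enc = theta[:8].reshape(4, 2)
    W_dec = theta[8:].reshape(2, 4)
    return W_enc, W_dec

def log_posterior(theta, sigma=0.5, tau=1.0):
    W_enc, W_dec = unpack(theta)
    recon = X @ W_enc @ W_dec
    log_lik = -0.5 * np.sum((X - recon) ** 2) / sigma**2   # Gaussian likelihood
    log_prior = -0.5 * np.sum(theta**2) / tau**2           # Gaussian prior on weights
    return log_lik + log_prior

# Random-walk Metropolis over the 16 weights.
theta = rng.normal(scale=0.1, size=16)
initial_lp = log_posterior(theta)
current_lp = initial_lp
samples = []
for _ in range(2000):
    proposal = theta + rng.normal(scale=0.02, size=16)
    lp = log_posterior(proposal)
    if np.log(rng.uniform()) < lp - current_lp:            # Metropolis acceptance
        theta, current_lp = proposal, lp
    samples.append(theta.copy())
samples = np.array(samples)
```

The resulting chain approximates the posterior over weights, so reconstruction uncertainty can be read off by decoding with many sampled weight vectors rather than a single point estimate.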
BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos
Batty, Eleanor, Whiteway, Matthew, Saxena, Shreya, Biderman, Dan, Abe, Taiga, Musall, Simon, Gillis, Winthrop, Markowitz, Jeffrey, Churchland, Anne, Cunningham, John P., Datta, Sandeep R., Linderman, Scott, Paninski, Liam
Differential Bayesian Neural Nets
Look, Andreas, Kandemir, Melih
Neural Ordinary Differential Equations (N-ODEs) are a powerful building block for learning systems, which extend residual networks to a continuous-time dynamical system. We propose a Bayesian version of N-ODEs that enables well-calibrated quantification of prediction uncertainty, while maintaining the expressive power of their deterministic counterpart. We assign Bayesian Neural Nets (BNNs) to both the drift and the diffusion terms of a Stochastic Differential Equation (SDE) that models the flow of the activation map in time. We infer the posterior on the BNN weights using a straightforward adaptation of Stochastic Gradient Langevin Dynamics (SGLD). We illustrate significantly improved stability on two synthetic time series prediction tasks and report better model fit on UCI regression benchmarks with our method when compared to its non-Bayesian counterpart.
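The SGLD update used above, a gradient step plus injected Gaussian noise whose variance matches the step size, can be sketched on a toy model where the posterior is tractable. Everything below (the regression model, step size, and minibatch size) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Bayesian linear regression: y = w_true * x + noise, so SGLD draws
# can be sanity-checked against the known posterior over w.
w_true = 2.0
x = rng.normal(size=200)
y = w_true * x + rng.normal(scale=0.5, size=200)

def grad_log_posterior(w, xb, yb, n_total, sigma=0.5, tau=10.0):
    # Minibatch-rescaled gradient of log p(w | data) with prior N(0, tau^2).
    grad_lik = np.sum((yb - w * xb) * xb) / sigma**2
    return (n_total / len(xb)) * grad_lik - w / tau**2

# SGLD: half-step gradient ascent plus N(0, eps) injected noise.
w, eps, n = 0.0, 1e-4, len(x)
draws = []
for t in range(5000):
    idx = rng.integers(0, n, size=32)                     # minibatch
    g = grad_log_posterior(w, x[idx], y[idx], n)
    w += 0.5 * eps * g + rng.normal(scale=np.sqrt(eps))   # Langevin update
    if t >= 1000:
        draws.append(w)                                   # discard burn-in
posterior_mean = np.mean(draws)
```

With a constant step size the draws are only approximately posterior samples, but the estimated posterior mean should land close to the true coefficient; in the paper the same update is applied to the weights of the drift and diffusion BNNs.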
Surrogate-assisted parallel tempering for Bayesian neural learning
Chandra, Rohitash, Jain, Konark, Kapoor, Arpit
Parallel tempering addresses some of the drawbacks of canonical Markov chain Monte Carlo methods for Bayesian neural learning, with the ability to utilize high-performance computing. However, certain challenges remain given the large range of network parameters and big data. Surrogate-assisted optimization considers the estimation of an objective function for models that are computationally expensive or difficult to evaluate. We address the inefficiency of parallel tempering for large-scale problems by combining parallel computing features with surrogate-assisted estimation of the likelihood function, which describes the plausibility of a model parameter value given specific observed data. In this paper, we present surrogate-assisted parallel tempering for Bayesian neural learning, where the surrogates are used to estimate the likelihood. Estimation via the surrogate becomes useful as an alternative to evaluating computationally expensive models that feature a large number of parameters and large datasets. Our results demonstrate that the methodology significantly lowers the computational cost while maintaining quality in decision making using Bayesian neural learning. The method has applications in Bayesian inversion and uncertainty quantification for a broad range of numerical models.
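To make the parallel tempering side concrete, here is a minimal replica-exchange sketch on a bimodal toy density, showing the within-chain tempered Metropolis move and the standard adjacent-replica swap rule. The surrogate-assisted likelihood estimation of the paper is not modeled here; the target density, temperature ladder, and proposal scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: 1-D bimodal density with modes at +/-4. Hot replicas see a
# flattened version, letting samples migrate between modes via swaps.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 4) ** 2, -0.5 * (x + 4) ** 2)

betas = np.array([1.0, 0.5, 0.25, 0.1])        # inverse temperatures
xs = np.zeros(len(betas))                       # one replica per temperature
lps = log_target(xs)
cold = []
for step in range(20000):
    # Within-chain Metropolis move for each replica at its own temperature.
    for i, beta in enumerate(betas):
        prop = xs[i] + rng.normal(scale=1.5)
        lp = log_target(prop)
        if np.log(rng.uniform()) < beta * (lp - lps[i]):
            xs[i], lps[i] = prop, lp
    # Swap a random adjacent pair: accept with prob
    # min(1, exp((beta_i - beta_j) * (lp_j - lp_i))).
    i = rng.integers(0, len(betas) - 1)
    if np.log(rng.uniform()) < (betas[i] - betas[i + 1]) * (lps[i + 1] - lps[i]):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
        lps[i], lps[i + 1] = lps[i + 1], lps[i]
    cold.append(xs[0])
cold = np.array(cold[5000:])                    # keep post-burn-in cold chain
```

In the surrogate-assisted variant, the expensive `log_target` evaluation for a neural network's likelihood would be replaced, for a fraction of proposals, by a cheap trained surrogate; the tempering and swap machinery stays the same.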