Learning Disentangled Representations and Group Structure of Dynamical Environments

Neural Information Processing Systems

In the natural sciences, physics has found great success by describing the universe in terms of symmetry-preserving transformations. Inspired by this formalism, we propose a framework, built upon the theory of group representation, for learning representations of a dynamical environment structured around the transformations that generate its evolution. Experimentally, we learn the structure of explicitly symmetric environments without supervision, using only observational data generated by sequential interactions.
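
A minimal sketch of what such a transformation-structured representation might look like in code, assuming each discrete action is mapped to a learned block-diagonal rotation matrix acting on the latent state; all names, shapes, and the toy training loop are illustrative assumptions, not the authors' implementation:

```python
import torch

# Assumed setup: each of n_actions actions is represented as a learned
# element of SO(2)^k -- a block-diagonal matrix of k independent 2-D
# rotations acting on a 2k-dimensional latent state.
n_actions, k = 4, 3                                      # latent dim 2k = 6
angles = torch.zeros(n_actions, k, requires_grad=True)   # learned parameters
true_angles = torch.rand(n_actions, k)                   # toy ground truth

def rep(table, a):
    """Block-diagonal rotation matrix representing action a."""
    blocks = []
    for th in table[a]:
        c, s = torch.cos(th), torch.sin(th)
        blocks.append(torch.stack([torch.stack([c, -s]),
                                   torch.stack([s,  c])]))
    return torch.block_diag(*blocks)

# Fit the representation from observed transitions (z_t, a, z_{t+1}).
opt = torch.optim.Adam([angles], lr=0.1)
for _ in range(200):
    a, z_t = torch.randint(n_actions, ()), torch.randn(2 * k)
    z_next = rep(true_angles, a) @ z_t   # stand-in for environment data
    loss = ((rep(angles, a) @ z_t - z_next) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because every learned representation is a rotation, composing actions in latent space is exact matrix multiplication, which is what makes the group structure useful for prediction.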


A neurally plausible model for online recognition and postdiction in a dynamical environment

Wenliang, Li Kevin, Sahani, Maneesh

Neural Information Processing Systems

Humans and other animals are frequently near-optimal in their ability to integrate noisy and ambiguous sensory data to form robust percepts---which are informed both by sensory evidence and by prior expectations about the structure of the environment. It is suggested that the brain does so using the statistical structure provided by an internal model of how latent, causal factors produce the observed patterns. In dynamic environments, such integration often takes the form of \emph{postdiction}, wherein later sensory evidence affects inferences about earlier percepts. As the brain must operate in current time, without the luxury of acausal propagation of information, how does such postdictive inference come about? Here, we propose a general framework for neural probabilistic inference in dynamic models based on the distributed distributional code (DDC) representation of uncertainty, naturally extending the underlying encoding to incorporate implicit probabilistic beliefs about both present and past. We show that, as in other uses of the DDC, an inferential model can be learnt efficiently using samples from an internal model of the world. Applied to stimuli used in the context of psychophysics experiments, the framework provides an online and plausible mechanism for inference, including postdictive effects.
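
As a rough numerical illustration of the DDC idea described in this abstract, assuming a belief over a scalar latent is encoded by the expectations of fixed radial-basis functions and decoded by a linear readout fitted on internal-model samples; the basis, widths, and variable names below are assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(z, centers, width=0.5):
    """Radial-basis features; their mean activity encodes the belief."""
    return np.exp(-(z[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

centers = np.linspace(-3, 3, 20)

# Samples from a stand-in for the internal model's posterior over z.
z_samples = rng.normal(1.0, 0.4, size=5000)
mu = psi(z_samples, centers).mean(axis=0)   # the DDC: E[psi(z)]

# Any expectation E[f(z)] is then read out linearly from the DDC:
# fit weights alpha so that f(z) ~= alpha @ psi(z) on model samples.
f = lambda z: z ** 2
Phi = psi(z_samples, centers)
alpha, *_ = np.linalg.lstsq(Phi, f(z_samples), rcond=None)
print(alpha @ mu, f(z_samples).mean())      # both approximate E[z^2]
```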



Review for NeurIPS paper: Learning Disentangled Representations and Group Structure of Dynamical Environments

Neural Information Processing Systems

While the rebuttal did address some of my concerns, I cannot raise my score further. In particular, I would still like to see an experimental analysis added on a standard benchmark where the proposed method **fails** (perhaps this is the case for the promised experiments on 3D cars or 3D shapes, but this is not clear from the text). This would make it easier for others to follow up on this work. I also recognize the scalability issues of the proposed method pointed out by R2 and R5, which I had initially not considered. I agree that this issue should be discussed in the paper, and ideally the computational complexity should be empirically analyzed. However, considering that the field of disentanglement is still rather nascent and mostly concerned with synthetic datasets and overengineered methods, I don't think this is reason for rejection or a lower score.


Reviews: A neurally plausible model for online recognition and postdiction in a dynamical environment

Neural Information Processing Systems

This paper addresses the problem of biologically plausible perceptual inference in dynamical environments. In particular, it considers situations in which informative sensory evidence arrives with a delay relative to the underlying state, and thus requires 'postdiction' to update inferences about past states given new sensory observations. The authors extend a previously published method for inference in graphical models (DDC-HM) to temporally extended encoding functions and test their model in three situations where postdiction is relevant. Overall, I find this work valuable and interesting. It could, however, be more clearly presented and provide some relevant comparisons with alternative models.



Online Optimization and Learning in Uncertain Dynamical Environments with Performance Guarantees

Li, Dan, Fooladivanda, Dariush, Martinez, Sonia

arXiv.org Machine Learning

We propose a new framework for solving online optimization and learning problems in unknown and uncertain dynamical environments. This framework enables us to learn the uncertain dynamical environment while simultaneously making online decisions in a quantifiably robust manner. The main technical approach relies on the theory of distributionally robust optimization, leveraging adaptive probabilistic ambiguity sets. However, as defined, the ambiguity set usually leads to intractable online problems, so the first part of our work derives reformulations as online convex problems for two subclasses of objective functions. To solve the resulting problems in the proposed framework, we further introduce an online version of the Nesterov accelerated-gradient algorithm, and we establish conditions under which the proposed solution achieves a probabilistic regret bound. Two applications illustrate the applicability of the proposed framework.
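
A hedged sketch of the online accelerated-gradient idea as read from this abstract: at each round, fresh samples of the uncertain parameter define a robust surrogate cost, and a Nesterov-style step is taken. The surrogate below (sample average plus a simple quadratic penalty) and all constants are illustrative stand-ins for the paper's ambiguity-set reformulation:

```python
import numpy as np

rng = np.random.default_rng(1)
x = y = np.zeros(2)           # decision and lookahead iterates
lam, t = 0.1, 1.0             # robustness penalty weight, momentum counter

for step in range(100):
    xi = rng.normal([1.0, -2.0], 0.3, size=(32, 2))   # fresh environment samples
    grad = lambda v: (v - xi).mean(axis=0) + lam * v  # grad of robust surrogate
    t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
    x_next = y - 0.1 * grad(y)                        # gradient step at lookahead
    y = x_next + ((t - 1) / t_next) * (x_next - x)    # Nesterov extrapolation
    x, t = x_next, t_next

print(x)   # approaches the penalty-shrunk mean of the uncertain parameter
```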



Learning Group Structure and Disentangled Representations of Dynamical Environments

Quessard, Robin, Barrett, Thomas D., Clements, William R.

arXiv.org Machine Learning

Discovering the underlying structure of a dynamical environment involves learning representations that are interpretable and disentangled, which is a challenging task. In physics, interpretable representations of our universe and its underlying dynamics are formulated in terms of representations of groups of symmetry transformations. We propose a physics-inspired method, built upon the theory of group representation, that learns a representation of an environment structured around the transformations that generate its evolution. Experimentally, we learn the structure of explicitly symmetric environments without supervision while ensuring the interpretability of the representations. We show that the learned representations allow for accurate long-horizon predictions and further demonstrate a correlation between the quality of predictions and disentanglement in the latent space.
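
One concrete consequence of the group structure claimed here is that long-horizon prediction reduces to composing the per-action matrices in latent space. A toy sketch with illustrative values, not the authors' code:

```python
import numpy as np

def rot(th):
    """2-D rotation; stand-in for a learned action representation."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

reps = {0: rot(0.3), 1: rot(-0.3)}     # toy learned representations
z0 = np.array([1.0, 0.0])              # initial latent state
actions = [0, 0, 1, 0]

# H-step prediction is the matrix product rep(a_H) ... rep(a_1) @ z0.
z_H = np.linalg.multi_dot([reps[a] for a in reversed(actions)]) @ z0
print(z_H)
```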