The Bottleneck Simulator: A Model-Based Deep Reinforcement Learning Approach

Journal of Artificial Intelligence Research

Deep reinforcement learning has recently shown many impressive successes. However, one major obstacle to applying such methods to real-world problems is their lack of data efficiency. To address this, we propose the Bottleneck Simulator: a model-based reinforcement learning method which combines a learned, factorized transition model of the environment with rollout simulations to learn an effective policy from few examples. The learned transition model employs an abstract, discrete (bottleneck) state, which increases sample efficiency by reducing the number of model parameters and by exploiting structural properties of the environment. We provide a mathematical analysis of the Bottleneck Simulator in terms of fixed points of the learned policy, which reveals how performance is affected by four distinct sources of error: the structure of the abstract space, the estimation variance of the transition model, its estimation bias, and the bias of the transition model class. Finally, we evaluate the Bottleneck Simulator on two natural language processing tasks: a text adventure game and a real-world, complex dialogue response selection task.
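The core recipe described above — abstract the state, estimate a compact transition model, then train a policy purely on rollouts simulated from that model — can be sketched in a toy setting. Everything below (the chain environment, the abstraction `phi`, the tabular Q-learner) is an illustrative assumption, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain environment with 6 concrete states and 2 actions (illustrative;
# not the paper's dialogue / text-game environments).
N_S, N_A = 6, 2
GOAL = N_S - 1

def env_step(s, a):
    move = 1 if a == 1 else -1
    if rng.random() < 0.2:                  # slip: action occasionally reversed
        move = -move
    s2 = int(np.clip(s + move, 0, N_S - 1))
    return s2, (1.0 if s2 == GOAL else 0.0)

# Hand-crafted abstraction: collapse 6 concrete states into 3 abstract
# "bottleneck" states, shrinking the transition model to be estimated.
def phi(s):
    return s // 2

N_Z = 3

# 1) Estimate P(z' | z, a) and R(z, a) from a small batch of real transitions.
counts = np.ones((N_Z, N_A, N_Z))           # add-one smoothing
rew_sum = np.zeros((N_Z, N_A))
visits = np.ones((N_Z, N_A))
for _ in range(300):
    s, a = int(rng.integers(N_S)), int(rng.integers(N_A))
    s2, r = env_step(s, a)
    counts[phi(s), a, phi(s2)] += 1
    rew_sum[phi(s), a] += r
    visits[phi(s), a] += 1
P = counts / counts.sum(axis=2, keepdims=True)
R = rew_sum / visits

# 2) Learn a policy entirely from rollouts simulated in the learned model
#    (tabular Q-learning stands in for the paper's policy learner).
Q = np.zeros((N_Z, N_A))
gamma, alpha, eps = 0.95, 0.1, 0.2
for _ in range(2000):
    z = int(rng.integers(N_Z))
    for _ in range(10):
        a = int(rng.integers(N_A)) if rng.random() < eps else int(Q[z].argmax())
        z2 = int(rng.choice(N_Z, p=P[z, a]))
        Q[z, a] += alpha * (R[z, a] + gamma * Q[z2].max() - Q[z, a])
        z = z2
```

The abstraction is where the sample-efficiency argument lives: the tabular model over abstract states has N_Z²·N_A transition parameters instead of N_S²·N_A, so the same 300 real transitions estimate it far more tightly.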


Deep Probabilistic Surrogate Networks for Universal Simulator Approximation

arXiv.org Machine Learning

We present a framework for automatically structuring and training fast, approximate, deep neural surrogates of existing stochastic simulators. Unlike traditional approaches to surrogate modeling, our surrogates retain the interpretable structure of the reference simulators. The particular way we achieve this allows us to replace the reference simulator with the surrogate when performing amortized inference in the probabilistic programming sense. The fidelity and speed of our surrogates allow not only for faster "forward" stochastic simulation but also for accurate and substantially faster inference. We support these claims with experiments involving a commercial composite-materials curing simulator. Our surrogate modeling technique makes inference an order of magnitude faster, opening up the possibility of simulator-based, non-invasive, just-in-time parts quality testing; in this case, inferring safety-critical latent internal temperature profiles of composite materials undergoing curing from surface temperature measurements.
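The surrogate idea itself can be illustrated in miniature. Everything below is an assumption for the sketch — a one-parameter stand-in simulator, a polynomial least-squares surrogate, and grid-search inversion — whereas the paper trains structured deep surrogates and performs amortized probabilistic-programming inference:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "reference simulator": a stochastic map from a latent parameter
# theta to an observable (imagine each call being expensive).
def reference_simulator(theta):
    return np.sin(theta) + 0.5 * theta + rng.normal(0.0, 0.05)

# 1) Train a cheap surrogate on a batch of simulator runs. Here a degree-5
#    polynomial fit; the paper instead uses deep networks that preserve the
#    simulator's internal structure.
thetas = np.linspace(0.0, 2.0, 200)
ys = np.array([reference_simulator(t) for t in thetas])
surrogate = np.poly1d(np.polyfit(thetas, ys, deg=5))

# 2) Inference: given an observed output, recover the latent theta by
#    searching over the cheap surrogate instead of re-running the simulator.
theta_true = 1.2
y_obs = np.sin(theta_true) + 0.5 * theta_true       # noise-free observation
grid = np.linspace(0.0, 2.0, 2001)
theta_hat = grid[np.argmin((surrogate(grid) - y_obs) ** 2)]
```

The speedup claim in the abstract corresponds to step 2: once the surrogate is trained, every inference query touches only the fast approximation, never the expensive reference simulator.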


Practice your swing with this golf simulator on sale

Mashable

Golfers around the world all have one thing in common: They despise rain. The weather can really put a damper on a round of golf, especially when you've waited all winter to get back out on the course. That's where the PhiGolf home golf simulator can save the day. Successfully funded on Indiegogo with over $200,000, the PhiGolf simulator and swing stick allow you to play golf all year round, no matter the weather, from practically anywhere. It's kind of like a video game, except the swing stick trainer is your controller.


On the Difference Between the Information Bottleneck and the Deep Information Bottleneck

arXiv.org Machine Learning

Combining the Information Bottleneck model with deep learning, by replacing mutual information terms with deep neural networks, has proved successful in areas ranging from generative modelling to interpreting deep neural networks. In this paper, we revisit the Deep Variational Information Bottleneck (DVIB) and the assumptions needed for its derivation. The two assumed properties of the data $X$, $Y$ and their latent representation $T$ take the form of two Markov chains, $T-X-Y$ and $X-T-Y$. Requiring both to hold during optimisation can be limiting for the set of potential joint distributions $P(X,Y,T)$. We therefore show how to circumvent this limitation by optimising a lower bound for $I(T;Y)$ for which only the latter Markov chain has to be satisfied. The actual mutual information consists of the lower bound, which is optimised in DVIB and cognate models in practice, and of two terms measuring how much the former requirement, $T-X-Y$, is violated. Finally, we propose to interpret the family of information bottleneck models as directed graphical models, and show that in this framework the original and deep information bottlenecks are special cases of a fundamental IB model.
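For context, the variational lower bound on $I(T;Y)$ that DVIB-style models optimise can be stated in its standard textbook form (with $q(y|t)$ a variational decoder; the paper's exact decomposition of the Markov-chain violation terms is not reproduced here):

$$
I(T;Y) \;=\; H(Y) - H(Y|T) \;\geq\; H(Y) \,+\, \mathbb{E}_{p(x,y)}\,\mathbb{E}_{p(t|x)}\big[\log q(y\,|\,t)\big],
$$

which holds for any decoder $q$ because $H(Y|T) \le -\mathbb{E}\big[\log q(y|t)\big]$; the gap is $\mathbb{E}_{p(t)}\,\mathrm{KL}\big(p(y|t)\,\|\,q(y|t)\big) \ge 0$. The abstract's point is that optimising this bound requires only the $X-T-Y$ structure implicit in the encoder–decoder pair, not the $T-X-Y$ chain as well.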


r/MachineLearning - [D] Is Reinforcement Learning Practical?

#artificialintelligence

Is reinforcement learning practical at this point for industry work? The most prominent examples we see are from DeepMind (AlphaStar, AlphaGo), but the team consists of world-class researchers (over 40 of them) who also worked closely with expert StarCraft 2 players and had a ton of computing resources. As someone who hasn't had much experience in RL, I see potential applications but am unsure of the amount of work involved or how practical it is. For example, one potential application for RL is to learn fraudulent behavior in an online retail system (e.g., Amazon, eBay) and proactively find methods of fraud before they happen.