No luck on Tinder? Scientists reveal why you should REMOVE your best qualities from your dating profile - and opt for a story instead

Daily Mail - Science & tech


Higgs Boson breakthrough was UK triumph, but British physics faces 'catastrophic' cuts

BBC News

When the Nobel Prize in Physics was announced in Stockholm in October 2013, the world was watching. Among the names read out was Prof Peter Higgs, the British theorist who, nearly half a century earlier, had predicted the existence of a particle believed to hold the cosmos together - the Higgs boson. The announcement, broadcast live from Sweden, was what many scientists had hoped for since a year earlier, when experiments at CERN had finally confirmed Higgs's theory by discovering the Higgs boson - hailed as one of the biggest discoveries in a generation. Higgs, who died in 2024, said in a statement at the time: "I hope this recognition of fundamental science will help raise awareness of the value of blue-sky research." Blue-sky research asks questions to understand the universe, rather than to design new products.


Preventing Gradient Explosions in Gated Recurrent Units

Neural Information Processing Systems

A gated recurrent unit (GRU) is a successful recurrent neural network architecture for time-series data. The GRU is typically trained using a gradient-based method, which is subject to the exploding gradient problem in which the gradient increases significantly. This problem is caused by an abrupt change in the dynamics of the GRU due to a small variation in the parameters. In this paper, we find a condition under which the dynamics of the GRU changes drastically and propose a learning method to address the exploding gradient problem. Our method constrains the dynamics of the GRU so that it does not drastically change. We evaluated our method in experiments on language modeling and polyphonic music modeling. Our experiments showed that our method can prevent the exploding gradient problem and improve modeling accuracy.
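
As context for the problem the paper addresses, here is a minimal sketch of the standard baseline mitigation, global gradient-norm clipping. This is the common workaround, not the dynamics-constraining method the paper proposes:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient values so their global L2 norm
    does not exceed max_norm (the common baseline fix for exploding
    gradients, not the paper's dynamics constraint)."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm <= max_norm:
        return list(grads)
    scale = max_norm / total_norm
    return [g * scale for g in grads]

# A blown-up gradient (global norm 500) is rescaled onto the threshold:
clipped = clip_by_global_norm([300.0, 400.0], max_norm=5.0)
```

Clipping caps the update magnitude after the explosion has happened; the paper's approach instead constrains the GRU's parameters so the abrupt change in dynamics that causes the explosion is avoided in the first place.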


QMDP-Net: Deep Learning for Planning under Partial Observability

Neural Information Processing Systems

This paper introduces the QMDP-net, a neural network architecture for planning under partial observability. The QMDP-net combines the strengths of model-free learning and model-based planning. It is a recurrent policy network, but it represents a policy for a parameterized set of tasks by connecting a model with a planning algorithm that solves the model, thus embedding the solution structure of planning in a network learning architecture. The QMDP-net is fully differentiable and allows for end-to-end training. We train a QMDP-net on different tasks so that it can generalize to new ones in the parameterized task set and "transfer" to other similar tasks beyond the set. In preliminary experiments, QMDP-net showed strong performance on several robotic tasks in simulation. Interestingly, while QMDP-net encodes the QMDP algorithm, it sometimes outperforms the QMDP algorithm in the experiments, as a result of end-to-end learning.
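
The QMDP algorithm that the network encodes can be sketched in a few lines: each action is scored by the expected value of the underlying MDP's Q-function under the current belief, which amounts to assuming state uncertainty vanishes after one step. The dict-based representation and state/action names below are illustrative:

```python
def qmdp_action(belief, q_values):
    """QMDP action selection: pick the action maximizing the
    expectation of the MDP Q-values under the belief.
    belief: dict state -> probability; q_values: dict (state, action) -> Q."""
    actions = {a for (_, a) in q_values}
    def score(a):
        return sum(belief[s] * q_values[(s, a)] for s in belief)
    return max(actions, key=score)

# Two states, two actions; the belief puts most mass on "s1",
# whose best action is "right".
belief = {"s0": 0.2, "s1": 0.8}
q = {("s0", "left"): 1.0, ("s0", "right"): 0.0,
     ("s1", "left"): 0.0, ("s1", "right"): 1.0}
```

In the QMDP-net this computation is embedded as differentiable layers, so the model and the Q-values are learned end-to-end rather than specified by hand.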


Train longer, generalize better: closing the generalization gap in large batch training of neural networks

Neural Information Processing Systems

Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. It has been observed that when using large batch sizes there is a persistent degradation in generalization performance - known as the generalization gap phenomenon. Identifying the origin of this gap and closing it has remained an open problem. Contributions: We examine the initial high learning rate training phase.
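
The regime adaptation the paper argues for - square-root learning-rate scaling with batch size, and budgeting a fixed number of weight updates rather than a fixed number of epochs (i.e. training longer) - can be sketched as a small helper. The parameter names and the simplified recipe here are illustrative:

```python
import math

def adapt_for_large_batch(base_lr, base_batch, base_updates, new_batch):
    """Sketch of the 'train longer' regime adaptation: when the batch
    grows by a factor r, scale the learning rate by sqrt(r) and keep
    the number of weight updates constant, which means proportionally
    more epochs over the data."""
    ratio = new_batch / base_batch
    return {
        "lr": base_lr * math.sqrt(ratio),
        "updates": base_updates,     # same number of SGD steps
        "epochs_factor": ratio,      # => r times more epochs
    }

# Going from batch 256 to batch 1024 (ratio 4):
regime = adapt_for_large_batch(base_lr=0.1, base_batch=256,
                               base_updates=10_000, new_batch=1024)
```

The paper additionally proposes "ghost batch normalization" (computing batch-norm statistics over small virtual batches), which is omitted from this sketch.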


Hindsight Experience Replay

Neural Information Processing Systems

Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards that are sparse and binary, and therefore avoids the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task. The video presenting our experiments is available at https://goo.gl/SMrQnI.
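
The core relabeling idea can be sketched in a few lines, following the "final" goal-selection strategy: replay a failed episode as if the goal had been whatever state was actually achieved at the end, so the episode yields non-zero reward signal. The dict-based transition format and field names below are illustrative:

```python
def hindsight_relabel(episode):
    """Relabel an episode's transitions with the goal actually achieved
    at its end ('final' strategy). A failed episode thus still produces
    at least one reward-1 transition for the replay buffer."""
    achieved_goal = episode[-1]["achieved"]
    relabeled = []
    for t in episode:
        relabeled.append({
            "state": t["state"],
            "action": t["action"],
            "goal": achieved_goal,  # substitute the achieved goal
            "reward": 1.0 if t["achieved"] == achieved_goal else 0.0,
        })
    return relabeled

# An episode that never reached its original goal but ended at "g2":
episode = [
    {"state": 0, "action": "push", "achieved": "g1"},
    {"state": 1, "action": "push", "achieved": "g2"},
]
relabeled = hindsight_relabel(episode)
```

Both the original and the relabeled transitions are stored, which is why the method requires an off-policy RL algorithm.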


A multi-agent reinforcement learning model of common-pool resource appropriation

Neural Information Processing Systems

Humanity faces numerous problems of common-pool resource appropriation. This class of multi-agent social dilemma includes the problems of ensuring sustainable use of fresh water, common fisheries, grazing pastures, and irrigation systems. Abstract models of common-pool resource appropriation based on non-cooperative game theory predict that self-interested agents will generally fail to find socially positive equilibria---a phenomenon called the tragedy of the commons. However, in reality, human societies are sometimes able to discover and implement stable cooperative solutions. Decades of behavioral game theory research have sought to uncover aspects of human behavior that make this possible.


Probabilistic Matrix Factorization for Automated Machine Learning

Neural Information Processing Systems

In order to achieve state-of-the-art performance, modern machine learning techniques require careful data pre-processing and hyperparameter tuning. Moreover, given the ever-increasing number of machine learning models being developed, model selection is becoming increasingly important. Automating the selection and tuning of machine learning pipelines, which can include different data pre-processing methods and machine learning models, has long been one of the goals of the machine learning community. In this paper, we propose to solve this meta-learning task by combining ideas from collaborative filtering and Bayesian optimization. Specifically, we use a probabilistic matrix factorization model to transfer knowledge across experiments performed on hundreds of different datasets, and use an acquisition function to guide the exploration of the space of possible ML pipelines. In our experiments, we show that our approach quickly identifies high-performing pipelines across a wide range of datasets, significantly outperforming the current state-of-the-art.
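
A minimal, non-probabilistic stand-in for the core idea: factorize the partially observed (dataset x pipeline) performance matrix by SGD and use the learned factors to predict unobserved entries. This sketch omits the paper's probabilistic treatment and acquisition function; the data and dimensions are illustrative:

```python
import random

def factorize(observed, n_rows, n_cols, rank=2, lr=0.05, epochs=2000, seed=0):
    """Fit row (dataset) and column (pipeline) factors to observed
    (row, col, score) triples by SGD on squared error -- a simplified,
    non-probabilistic stand-in for the paper's model."""
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.5) for _ in range(rank)] for _ in range(n_rows)]
    V = [[rng.gauss(0, 0.5) for _ in range(rank)] for _ in range(n_cols)]
    for _ in range(epochs):
        for i, j, y in observed:
            pred = sum(U[i][k] * V[j][k] for k in range(rank))
            err = y - pred
            for k in range(rank):
                u, v = U[i][k], V[j][k]
                U[i][k] += lr * err * v
                V[j][k] += lr * err * u
    return U, V

# Toy performance matrix: 3 datasets x 2 pipelines, 5 entries observed.
scores = [(0, 0, 1.0), (0, 1, 0.0), (1, 0, 1.0), (1, 1, 0.0), (2, 1, 1.0)]
U, V = factorize(scores, n_rows=3, n_cols=2)
```

The missing entry (dataset 2, pipeline 0) can then be predicted as the dot product of its row and column factors; in the paper, the posterior uncertainty of such predictions feeds the acquisition function.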


Post: Device Placement with Cross-Entropy Minimization and Proximal Policy Optimization

Neural Information Processing Systems

Training deep neural networks requires an exorbitant amount of computation resources, including a heterogeneous mix of GPU and CPU devices. It is critical to place operations in a neural network on these devices in an optimal way, so that the training process can complete within the shortest amount of time. The state-of-the-art uses reinforcement learning to learn placement skills by repeatedly performing Monte-Carlo experiments. However, due to its equal treatment of placement samples, we argue that there remains ample room for significant improvements. In this paper, we propose a new joint learning algorithm, called Post, that integrates cross-entropy minimization and proximal policy optimization to achieve theoretically guaranteed optimal efficiency. In order to incorporate the cross-entropy method as a sampling technique, we propose to represent placements using discrete probability distributions, which allows us to estimate an optimal probability mass by maximum likelihood estimation, a powerful tool with the best possible efficiency. We have implemented Post in the Google Cloud platform, and our extensive experiments with several popular neural network training benchmarks have demonstrated clear evidence of superior performance: with the same amount of learning time, it leads to placements that have training times up to 63.7% shorter than the state-of-the-art.
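
The cross-entropy ingredient can be sketched on a toy placement problem: keep one categorical distribution per operation, sample complete placements, and refit each distribution by (smoothed) maximum likelihood to the lowest-cost "elite" samples. The cost function, dimensions, and hyperparameters below are illustrative, and this omits Post's proximal policy optimization component:

```python
import random

def cem_placement(n_ops, n_devices, cost, iters=30, samples=100,
                  elite_frac=0.2, seed=0):
    """Cross-entropy method over discrete placements: sample from
    per-operation categoricals, keep the cheapest samples, and refit
    the categoricals to them by maximum likelihood."""
    rng = random.Random(seed)
    probs = [[1.0 / n_devices] * n_devices for _ in range(n_ops)]
    best = None
    for _ in range(iters):
        batch = []
        for _ in range(samples):
            p = tuple(rng.choices(range(n_devices), weights=probs[i])[0]
                      for i in range(n_ops))
            batch.append((cost(p), p))
        batch.sort(key=lambda x: x[0])
        if best is None or batch[0][0] < best[0]:
            best = batch[0]
        elites = [p for _, p in batch[: int(samples * elite_frac)]]
        for i in range(n_ops):
            counts = [0] * n_devices
            for p in elites:
                counts[p[i]] += 1
            # smoothed maximum-likelihood refit of the categorical
            probs[i] = [(c + 1e-3) / (len(elites) + n_devices * 1e-3)
                        for c in counts]
    return best[1]

def toy_cost(p):
    # ops 0 and 1 want to share a device; op 2 wants a different one
    return (p[0] != p[1]) + (p[2] == p[0])

best = cem_placement(n_ops=3, n_devices=2, cost=toy_cost)
```

Real placement costs would come from measured step times rather than a closed-form function, which is why sample efficiency of the search matters.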


End-to-End Differentiable Physics for Learning and Control

Neural Information Processing Systems

We present a differentiable physics engine that can be integrated as a module in deep neural networks for end-to-end learning. As a result, structured physics knowledge can be embedded into larger systems, allowing them, for example, to match observations by performing precise simulations, while achieving high sample efficiency. Specifically, in this paper we demonstrate how to perform backpropagation analytically through a physical simulator defined via a linear complementarity problem. Unlike traditional finite difference methods, such gradients can be computed analytically, which allows for greater flexibility of the engine. Through experiments in diverse domains, we highlight the system's ability to learn physical parameters from data, efficiently match and simulate observed visual behavior, and readily enable control via gradient-based planning methods. Code for the engine and experiments is included with the paper.
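
A toy illustration of differentiating through a simulator (much simpler than the paper's LCP-based engine, which handles contact): integrate a point mass forward while carrying the analytic sensitivity of position with respect to mass, then recover an unknown mass from an observed trajectory endpoint by gradient descent. All quantities here are illustrative:

```python
def simulate(m, f, steps=10, dt=0.1):
    """Semi-implicit Euler for a point mass under constant force f,
    propagating the analytic derivative of position w.r.t. mass m
    alongside the state (no finite differences needed)."""
    x, v = 0.0, 0.0
    dx_dm, dv_dm = 0.0, 0.0
    for _ in range(steps):
        v += (f / m) * dt
        dv_dm += (-f / (m * m)) * dt
        x += v * dt
        dx_dm += dv_dm * dt
    return x, dx_dm

# System identification: recover the mass that produced an observed
# final position by gradient descent through the simulator.
target_x, _ = simulate(m=2.0, f=1.0)
m = 1.0
for _ in range(2000):
    x, dx_dm = simulate(m, f=1.0)
    m -= 0.5 * 2.0 * (x - target_x) * dx_dm  # d/dm of (x - target)^2
```

Because the gradient is exact rather than a finite-difference estimate, the same machinery extends to many parameters at once, which is what makes gradient-based planning and parameter learning practical in the paper's setting.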