Berkeley Researchers Create Virtual Acrobat – Synced – Medium

#artificialintelligence

The Berkeley Artificial Intelligence Research (BAIR) Lab yesterday proposed DeepMimic, a Reinforcement Learning (RL) technique that enables simulated characters to reproduce highly dynamic physical movements learned from motion data recorded from human subjects. BAIR is a top-tier research lab focused on computer vision, machine learning, natural language processing, and robotics. RL methods have been shown to be applicable to a diverse suite of robotic tasks, particularly motion control problems. A typical RL setup includes a policy function, which maps each observed state to an action the agent can take, and a value function, which estimates the expected long-term reward of being in a given state. DeepMind's landmark Go program AlphaGo is grounded in the same technique.
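To make the policy/value distinction concrete, here is a minimal actor-critic-style sketch in plain Python on a hypothetical toy chain environment (my illustration, not BAIR's DeepMimic code): the policy maps each state to a distribution over actions, and the value function estimates the expected return from each state.

```python
import numpy as np

# Hypothetical 5-state chain MDP for illustration: action 0 moves left,
# action 1 moves right; reaching state 4 ends the episode with reward 1.
N_STATES, N_ACTIONS = 5, 2
rng = np.random.default_rng(0)

prefs = np.zeros((N_STATES, N_ACTIONS))  # policy: per-state action preferences
values = np.zeros(N_STATES)              # value function: expected return per state

def policy(state):
    """Softmax over action preferences -> a distribution over actions."""
    logits = prefs[state]
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(N_ACTIONS, p=p), p

def step(state, action):
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

alpha, gamma = 0.1, 0.95
for episode in range(500):
    s, done = 0, False
    while not done:
        a, p = policy(s)
        s2, r, done = step(s, a)
        # TD error: how much better or worse the outcome was than predicted.
        td = r + (0.0 if done else gamma * values[s2]) - values[s]
        values[s] += alpha * td            # improve the value estimate
        grad = -p
        grad[a] += 1.0                     # gradient of log softmax policy
        prefs[s] += alpha * td * grad      # nudge policy toward good actions
        s = s2

print("learned state values:", np.round(values, 2))
```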


Machine learning for Java developers

#artificialintelligence

Self-driving cars, face-detection software, and voice-controlled speakers are all built on machine learning technologies and frameworks--and these are just the first wave.


DeepMind's Newest AI Programs Itself to Make All the Right Decisions

#artificialintelligence

The three main deep learning approaches are supervised, unsupervised, and reinforcement learning. The first two consume huge amounts of data (like images or articles), look for patterns in that data, and use those patterns to inform actions (like identifying an image of a cat). To us, this is a pretty alien way to learn about the world. Not only would it be mind-numbingly dull to review millions of cat images, it'd take us years or more to do what these programs do in hours or days. And of course, we can learn what a cat looks like from just a few examples.
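As a concrete illustration of the supervised, pattern-from-data approach described above, here is a minimal sketch using scikit-learn's bundled digits dataset (my stand-in for the article's cat images): the model reviews hundreds of labeled examples to learn patterns it can apply to images it has never seen.

```python
# Minimal supervised-learning sketch: learn patterns from labeled
# examples, then apply them to unseen inputs.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)              # 1797 labeled 8x8 images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=2000)          # a simple linear classifier
clf.fit(X_train, y_train)                        # "review" the labeled examples
print("accuracy on unseen images:", clf.score(X_test, y_test))
```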


What is My Data Worth?

#artificialintelligence

People give massive amounts of their personal data to companies every day, and these data are used to generate tremendous business value. Some economists and politicians argue that people should be paid for their contributions--but the million-dollar question is: how much? This article discusses methods proposed in our recent AISTATS and VLDB papers that attempt to answer this question in the machine learning context. This is joint work with David Dao, Boxin Wang, Frances Ann Hubis, Nezihe Merve Gurel, Nick Hynes, Bo Li, Ce Zhang, Costas J. Spanos, and Dawn Song, as well as a collaborative effort between UC Berkeley, ETH Zurich, and UIUC. More information about the work in our group can be found here.
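A standard formulation in this line of work is the Shapley value: a data point's worth is its average marginal contribution to model performance over random orderings of the dataset. Here is a minimal Monte Carlo sketch; the `utility` function is a hypothetical stand-in for "train a model on this subset and measure its accuracy".

```python
import random

def shapley_estimates(points, utility, n_permutations=200, seed=0):
    """Monte Carlo estimate of each data point's Shapley value.

    points:  list of data-point identifiers.
    utility: maps a set of points to a performance score (hypothetical
             stand-in for training on the subset and measuring accuracy).
    """
    rng = random.Random(seed)
    totals = {p: 0.0 for p in points}
    for _ in range(n_permutations):
        order = points[:]
        rng.shuffle(order)
        subset, prev_score = set(), utility(set())
        for p in order:
            subset.add(p)
            score = utility(subset)
            totals[p] += score - prev_score    # marginal contribution of p
            prev_score = score
    return {p: t / n_permutations for p, t in totals.items()}

# Toy usage: utility is the fraction of "useful" points included, so
# points "a" and "b" each earn value 0.5 and "c" earns 0.
useful = {"a", "b"}
print(shapley_estimates(["a", "b", "c"], lambda s: len(s & useful) / 2))
```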


An Investigation of Quantum Deep Clustering Framework with Quantum Deep SVM & Convolutional Neural Network Feature Extractor

arXiv.org Artificial Intelligence

In this paper, we have proposed a deep quantum SVM formulation, and further demonstrated a quantum clustering framework based on the quantum deep SVM formulation, deep convolutional neural networks, and quantum K-Means clustering. We have investigated the run-time computational complexity of the proposed quantum deep clustering framework and compared it with a possible classical implementation. Our investigation shows that the proposed quantum version of the deep clustering formulation demonstrates a significant performance gain (exponential speedups in several components) over the possible classical implementation. The proposed theoretical quantum deep clustering framework is also a novel step toward hybrid quantum-classical machine learning formulations that aim to maximize performance.
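The quantum algorithms themselves are beyond a short sketch, but the classical pipeline they parallel (deep features in, cluster labels out; the SVM stage is omitted for brevity) can be outlined. In this sketch the feature extractor is a hypothetical placeholder for the paper's convolutional network, and classical K-Means stands in for quantum K-Means.

```python
import numpy as np
from sklearn.cluster import KMeans

def cnn_features(images):
    """Hypothetical stand-in for the paper's CNN feature extractor.

    Flattens and randomly projects images to 32-dim vectors so the
    sketch runs end to end; a real pipeline would use learned deep
    features instead.
    """
    flat = images.reshape(len(images), -1)
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((flat.shape[1], 32))
    return flat @ projection

# Toy batch of 100 fake 28x28 grayscale images.
images = np.random.default_rng(1).random((100, 28, 28))
features = cnn_features(images)

# Classical K-Means where the paper uses quantum K-Means.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```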