

5 Ways to Get Started with Reinforcement Learning – buZZrobot

@machinelearnbot

Machine learning algorithms, and neural networks in particular, are considered to be the cause of a new AI 'revolution'. In this article I will introduce the concept of reinforcement learning, but with limited technical detail, so that readers from a variety of backgrounds can understand the essence of the technique, its capabilities and its limitations. At the end of the article, I will provide links to a few resources for implementing RL. Broadly speaking, data-driven algorithms can be categorized into three types: supervised, unsupervised, and reinforcement learning. The first two are generally used to perform tasks such as image classification, detection, etc.
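
As an illustration of what the third category looks like in practice (this sketch is not from the article), here is a minimal tabular Q-learning loop in Python on a toy corridor environment; the environment, the +1 reward at the goal, and all parameter values are invented for the example.

    # Minimal sketch: an agent learns, by trial and error, to walk right
    # along a 5-state corridor; the +1 reward at the goal is its only feedback.
    import random

    N_STATES = 5          # states 0..4, goal is state 4
    ACTIONS = [-1, +1]    # step left or step right
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Toy environment dynamics: move, stay inside the corridor, reward at the goal."""
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        return nxt, reward, nxt == N_STATES - 1

    for episode in range(200):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            # Q-learning update toward reward plus discounted future value.
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = nxt

    # The greedy policy should now point right (+1) in every non-goal state.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

Unlike the supervised case, nothing in this loop ever tells the agent which action was correct; it only observes the rewards that follow its own choices.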


University of Huddersfield - University of the Year 2013

#artificialintelligence

Professor of Artificial Intelligence Wolfgang Faber comments on Google announcing that its AlphaZero artificial intelligence program has triumphed at chess against world-leading specialist software within hours of teaching itself the game from scratch, and considers where humans will start losing their jobs to intelligent computers and machines. "Google's 'superhuman' DeepMind AI claims chess crown" has been a headline on the BBC recently. What does it mean, and are our jobs, or even our lives, in danger? First, let us have a look at what caused this headline: a few days ago, a manuscript by a group around David Silver, Thomas Hubert, and Julian Schrittwieser of the London-based, Google (or rather Alphabet) owned DeepMind was uploaded to arXiv. It describes the AlphaZero system and reports very impressive results in learning to play three traditional board games (chess, shogi and Go) well. The setup allowed very successful (superhuman) strategies to be learned in only a few hours.


StreamSets updates ETL to the cloud data pipeline

ZDNet

The emergence of real-time streaming analytics use cases has shifted the center of gravity for managing real-time processes. Because they operate in the moment, streaming engines have by nature been confined to performing rudimentary operations such as monitoring, filtering, and light transformations of data. But as the need to perform more complex operations, such as using streaming data to retrain machine learning models, has grown, data pipelines have gained new prominence. Data pipelines pick up where streaming and message queuing systems leave off. They provide end-to-end management of data flows, from ingest through buffering, filtering, transformation and enrichment, to basic analytic functions that can be squeezed into real time.
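
To make the idea concrete, here is a hedged sketch of such a staged flow (illustrative Python, not the StreamSets API): records are ingested, filtered, lightly transformed and enriched before any analysis happens; all field names and values are invented.

    # Illustrative pipeline: ingest -> filter -> transform -> enrich,
    # expressed as chained Python generators so records flow one at a time.
    import json

    def ingest(lines):
        """Parse raw records as they arrive from a stream or message queue."""
        for line in lines:
            yield json.loads(line)

    def keep(records, kind):
        """Filter: drop records the downstream stages do not care about."""
        return (r for r in records if r.get("type") == kind)

    def transform(records):
        """Light transformation: normalise field names and types."""
        for r in records:
            yield {"user": r["user_id"], "value": float(r["amount"])}

    def enrich(records, lookup):
        """Enrichment: join each record against reference data."""
        for r in records:
            yield {**r, "region": lookup.get(r["user"], "unknown")}

    raw = ['{"type": "purchase", "user_id": "u1", "amount": "9.99"}',
           '{"type": "click", "user_id": "u2", "amount": "0"}']
    regions = {"u1": "EU"}

    for record in enrich(transform(keep(ingest(raw), "purchase")), regions):
        print(record)   # {'user': 'u1', 'value': 9.99, 'region': 'EU'}

A production pipeline adds buffering and error handling between these stages, but the end-to-end shape is the same.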


archivist: Boost the reproducibility of your research

@machinelearnbot

A few days ago, the Journal of Statistical Software published our article (written in collaboration with Marcin Kosiński), archivist: An R Package for Managing, Recording and Restoring Data Analysis Results. Would you like to retrieve the ggplot2 object behind the plot on the right? Just call the following line in your R console. Want to check the versions of the packages that were loaded when the plot was created? When people talk about reproducibility, they usually focus on tools like packrat, MRAN, docker or RSuite.


google/kubeflow

#artificialintelligence

The Kubeflow project is dedicated to making Machine Learning on Kubernetes easy, portable and scalable. Our goal is not to recreate other services, but to provide a straightforward way to spin up best-of-breed OSS solutions. This document details the steps needed to run the kubeflow project in any environment in which Kubernetes runs. Our goal is to help folks use ML more easily by letting Kubernetes do what it's great at. Because ML practitioners use so many different types of tools, it is a key goal that you can customize the stack to whatever your requirements are (within reason), and let the system take care of the "boring stuff." While we have started with a narrow set of technologies, we are working with many different projects to include additional tooling.


NIPS 2017 -- Day 3 Highlights – Insight Data

@machinelearnbot

Pieter started his invited talk by summarizing some of the key differences between supervised learning and Reinforcement Learning (RL). In essence, RL is mainly concerned with learning an effective policy that lets an agent interact with the world in a way that best achieves a goal: for example, learning a policy for how to walk. Recently, RL has seen many success stories, such as learning to play Atari games from raw pixel inputs, mastering the game of Go to a superhuman level, or effectively teaching simulated characters how to walk from scratch. However, one big gap between RL algorithms and humans remains: the time it takes to acquire new and effective policies.
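
For readers unfamiliar with the term, a policy is simply a mapping from what the agent observes to a distribution over actions. A hedged, self-contained sketch (not from the talk): a softmax policy over two actions is nudged by a REINFORCE-style update so that the action with the higher expected reward becomes more likely; the reward values and learning rate are invented.

    # Illustrative policy learning on a two-action toy problem.
    import numpy as np

    rng = np.random.default_rng(0)
    true_rewards = np.array([0.2, 0.8])   # hidden expected reward of each action
    theta = np.zeros(2)                   # policy parameters (action preferences)
    lr = 0.1

    def policy(theta):
        """Softmax turns action preferences into action probabilities."""
        e = np.exp(theta - theta.max())
        return e / e.sum()

    for _ in range(2000):
        probs = policy(theta)
        action = rng.choice(2, p=probs)
        reward = rng.normal(true_rewards[action], 0.1)   # noisy feedback from the world
        # REINFORCE-style update: raise the log-probability of the chosen
        # action in proportion to the reward it produced.
        grad_log = -probs
        grad_log[action] += 1.0
        theta += lr * reward * grad_log

    print(policy(theta))   # probability mass concentrates on the better action

The sample-efficiency gap mentioned above shows up even here: the toy policy needs thousands of trials for a decision a person would make after a handful.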


Reinforcement learning - Scholarpedia

#artificialintelligence

Reinforcement learning (RL) is learning by interacting with an environment. An RL agent learns from the consequences of its actions rather than from being explicitly taught, and it selects its actions on the basis of its past experiences (exploitation) and also by making new choices (exploration), which is essentially trial-and-error learning. The reinforcement signal that the RL agent receives is a numerical reward, which encodes the success of an action's outcome, and the agent seeks to learn to select actions that maximize the accumulated reward over time. In general we follow Marr's approach (Marr et al 1982, later re-introduced by Gurney et al 2004) by introducing different levels: the algorithmic, the mechanistic and the implementation level. The best-studied case is when RL can be formulated as a class of Markov Decision Problems (MDPs).
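
In the standard MDP formulation (stated here for completeness, not quoted from the excerpt), the accumulated reward the agent maximises is usually written as a discounted return:

    % Standard MDP notation: states S, actions A, rewards r, discount
    % factor \gamma, and a policy \pi(a \mid s).
    \begin{align}
      G_t &= \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1}
            && \text{discounted accumulated reward (return)} \\
      V^{\pi}(s) &= \mathbb{E}_{\pi}\left[ G_t \mid s_t = s \right]
            && \text{value of state } s \text{ under policy } \pi \\
      \pi^{*} &= \arg\max_{\pi} V^{\pi}(s) \ \text{ for all } s \in S
            && \text{the policy the agent seeks to learn}
    \end{align}

Exploitation and exploration are then two complementary ways of gathering the experience needed to estimate these values from interaction alone.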


A Reality Checklist for your Deep Learning Project – Intuition Machine – Medium

#artificialintelligence

Where is Deep Learning applicable? This is one of the more elusive things to understand about Deep Learning and related A.I. technologies. It is all too easy to fall into the trap of assuming that an "Artificial Intelligence" application can solve your problem. The usual coverage of this problem involves the question "do you have enough data?" Unfortunately, that question is too vague: to answer it, you at least have to understand your problem domain.


ConferenceCall 2017 04 05 - OntologPSMW

#artificialintelligence

Please use the chatroom above. Do not use the video teleconference chat, which is only for communicating with the moderator. When you use the Video Conference URL above, you will be given the choice of using the computer audio or using your own telephone. Some attendees had difficulties when using the computer audio choice. If this happens to you, please leave the meeting and reenter it using the telephone choice with access code 768423137.


apple/turicreate

#artificialintelligence

Turi Create simplifies the development of custom machine learning models. You don't have to be a machine learning expert to add recommendations, object detection, image classification, image similarity or activity classification to your app. It's easy to use the resulting model in an iOS application. For detailed instructions for different varieties of Linux, see LINUX_INSTALL.md. For common installation issues, see INSTALL_ISSUES.md. We recommend using virtualenv to use, install, or build Turi Create.
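
For a sense of what that workflow looks like, here is a hedged Python sketch based on the documented Turi Create API; the folder layout, labelling rule and file names are invented for the example.

    # Hedged sketch of a Turi Create image-classification workflow.
    import turicreate as tc

    # Load images from disk; each image's path is kept so a label can be derived from it.
    data = tc.image_analysis.load_images('photos/', with_path=True)
    data['label'] = data['path'].apply(lambda p: 'dog' if 'dog' in p else 'cat')

    # Train an image classifier; no deep-learning expertise is required.
    train, test = data.random_split(0.8)
    model = tc.image_classifier.create(train, target='label')

    # Evaluate, then export a Core ML model for use in an iOS application.
    print(model.evaluate(test)['accuracy'])
    model.export_coreml('PetClassifier.mlmodel')

The exported .mlmodel file can then be added to an Xcode project like any other Core ML model.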