Results


TensorFlow, MXNet, Caffe, H2O - Which Machine Learning Tool Is Best?

#artificialintelligence

Successful organizations tend to keep several options at hand, in part because no single machine learning tool fits every situation, data set, or scale. TensorFlow is a very popular technology specialized for deep learning; it was developed by Google researchers on the Google Brain team and released under an Apache 2.0 open source license in November 2015. H2O, an older open source machine learning technology, offers a broader foundation for machine learning that is not limited to deep learning, although deep learning is included. A January 2017 TechCrunch article by John Mannes reported that around 20% of Fortune 500 companies use H2O.


Reinforcement learning for complex goals, using TensorFlow

#artificialintelligence

To allow for greater flexibility, I will then describe how to build a class of reinforcement learning agents called "direct future prediction" (DFP), which can optimize for a variety of goals. Reinforcement learning involves agents interacting in some environment to maximize the rewards they obtain over time. Q-learning and other traditionally formulated reinforcement learning algorithms learn from a single reward signal, and as such can only pursue a single "goal" at a time. If we want our drone to learn to deliver packages, we simply provide a positive reward of 1 for successfully flying to a marked location and making a delivery.
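
To make that single-reward limitation concrete, here is a minimal tabular Q-learning sketch. The environment, state and action sizes, and hyperparameters are assumed toy values for illustration, not taken from the article; the point is that the update rule consumes one scalar reward, such as the +1 for a successful delivery, so there is only one goal the agent can optimize.

```python
import numpy as np

# Illustrative sketch of tabular Q-learning with a single scalar reward.
# All sizes and hyperparameters below are assumed toy values.
n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.99, 0.1

Q = np.zeros((n_states, n_actions))

def q_update(state, action, reward, next_state):
    """One Q-learning step: `reward` is a single number, e.g. +1 for a delivery."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

def choose_action(state):
    """Epsilon-greedy action selection over the learned Q-values."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

# Example step: the agent made a delivery from state 3 via action 2.
q_update(state=3, action=2, reward=1.0, next_state=7)
```

A DFP-style agent, by contrast, optimizes for a goal expressed over several measurements rather than collapsing everything into this one number.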


The truth behind Facebook AI inventing a new language

#artificialintelligence

In the particular case of the Facebook negotiation chat bot, you give it examples of negotiation dialogs with the whole situation properly annotated: what the initial state was, the preferences of the negotiator, what was said, what the result was, and so on. The program analyzes all these examples, extracts some features from each dialog, and assigns a number to each feature representing how often dialogs with that feature ended in positive results for the negotiator. In much the same way, AlphaGo started learning from real games played by real people. The original training data set was in English, but the extracted features were just words and phrases, and the bot was simply putting them together based on the numerical representation of how likely they were to help get the desired outcome.
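
As a rough sketch of that feature-scoring idea (hypothetical data and a simple word-level feature definition, not Facebook's actual code), the snippet below counts, for each feature, how often the dialogs containing it ended positively for the negotiator:

```python
from collections import defaultdict

# Hypothetical annotated examples: (utterances, dialog ended well for the negotiator)
dialogs = [
    (["i want the ball", "deal"], True),
    (["give me everything", "no deal"], False),
]

# feature -> [positive outcomes, total occurrences]
counts = defaultdict(lambda: [0, 0])
for utterances, positive in dialogs:
    features = {word for u in utterances for word in u.split()}
    for f in features:
        counts[f][1] += 1
        if positive:
            counts[f][0] += 1

# Numeric score per feature: the fraction of dialogs containing that feature
# that ended in a positive result for the negotiator.
scores = {f: pos / total for f, (pos, total) in counts.items()}
print(scores)
```

The actual system uses richer annotations and models, but the idea matches the description above: features extracted from human dialogs get numbers attached to them, and those numbers guide which phrases the bot strings together.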