

Deep Learning for Disaster Recovery – Insight Data

#artificialintelligence

With global climate change, devastating hurricanes are occurring with higher frequency. After a hurricane, roads are often flooded or washed out, making them treacherous for motorists. According to The Weather Channel, almost two of every three U.S. flash flood deaths from 1995–2010, excluding fatalities from Hurricane Katrina, occurred in vehicles. During my Insight A.I. Fellowship, I designed a system that detects flooded roads and created an interactive map app. Using state-of-the-art computer vision deep learning methods, the system automatically annotates flooded, washed-out, or otherwise severely damaged roads from satellite imagery.
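The article doesn't publish its model, but the core step it describes, flagging damaged-road tiles in satellite imagery with a fine-tuned convolutional network, can be sketched along these lines. The ResNet50 backbone, tile size, and binary labels here are assumptions, not the author's actual pipeline:

```python
# Hedged sketch: fine-tune a pretrained CNN to flag flooded-road tiles.
# ResNet50, the 224x224 tile size, and the binary label are assumptions.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained features frozen at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # flooded vs. passable
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# tiles: batch of satellite image tiles; labels: 1 = flooded/damaged road
# model.fit(tiles, labels, epochs=5)
```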


Getting started with a TensorFlow surgery classifier with TensorBoard data viz

#artificialintelligence

The most challenging part of deep learning is labeling, as you'll see in part one of this two-part series, "Learn how to classify images with TensorFlow." Proper training is critical to effective future classification, and for training to work, we need lots of accurately labeled data. In part one, I skipped over this challenge by downloading 3,000 prelabeled images. I then showed you how to use this labeled data to train your classifier with TensorFlow. In this part, we'll train with a new data set, and I'll introduce the TensorBoard suite of data visualization tools to make it easier to understand, debug, and optimize our TensorFlow code.
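The series works through TensorFlow's retraining scripts; as a quick orientation, here is a minimal sketch of getting training metrics into TensorBoard using the tf.keras TensorBoard callback (the MNIST model is a placeholder, not the article's classifier):

```python
# Minimal sketch: logging training metrics for TensorBoard with tf.keras.
# The MNIST model and data here are placeholders, not the article's own.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The callback writes scalars, histograms, and the graph to ./logs
tb = tf.keras.callbacks.TensorBoard(log_dir="./logs", histogram_freq=1)
model.fit(x_train, y_train, epochs=5, callbacks=[tb])
# Then inspect with: tensorboard --logdir ./logs
```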


Distributing control of deep learning training delivers 10x performance improvement

#artificialintelligence

My IBM Research AI team and I recently completed the first formal theoretical study of the convergence rate and communication complexity of a decentralized distributed approach in a deep learning training setting. The empirical evidence shows that in specific configurations, a decentralized approach can deliver a 10x performance boost over a centralized approach without additional complexity. A paper describing our work has been accepted for oral presentation at the NIPS 2017 conference, one of only 40 of the 3,240 submissions selected. Supervised machine learning generally consists of two phases: 1) training (building a model) and 2) inference (making predictions with the model). The training phase involves finding values for a model's parameters that minimize the error on a set of training examples while still letting the model generalize to new data.
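To make the centralized-vs.-decentralized distinction concrete, here is a toy NumPy simulation of decentralized parallel SGD: each worker takes a local gradient step, then averages parameters only with its ring neighbors, with no central parameter server. The least-squares setup and every constant are illustrative, not from the paper:

```python
# Toy simulation of decentralized parallel SGD on a ring topology:
# workers gossip with their two neighbors instead of synchronizing
# through a central parameter server. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim, lr = 8, 10, 0.05
A = rng.normal(size=(n_workers, 100, dim))            # each worker's data shard
x_true = rng.normal(size=dim)
b = A @ x_true                                         # local targets
x = [rng.normal(size=dim) for _ in range(n_workers)]   # local model copies

for _ in range(200):
    # 1) local gradient step on each worker's own shard
    grads = [Ai.T @ (Ai @ xi - bi) / len(bi) for Ai, xi, bi in zip(A, x, b)]
    x = [xi - lr * g for xi, g in zip(x, grads)]
    # 2) gossip: average with ring neighbors (no central coordinator)
    x = [(x[(i - 1) % n_workers] + x[i] + x[(i + 1) % n_workers]) / 3.0
         for i in range(n_workers)]

print("mean distance to solution:",
      np.mean([np.linalg.norm(xi - x_true) for xi in x]))
```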


Generalization Theory and Deep Nets, An introduction

@machinelearnbot

Deep learning holds many mysteries for theory, as we have discussed on this blog. Lately many ML theorists have become interested in the generalization mystery: why do trained deep nets perform well on previously unseen data, even though they have far more free parameters than the number of data points (the classic "overfitting" regime)? Zhang et al.'s paper "Understanding Deep Learning Requires Rethinking Generalization" played some role in bringing attention to this challenge. Their main experimental finding is that if you take a classic convnet architecture, say AlexNet, and train it on images with random labels, you can still achieve very high accuracy on the training data. Needless to say, the trained net is subsequently unable to predict the (random) labels of still-unseen images, which means it doesn't generalize.
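The randomization test is easy to reproduce in spirit. A hedged sketch, with an illustrative small convnet standing in for AlexNet and made-up hyperparameters:

```python
# Sketch of the randomization test from Zhang et al.: train a convnet on
# CIFAR-10 images whose labels have been randomly permuted. The
# architecture and hyperparameters are illustrative, not the paper's.
import numpy as np
import tensorflow as tf

(x, y), _ = tf.keras.datasets.cifar10.load_data()
x = x.astype("float32") / 255.0
y_random = np.random.permutation(y.flatten())   # destroy the label signal

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# With enough capacity and epochs, training accuracy climbs toward 100%
# even though the labels carry no information, so the net cannot generalize.
model.fit(x, y_random, epochs=50, batch_size=128)
```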


Introduction To Neural Networks

@machinelearnbot

This tutorial was originally posted on Ben's blog, GormAnalysis. Artificial Neural Networks are all the rage. One has to wonder if the catchy name played a role in the model's own marketing and adoption. I've seen business managers giddy to mention that their products use "Artificial Neural Networks" and "Deep Learning". Would they be so giddy to say their products use "Connected Circles Models" or "Fail and Be Penalized Machines"?


Fine-tuning a Convolutional Neural Network on your own data using Keras and TensorFlow

@machinelearnbot

Keras is winning over the world of deep learning. In this tutorial, we shall learn how to use Keras and transfer learning to produce state-of-the-art results on very small datasets. We shall provide complete training and prediction code. For this comprehensive guide, we shall be using the VGG network, but the techniques learned here can be used to fine-tune AlexNet, Inception, ResNet, or any other custom network architecture. In a previous tutorial, we used 2,000 images of dogs and cats to get a classification accuracy of 80%.
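The tutorial ships its own code; the core pattern it teaches (freeze a pretrained VGG16 base and train a small new classification head) looks roughly like this in modern Keras, with placeholder hyperparameters:

```python
# Minimal transfer-learning sketch in the spirit of the tutorial:
# reuse VGG16's ImageNet features and train a new 2-class head
# (dogs vs. cats). Hyperparameters and data loading are placeholders.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze the convolutional layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# train_ds would be a tf.data.Dataset of (image, label) pairs:
# model.fit(train_ds, epochs=10)
```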


Automatic Speaker Recognition using Transfer Learning

#artificialintelligence

Even with today's frequent technological breakthroughs in speech-interactive devices (think Siri and Alexa), few companies have tried their hand at enabling multi-user profiles. Google Home has been the most ambitious in this area, allowing up to six user profiles. The recent boom of this technology is what made the potential for this project very exciting to our team. We also wanted to engage in a project that is still a hot topic in deep-learning research, create interesting tools, learn more about neural network architectures, and make original contributions where possible. We sought to create a system able to quickly add user profiles and accurately identify their voices with very little training data: a few sentences at most!
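The write-up doesn't include code, but the enroll-and-identify loop it describes is typically built on speaker embeddings. A hedged sketch, where embed() is a stand-in for any pretrained speaker-embedding network, not the team's actual model:

```python
# Sketch: enroll a new user from a few utterances by averaging their
# embeddings, then identify a test utterance by cosine similarity.
# embed() is a placeholder for a pretrained speaker-embedding network.
import numpy as np

def embed(utterance: np.ndarray) -> np.ndarray:
    """Placeholder: a pretrained network mapping audio to a 256-d vector."""
    seed = abs(hash(utterance.tobytes())) % 2**32  # deterministic stand-in
    return np.random.default_rng(seed).normal(size=256)

def enroll(utterances) -> np.ndarray:
    """Average a few utterance embeddings into one unit-norm profile."""
    profile = np.stack([embed(u) for u in utterances]).mean(axis=0)
    return profile / np.linalg.norm(profile)

def identify(utterance, profiles) -> str:
    """Return the enrolled speaker with the highest cosine similarity."""
    v = embed(utterance)
    v = v / np.linalg.norm(v)
    return max(profiles, key=lambda name: float(profiles[name] @ v))

# Usage: profiles = {"alice": enroll(alice_clips), "bob": enroll(bob_clips)}
#        who = identify(test_clip, profiles)
```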


Scaling Deep Learning until systems reach human-level performance or better – NextBigFuture.com

#artificialintelligence

Baidu's results indicate that in many real-world contexts, simply scaling your training data set and models is likely to predictably improve the model's accuracy. This predictable behavior may help practitioners and researchers approach debugging and target better accuracy scaling. As one observer tweeted: "@BaiduResearch's thorough analysis on scaling properties of neural networks would cost around $2 million USD on AWS. Glad they did it and are exporting their knowledge." Deep learning (DL) creates impactful advances following a virtuous recipe: model architecture search, creating large training data sets, and scaling computation. It is widely believed that growing training sets and models should improve accuracy and result in better products.
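The paper's central empirical claim is that generalization error follows a power law in training-set size, roughly error ≈ a·m^b with b < 0. If you have learning-curve measurements of your own, a quick log-log fit checks how well they obey that form (the numbers below are made up for illustration):

```python
# Fit a power law error ~ a * m**b to learning-curve measurements via a
# least-squares fit in log-log space. All numbers here are made up.
import numpy as np

m = np.array([1e4, 3e4, 1e5, 3e5, 1e6])        # training-set sizes
err = np.array([0.30, 0.24, 0.19, 0.15, 0.12]) # measured validation error

b, log_a = np.polyfit(np.log(m), np.log(err), 1)
a = np.exp(log_a)
print(f"error ~ {a:.3f} * m^{b:.3f}")
# Extrapolate: predicted error at 10M training examples
print("predicted @ 1e7:", a * 1e7 ** b)
```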


Bringing Machine Learning (TensorFlow) to the enterprise with SAP HANA

@machinelearnbot

In this blog I aim to provide an introduction to TensorFlow and the SAP HANA integration, give you an understanding of the landscape, and outline the process for using External Machine Learning with HANA. There's plenty of hype around Machine Learning, Deep Learning, and of course Artificial Intelligence (AI), but understanding the benefits in an enterprise context can be more challenging. Being able to integrate the latest and greatest deep learning models into your enterprise via a high-performance in-memory platform could provide a competitive advantage, or at least help you keep up with the competition. From HANA 2.0 SP2 onwards, we have the ability to call TensorFlow (TF) models, or graphs as they are known. HANA now includes a method to call External Machine Learning (EML) models via a remote source.
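On the TensorFlow side, the EML remote source calls models hosted by a serving process (typically TensorFlow Serving), so the prerequisite is a model exported in SavedModel format. A minimal sketch with a placeholder model and paths; the HANA-side configuration is described in the blog itself:

```python
# Export a trained model as a SavedModel so a serving process (e.g.
# TensorFlow Serving) can host it. Model, paths, and names are placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# ... train the model here ...

# TensorFlow Serving picks up numbered version directories under a base path
tf.saved_model.save(model, "/models/my_model/1")
# Then serve it, e.g.:
#   tensorflow_model_server --model_name=my_model \
#       --model_base_path=/models/my_model --port=8500
```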


Comparison of Deepnet & Neuralnet

@machinelearnbot

In this article, I compare two R packages available for modeling data with neural networks: neuralnet and deepnet. Through the comparison I highlight various challenges in finding good hyperparameter values, and I show that some of the needed hyperparameters differ between the two packages, even with the same underlying algorithmic approach. Both packages can be obtained from the R CRAN repository (see links at the end).