Results


TensorFlow* Optimizations on Modern Intel Architecture

@machinelearnbot

TensorFlow* is a leading deep learning and machine learning framework, which makes it important for Intel and Google to ensure that it is able to extract maximum performance from Intel's hardware offerings. This paper introduces the Artificial Intelligence (AI) community to TensorFlow optimizations on Intel Xeon and Intel Xeon Phi processor-based platforms. These optimizations are the fruit of a close collaboration between Intel and Google engineers announced last year by Intel's Diane Bryant and Google's Diane Greene at the first Intel AI Day. We describe the various performance challenges encountered during this optimization exercise and the solutions adopted. We also report performance improvements on a sample of common neural network models.
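Much of the tuning this kind of work involves comes down to threading settings. A minimal sketch, assuming the MKL-enabled TensorFlow build; the specific values below are illustrative, not recommendations, and the right ones are workload-dependent:

```python
import os

# OpenMP/MKL threading knobs read by the MKL-enabled TensorFlow build.
os.environ["KMP_BLOCKTIME"] = "0"        # ms a thread spins after finishing work before sleeping
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads to cores
os.environ["OMP_NUM_THREADS"] = "16"     # number of physical cores to use

# Intra-/inter-op parallelism is passed to the TensorFlow session configuration,
# e.g. tf.ConfigProto(intra_op_parallelism_threads=16, inter_op_parallelism_threads=2).
# Represented here as a plain dict so the sketch runs without TensorFlow installed.
config = {"intra_op_parallelism_threads": 16, "inter_op_parallelism_threads": 2}
```

KMP_BLOCKTIME and KMP_AFFINITY are standard OpenMP/Intel runtime variables; the session-level thread counts control parallelism within a single op versus across independent ops.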


Search for the fastest Deep Learning Framework supported by Keras

@machinelearnbot

Currently the official Keras release already supports Google's TensorFlow and Microsoft's CNTK deep learning libraries, besides other popular libraries like Theano. Keras also enables developers to quickly test relative performance across multiple supported deep learning frameworks. MXNet, however, required an older Keras release, because MXNet doesn't yet support newer Keras functions and scripts would have needed significant changes before running on MXNet. In a standard deep neural network test using the MNIST dataset, CNTK, TensorFlow and Theano achieve similar scores (2.5–2.7 s/epoch), but MXNet blows them out of the water with a 1.4 s/epoch timing.
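Switching Keras between backends for such a comparison is a one-line change via the standard KERAS_BACKEND mechanism. A minimal sketch; the actual keras import is commented out so the snippet stands alone:

```python
import os

# Keras picks its backend from the KERAS_BACKEND environment variable
# (or from ~/.keras/keras.json); it must be set *before* importing keras.
os.environ["KERAS_BACKEND"] = "tensorflow"   # or "cntk", "theano"

# import keras   # would now report the selected backend (requires keras installed)
backend = os.environ["KERAS_BACKEND"]
```

The same training script can then be re-run unchanged against each framework, which is exactly what makes this kind of s/epoch comparison cheap to do.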


Time series classification with Tensorflow

@machinelearnbot

A similar situation arises in image classification, where manually engineered features (obtained by applying a number of filters) can be used in classification algorithms. I will compare the performance of typical machine learning algorithms that use engineered features with two deep learning methods (convolutional and recurrent neural networks) and show that deep learning can surpass the performance of the former. The rest of the implementation is pretty typical and involves feeding the graph with batches of training data and evaluating performance on a validation set. The LSTM part is also pretty standard, involving construction of layers (including dropout for regularization) and then an initial state.
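The batch-feeding step can be sketched independently of any framework. Here is a minimal pure-Python mini-batch generator; the name `make_batches` and the toy shapes are illustrative, not from the article:

```python
import random

def make_batches(X, y, batch_size, shuffle=True, seed=0):
    """Yield (X_batch, y_batch) mini-batches for feeding a training graph."""
    idx = list(range(len(X)))
    if shuffle:
        random.Random(seed).shuffle(idx)   # shuffle once per epoch
    for start in range(0, len(idx), batch_size):
        sel = idx[start:start + batch_size]
        yield [X[i] for i in sel], [y[i] for i in sel]

# Toy data: 100 windows of 9 engineered features each, with integer labels.
X = [[0.0] * 9 for _ in range(100)]
y = [0] * 100
batches = list(make_batches(X, y, batch_size=32))   # 4 batches: 32+32+32+4
```

In TensorFlow each yielded pair would be passed to `session.run` through a feed dict; the generator itself is framework-agnostic.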


PyTorch or TensorFlow?

@machinelearnbot

PyTorch is essentially a GPU-enabled drop-in replacement for NumPy, equipped with higher-level functionality for building and training deep neural networks. In PyTorch the graph construction is dynamic, meaning the graph is built at run time. TensorFlow does have dynamic_rnn for the more common constructs, but creating custom dynamic computations is more difficult. I haven't found the tools for data loading in TensorFlow (readers, queues, queue runners, etc.) especially useful.
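To make the static-versus-dynamic distinction concrete: the loop count below is only known at run time, which a dynamic-graph framework like PyTorch handles with an ordinary Python loop, while static-graph TensorFlow delegates it to special ops like dynamic_rnn. A pure-Python sketch; the tiny ReLU cell and its weights are illustrative:

```python
def run_rnn(sequence, w_x=0.5, w_h=0.3):
    """Run a single-unit recurrent cell over a sequence of arbitrary length."""
    h = 0.0
    for x in sequence:                      # iteration count = len(sequence): dynamic
        h = max(0.0, w_x * x + w_h * h)     # ReLU recurrent step (illustrative)
    return h

# The same code handles inputs of different lengths with no graph surgery.
h_short = run_rnn([1.0, 2.0])
h_long = run_rnn([1.0, 2.0, 3.0, 4.0])
```

In a static graph, the structure must be declared before any data flows through it, so data-dependent control flow like this needs dedicated constructs rather than plain loops.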


Is PyTorch Better Than TensorFlow?

#artificialintelligence

Is PyTorch better than TensorFlow for general use cases? TensorFlow is built around a concept of Static Computational Graph (SCG). A network written in PyTorch is a Dynamic Computational Graph (DCG). I like that there are many interesting ways to optimize different processes in TF, from parallel training with queues, to almost-built-in weight quantization [1].
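As a concrete taste of the weight-quantization idea mentioned above, here is a minimal sketch of linear (affine) 8-bit quantization in pure Python. This is not TensorFlow's actual implementation, and all names are illustrative:

```python
def quantize(weights, bits=8):
    """Map float weights onto integers in [0, 2**bits - 1]."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [lo + qi * scale for qi in q]

w = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, lo = quantize(w)
w_hat = dequantize(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))   # bounded by scale / 2
```

Storing 8-bit codes plus a per-tensor (scale, offset) pair cuts weight storage by roughly 4x versus float32, at the cost of a rounding error of at most half a quantization step.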


Using Deep Learning to Reconstruct High-Resolution Audio

#artificialintelligence

The training workflow outlined in the above figure uses the downsampled clips of the data preprocessing steps and batch-feeds them into the model (a deep neural network) to update its weights. The above figure shows two quantitative measures of performance on a test sample after 10 epochs of training. Higher SNR values represent clearer-sounding audio while lower LSD values indicate matching frequency content. The LSD value shows the neural network is attempting to restore the higher frequencies wherever appropriate.
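The SNR measure used above is simple to state: the ratio of signal power to reconstruction-error power, in decibels. A minimal sketch in pure Python (LSD additionally requires a short-time Fourier transform and is omitted; the toy signals are illustrative):

```python
import math

def snr_db(x, x_hat):
    """Signal-to-noise ratio in dB between reference x and reconstruction x_hat."""
    signal = sum(v * v for v in x)                        # signal power (unnormalized)
    noise = sum((a - b) ** 2 for a, b in zip(x, x_hat))   # error power
    return 10.0 * math.log10(signal / noise)

x = [1.0, -1.0, 1.0, -1.0]
x_hat = [0.9, -0.9, 0.9, -0.9]   # uniform 10% amplitude error -> 20 dB
```

A 10% amplitude error everywhere gives a power ratio of 100, i.e. 20 dB; higher values mean the reconstruction is closer to the reference clip.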


Building a Real-Time Object Recognition App with Tensorflow and OpenCV

@machinelearnbot

In this article, I will walk through the steps to easily build your own real-time object recognition application with Tensorflow's (TF) new Object Detection API and OpenCV in Python 3 (specifically 3.5). Google has just released its new TensorFlow Object Detection API. I wanted to get my hands on this cool new stuff and had some time to build a simple real-time object recognition demo. And definitely have a look at the Tensorflow Object Detection API.
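The demo's main loop reduces to grab-frame, run-detector, filter-by-confidence. A structural sketch with the camera and the detection graph stubbed out so it runs anywhere; in the real app the stubs would be cv2.VideoCapture(0) and a session over the TF Object Detection API graph:

```python
def detect(frame):
    # Stand-in for running the detection graph on one frame;
    # returns (label, score) pairs. Output here is hard-coded for illustration.
    return [("person", 0.92), ("chair", 0.31)]

def run_app(frames, threshold=0.5):
    """Process a stream of frames, keeping only confident detections."""
    results = []
    for frame in frames:   # with OpenCV: while cap.isOpened(): ret, frame = cap.read()
        detections = detect(frame)
        results.append([d for d in detections if d[1] > threshold])
    return results

frames = [object(), object(), object()]   # stand-ins for three camera frames
out = run_app(frames)
```

In the real application each kept detection also carries a bounding box, which is drawn onto the frame with OpenCV before display.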