Introduction to PyTorch

#artificialintelligence

Recently, Microsoft and PyTorch announced a "PyTorch Fundamentals" tutorial, which you can find on Microsoft's site and on PyTorch's site. The code in this post is based on the code appearing in that tutorial, and forms the foundation for a series of other posts, where I'll explore other machine learning frameworks and show integration with Azure ML. In this post, I'll explain how you can create a basic neural network in PyTorch, using the Fashion MNIST dataset as a data source. The neural network we'll build takes as input images of clothing, and classifies them according to their contents, such as "Shirt," "Coat," or "Dress." I'll assume that you have a basic conceptual understanding of neural networks, and that you're comfortable with Python, but I assume no knowledge of PyTorch. Let's start by getting familiar with the data we'll be using, the Fashion MNIST dataset.
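As a rough sketch of where the post is headed, a small fully-connected classifier for the 28x28 Fashion MNIST images might look like the following. The layer sizes, learning rate, and one-batch training loop are my own illustrative choices, not necessarily the tutorial's exact code.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Download Fashion MNIST and wrap it in a DataLoader.
training_data = datasets.FashionMNIST(
    root="data", train=True, download=True, transform=transforms.ToTensor()
)
train_loader = DataLoader(training_data, batch_size=64, shuffle=True)

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.layers = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 10),  # 10 clothing classes, e.g. "Shirt", "Coat", "Dress"
        )

    def forward(self, x):
        return self.layers(self.flatten(x))

model = NeuralNetwork()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One training step, just to show the shape of a PyTorch training loop.
for X, y in train_loader:
    pred = model(X)
    loss = loss_fn(pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    break
```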


Neural Networks for Beginners. A fast implementation in Matlab, Torch, TensorFlow

arXiv.org Machine Learning

This report provides an introduction to some Machine Learning tools within the most common development environments. It mainly focuses on practical problems, skipping any theoretical introduction. It is aimed both at students approaching Machine Learning for the first time and at experts looking for new frameworks.


Recurrent neural networks and LSTM tutorial in Python and TensorFlow - Adventures in Machine Learning

@machinelearnbot

In the deep learning journey so far on this website, I've introduced dense neural networks and convolutional neural networks (CNNs), and shown how to perform classification tasks on static images. We've seen good results, especially with CNNs. However, what happens if we want to analyze dynamic data? There are ways to do some of this using CNNs, but the most popular method of performing classification and other analysis on sequences of data is recurrent neural networks. This tutorial is a comprehensive introduction to recurrent neural networks and a subset of such networks: long short-term memory networks (or LSTM networks). I'll also show you how to implement such networks in TensorFlow, including the data preparation step. It's going to be a long one, so settle in and enjoy these pivotal networks in deep learning; by the end of this post, you'll have a solid understanding of recurrent neural networks and LSTMs.
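To give a flavor of what such a network looks like in code, here is a minimal LSTM classifier sketch using TensorFlow's Keras API. The vocabulary size, sequence length, layer widths, and the toy random data are illustrative assumptions, not the tutorial's actual data-preparation and language-modeling setup.

```python
import numpy as np
import tensorflow as tf

vocab_size, seq_len = 10000, 50

# Embed integer word IDs, run them through an LSTM layer, and predict a word.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy "data preparation": random integer sequences standing in for word IDs.
x = np.random.randint(0, vocab_size, size=(32, seq_len))
y = np.random.randint(0, vocab_size, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```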


Chainer: A Deep Learning Framework for Accelerating the Research Cycle

arXiv.org Machine Learning

Software frameworks for neural networks play a key role in the development and application of deep learning methods. In this paper, we introduce the Chainer framework, which is intended to provide a flexible, intuitive, and high-performance means of implementing the full range of deep learning models needed by researchers and practitioners. Chainer provides acceleration on Graphics Processing Units with a familiar NumPy-like API through CuPy, supports general and dynamic models in Python through Define-by-Run, and also provides add-on packages for state-of-the-art computer vision models as well as distributed training.
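As a minimal sketch of the Define-by-Run style, the forward pass of a Chainer model is ordinary Python, so the computation graph is built as the code runs. The layer sizes and dummy input below are illustrative assumptions, not code from the paper.

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, 100)  # input size inferred at first call
            self.l2 = L.Linear(100, 10)

    def __call__(self, x):
        # Plain Python control flow: the graph is defined by running this code.
        h = F.relu(self.l1(x))
        return self.l2(h)

model = MLP()
x = np.random.rand(8, 784).astype(np.float32)  # NumPy arrays go straight in
y = model(x)
# With CuPy installed, model.to_gpu() runs the same code on the GPU.
```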


Machine Learning Crash Course | Google Developers

@machinelearnbot

Layers are Python functions that take Tensors and configuration options as input and produce other Tensors as output. Once the necessary Tensors have been composed, the user can convert the result into an Estimator via a model function.
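A minimal sketch of that flow, using the TensorFlow 1.x-style layers and tf.estimator APIs; the feature key, layer sizes, and loss here are illustrative assumptions, and only the training mode is handled.

```python
import tensorflow.compat.v1 as tf

def model_fn(features, labels, mode):
    # Layers are functions: Tensors (plus configuration options) in, Tensors out.
    hidden = tf.layers.dense(features["x"], units=64, activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, units=10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    # The composed Tensors are returned as an EstimatorSpec.
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

# The model function is converted into an Estimator.
estimator = tf.estimator.Estimator(model_fn=model_fn)
```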