In a recent survey, AI Adoption in the Enterprise, which drew more than 1,300 respondents, we found significant usage of several machine learning (ML) libraries and frameworks. About half indicated they used TensorFlow or scikit-learn, and a third reported they were using PyTorch or Keras. I recently attended an interesting RISELab presentation delivered by Caroline Lemieux describing recent work on AutoPandas and automation tools that rely on program synthesis.
On Thursday the developers of PyTorch announced PyTorch Mobile, which they say will allow for an "end-to-end workflow from Python to deployment on iOS and Android." PyTorch Mobile is part of PyTorch 1.3, which is currently an "experimental release" that the organization will be "building on over the next couple of months." PyTorch 1.2 was released in August. Upcoming features include preprocessing and integration APIs, support for ARM CPUs and QNNPACK (a quantized neural network package designed for PyTorch), build-level optimization, and performance enhancements for mobile CPUs/GPUs. Android builds will use the Maven plug-in, and iOS builds will use CocoaPods with Swift.
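The Python half of that workflow starts with converting a model to TorchScript and serializing it so the mobile runtime can load it. Here is a minimal sketch using a small hypothetical module (`TinyNet` and the file name are illustrative, not from the announcement):

```python
import torch

# Minimal sketch of the Python side of the PyTorch Mobile workflow:
# convert a model to TorchScript and serialize it, so the saved file
# can later be loaded by the iOS/Android runtime.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example = torch.rand(1, 4)

# torch.jit.trace records the operations executed on the example input
# and produces a TorchScript module.
scripted = torch.jit.trace(model, example)
scripted.save("tiny_net.pt")  # artifact consumed on the mobile side
```

In a real deployment you would trace your production model the same way; the saved `.pt` file is what the Android (Maven) or iOS (CocoaPods) runtime loads.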
First, we print out the PyTorch version we are using. Then we define a PyTorch tensor and initialize it with the random functionality, which draws random numbers between 0 and 1. We multiply by 100 so that the values lie between 0 and 100, and cast the result to an Int PyTorch tensor just so the numbers are cleaner to look at. When we print it, we can see that we have a PyTorch IntTensor of size 2x3x4.
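The steps above can be sketched as follows (the variable name is ours):

```python
import torch

print(torch.__version__)

# Create a 2x3x4 tensor of uniform random values in [0, 1),
# scale to [0, 100), then cast to an integer tensor so the
# printed values are easier to read.
random_tensor = (torch.rand(2, 3, 4) * 100).int()

print(random_tensor)
print(random_tensor.type())  # torch.IntTensor
print(random_tensor.size())  # torch.Size([2, 3, 4])
```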
PyTorch is a popular open-source deep learning framework for creating and training models. Last fall, as part of our dedication to open source AI, we made PyTorch one of the primary, fully supported training frameworks on Azure. PyTorch is supported across many of our AI platform services, and our developers participate in the PyTorch community, contributing key improvements to the code base. Today we would like to share the many ways you can use PyTorch 1.2 on Azure and highlight some of the contributions we've made to help customers take their PyTorch models from training to production. Getting started with PyTorch on Azure is easy and a great way to train and deploy your PyTorch models. We've integrated PyTorch 1.2 into a range of Azure services so you can use the latest features.
This video will show you how to infer dimensions while reshaping a PyTorch tensor by using the PyTorch view operation. First we print the PyTorch version we are using. Next, we create a PyTorch tensor for our example. We see that it's a PyTorch FloatTensor of size 2x3x6 containing all the numbers from 1 to 36, inclusive, and we're going to reshape this tensor in a variety of ways while inferring one of the dimensions. For the first reshape-with-inferred-dimension example, we retain the rank of the tensor, which is 3, but change it from 2x3x6 to 2x9 by an unknown third dimension that PyTorch infers for us.
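In code, passing -1 to `view` tells PyTorch to infer that dimension from the total number of elements (36 / (2 × 9) = 2):

```python
import torch

print(torch.__version__)

# A rank-3 FloatTensor holding the numbers 1 to 36, shaped 2x3x6.
x = torch.arange(1., 37.).view(2, 3, 6)
print(x.type())  # torch.FloatTensor
print(x.size())  # torch.Size([2, 3, 6])

# Reshape to 2x9x?, letting PyTorch infer the last dimension via -1.
y = x.view(2, 9, -1)
print(y.size())  # torch.Size([2, 9, 2])
```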