Amazon Web Services & MXNet


This repo contains an incremental sequence of notebooks designed to teach deep learning, Apache MXNet (incubating), and the gluon interface. Our goal is to leverage the strengths of Jupyter notebooks to present prose, graphics, equations, and code together in one place. If we're successful, the result will be a resource that could serve simultaneously as a book, course material, a prop for live tutorials, and a source of useful code to plagiarise (with our blessing). To our knowledge, there's no resource out there that (1) teaches the full breadth of concepts in modern deep learning while (2) interleaving an engaging textbook with runnable code. We'll find out by the end of this venture whether or not that void exists for a good reason.

GPU-Accelerated Amazon Web Services


Developers, data scientists, and researchers are solving today's complex challenges with breakthroughs in artificial intelligence, deep learning, and high performance computing (HPC). NVIDIA is working with Amazon Web Services to offer the newest and most powerful GPU-accelerated cloud service based on the latest NVIDIA Volta architecture: Amazon EC2 P3 instances. Using up to eight NVIDIA Tesla V100 GPUs, you will be able to train your neural networks on massive data sets using any of the major deep learning frameworks faster than ever before.

New – AWS Deep Learning Containers Amazon Web Services


We want to make it as easy as possible for you to learn about deep learning and to put it to use in your applications. If you know how to ingest large datasets, train existing models, build new models, and perform inference, you'll be well-equipped for the future!

New Deep Learning Containers


Today I would like to tell you about the new AWS Deep Learning Containers. These Docker images are ready to use for deep learning training or inference using TensorFlow or Apache MXNet, with other frameworks to follow. We built these containers after our customers told us that they are using Amazon ECS and EKS to deploy their TensorFlow workloads to the cloud, and asked us to make that task as simple and straightforward as possible.

2016 might seem like the year of AI, but we could be getting ahead of ourselves


Unsupervised learning, by contrast with supervised learning, is much harder. It is best thought of as a continuum between (a) the entire system being one gigantic, autonomous, self-learning machine and (b) solving certain problems within a much larger system that also involves humans and supervised learning techniques. For many enterprise solutions, we are very close to (b). For personal assistants like Siri, we are a little closer to (a), but even in such applications true autonomous AI is still quite far away. Imagine the amount of human intervention that has to happen on the back-end, and how many special cases must be handled by editors or trainers teaching the system.