Amazon Web Services & MXNet

VideoLectures.NET

This repo contains an incremental sequence of notebooks designed to teach deep learning, Apache MXNet (incubating), and the gluon interface. Our goal is to leverage the strengths of Jupyter notebooks to present prose, graphics, equations, and code together in one place. If we're successful, the result will be a resource that could serve simultaneously as a book, course material, a prop for live tutorials, and a source of useful code to plagiarise (with our blessing). To our knowledge, there's no source out there that either (1) teaches the full breadth of concepts in modern deep learning or (2) interleaves an engaging textbook with runnable code. We'll find out by the end of this venture whether or not that void exists for a good reason.
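
To give a flavor of the gluon interface the notebooks teach, here is a minimal, illustrative sketch of defining and running a small network; the layer sizes and batch shape are arbitrary choices for the example, not anything prescribed by the repo.

```python
# Minimal sketch of the gluon interface: build, initialize, and run
# a tiny feed-forward network. Sizes are arbitrary for illustration.
import mxnet as mx
from mxnet import nd
from mxnet.gluon import nn

net = nn.Sequential()
with net.name_scope():
    net.add(nn.Dense(64, activation='relu'))  # hidden layer
    net.add(nn.Dense(10))                     # output layer (e.g., 10 classes)

net.initialize(mx.init.Xavier())              # initialize parameters

x = nd.random.normal(shape=(4, 20))           # dummy batch of 4 examples
y = net(x)                                    # forward pass; shapes are inferred lazily
print(y.shape)                                # (4, 10)
```
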


GPU-Accelerated Amazon Web Services

#artificialintelligence

Developers, data scientists, and researchers are solving today's complex challenges with breakthroughs in artificial intelligence, deep learning, and high-performance computing (HPC). NVIDIA is working with Amazon Web Services to offer the newest and most powerful GPU-accelerated cloud service based on the latest NVIDIA Volta architecture: Amazon EC2 P3 instances. With up to eight NVIDIA Tesla V100 GPUs per instance, you will be able to train your neural networks on massive data sets using any of the major deep learning frameworks faster than ever before.
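
As a hedged sketch of what "training on the GPUs" looks like in practice, the snippet below shows one way to point MXNet at a GPU device (such as a V100 on a P3 instance) and fall back to the CPU when none is available; the matrix sizes are illustrative, and the availability check assumes an MXNet version that provides `mx.context.num_gpus()`.

```python
# Sketch: select a GPU context if one is available, otherwise use the CPU,
# then run a small computation on the chosen device.
import mxnet as mx
from mxnet import nd

ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()

# Allocate tensors directly on the chosen device and multiply them there.
a = nd.random.normal(shape=(1024, 1024), ctx=ctx)
b = nd.random.normal(shape=(1024, 1024), ctx=ctx)
c = nd.dot(a, b)
c.wait_to_read()                 # force the asynchronous computation to finish
print('computed on', c.context)  # e.g. gpu(0) on a P3 instance
```

The same pattern extends to training: parameters are initialized with `net.initialize(ctx=ctx)` and each data batch is copied to `ctx` before the forward and backward passes.
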




2016 might seem like the year of AI, but we could be getting ahead of ourselves

#artificialintelligence

Unsupervised learning, by contrast, is much harder. It is best thought of as a continuum between (a) the entire system being one gigantic, autonomous, self-learning machine and (b) solving certain problems within a much larger system that also involves humans and supervised learning techniques. For many enterprise solutions we are very close to (b). For personal assistants like Siri, we are a little closer to (a), but even in such applications truly autonomous AI is still quite far away. Imagine the amount of human intervention that needs to happen on the back end, or how many special cases must be handled by editors or trainers in teaching the system.