"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
I have a pretty awesome backlog of blog posts from Udacity Self-Driving Car students, partly because they're doing awesome things and partly because I fell behind on reviewing them for a bit. Here are five that look pretty neat.

This is a great blog post if you're looking to get started with point cloud files.

The most popular laptop among Silicon Valley software developers is the MacBook Pro. The current version of the MacBook Pro, however, does not include an NVIDIA GPU, which restricts its ability to use CUDA and cuDNN, NVIDIA's tools for accelerating deep learning.
Welcome back to Mind Over Money. I'm Kevin Cook, your field guide and storyteller for the fascinating arena of Behavioral Economics. Since I am an investor in an exciting technology company you may have heard of called NVIDIA (NVDA), I often find myself in the position of having to explain to my followers and fellow investors "what exactly is AI" in a practical, right-now sense, and not some science fiction sense. NVDA's type of computer chip, the GPU, is at the heart of modern AI R&D, and the company sells a lot of them not just for advanced gaming graphics but also to industry for applications in autonomous driving, where Tesla (TSLA), Toyota and Mercedes are customers. NVDA also has a bigger business selling its processors to big cloud companies like Amazon, Google (GOOGL), Microsoft, IBM (IBM) and Alibaba (BABA).
Intel CEO Brian Krzanich speaks at a 2016 AI event. Intel might be an old-school computing company, but the chipmaker thinks the latest trends in artificial intelligence will keep it an important part of your high-tech life. The AI technology called machine learning is today instrumental in taking good photos, translating languages, recognizing your friends on Facebook, delivering search results, screening out spam and many other chores. It usually uses an approach called neural networks, which works something like a human brain, not a sequence of if-this-then-that steps as in traditional computing. Lots of companies, including Apple, Google, Qualcomm and Nvidia, are designing chips to accelerate this sort of work.
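The contrast between hand-written rules and learned weights can be made concrete with a toy sketch. Everything below (the spam features, weights, and threshold) is an illustrative assumption, not any real product's logic:

```python
import numpy as np

# Traditional computing: explicit if-this-then-that rules,
# written by hand.
def is_spam_rules(num_links, has_keyword):
    return num_links > 5 or has_keyword

# Neural-network style: a weighted sum whose weights would be
# *learned* from labeled examples rather than hand-coded.
# The values here are made up for illustration.
def is_spam_net(features, weights, bias):
    score = np.dot(features, weights) + bias   # weighted sum of inputs
    return 1.0 / (1.0 + np.exp(-score)) > 0.5  # sigmoid, then threshold

features = np.array([6.0, 1.0])   # [num_links, has_keyword]
weights = np.array([0.8, 2.0])    # in practice, set by training
print(is_spam_rules(6, True))                          # True
print(bool(is_spam_net(features, weights, bias=-3.0))) # True
```

The behavior is the same on this input; the difference is where the decision boundary comes from: a programmer in the first case, training data in the second.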
Leading stock photo company Shutterstock unveiled a new deep learning-based tool that lets users search photos by their composition. "Built on our next generation visual similarity model, this tool helps you find the exact image you need by placing keywords on a canvas and moving them around where you want subject matter to appear in the image," said Kevin Lester, VP of Engineering at Shutterstock, in a related blog post. "The patent-pending spatially aware technology will find strong matches based not only on your search terms, but also on the placement of your search terms." Using TITAN X GPUs and the cuDNN-accelerated Torch deep learning framework, the researchers trained their visual model on an internal image dataset, and a language model to match a textual query to the embedding of a corresponding image. Once trained, they leverage Tesla GPUs on the Amazon cloud to give users total control over the image composition on any project, such as being able to use search terms like "wine" and "cheese" and drag them around so that photos of "wine" are on the left and "cheese" on the right.
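Shutterstock has not published its model, but the described behavior, matching a term's placement on the canvas against where subject matter appears in an image, can be sketched conceptually. The grid layout, embeddings, and scoring function below are all hypothetical illustrations, not the actual patent-pending system:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_image(cell_embeddings, placed_terms):
    """Score one image against a spatial query.

    cell_embeddings: {(row, col): vector} -- one embedding per spatial
        cell of the image, as a visual model might produce.
    placed_terms: [(term_vector, (row, col))] -- term embeddings from a
        language model, each placed at a canvas position.
    """
    return sum(cosine(cell_embeddings[cell], vec)
               for vec, cell in placed_terms) / len(placed_terms)

# Toy random vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
wine_vec, cheese_vec = rng.normal(size=8), rng.normal(size=8)

# An image whose left cell looks like "wine" and right cell like "cheese".
image = {(0, 0): wine_vec + 0.1, (0, 1): cheese_vec - 0.1}

matched = score_image(image, [(wine_vec, (0, 0)), (cheese_vec, (0, 1))])
swapped = score_image(image, [(wine_vec, (0, 1)), (cheese_vec, (0, 0))])
print(matched > swapped)  # True: correct placement scores higher
```

The point of the sketch is only that a per-region embedding lets placement, not just keywords, influence the ranking.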
When the AI boom came a-knocking, Intel wasn't around to answer the call. Now, the company is attempting to reassert its authority in the silicon business by unveiling a new family of chips designed especially for artificial intelligence: the Intel Nervana Neural Network Processor family, or NNP for short. The NNP family is meant as a response to the needs of machine learning, and is destined for the data center, not your PC. Intel's CPUs may still be a stalwart of server stacks (by some estimates, it has a 96 percent market share in data centers), but the workloads of contemporary AI are much better served by the graphics processors, or GPUs, coming from firms like Nvidia and ARM. Consequently, demand for these companies' chips has skyrocketed.
Intel enlisted one of the most enthusiastic users of deep learning and artificial intelligence to help out with the chip design. "We are thrilled to have Facebook in close collaboration sharing their technical insights as we bring this new generation of AI hardware to market," said Intel CEO Brian Krzanich in a statement. On top of social media, Intel is targeting healthcare, automotive and weather, among other applications. Unlike its PC chips, the Nervana NNP is an application-specific integrated circuit (ASIC) that's specially made for both training and executing deep learning algorithms. "The speed and computational efficiency of deep learning can be greatly advanced by ASICs that are customized for ... this workload," writes Intel's VP of AI, Naveen Rao.
Product design and development firm Cambridge Consultants developed a deep learning-based system that turns human sketches into paintings that resemble Van Gogh, Cézanne and Picasso. "What we've built would have been unthinkable to the original deep learning pioneers," said Monty Barlow, director of machine learning at Cambridge Consultants, in reference to their interactive system, called Vincent. "By successfully combining different machine learning approaches, such as adversarial training, perceptual loss, and end-to-end training of stacked networks, we've created something hugely interactive, taking the germ of a sketched idea and allowing the history of human art to run with it." With nearly 200 million trained parameters, Vincent is able to understand the important edges in paintings and uses this understanding to produce a complete picture.
Geoffrey Hinton has been called the "Godfather of Deep Learning". The issue lies with a prevalent technique in AI development called "back propagation", which relates directly to how AIs learn and store information. Since its conception, the back-propagation algorithm has become the "workhorse" of the majority of AI projects.
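What back propagation actually does can be shown in a few lines: run the network forward, then push the error gradient backward through each layer via the chain rule and nudge the weights. The network size, learning rate, and XOR task below are illustrative choices, not tied to any particular framework:

```python
import numpy as np

# Minimal back-propagation sketch: a one-hidden-layer network
# learning XOR. Sizes and hyperparameters are illustrative.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(10000):
    # Forward pass: keep intermediate activations for the backward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient layer by layer.
    dp = p - y                        # grad of cross-entropy loss w.r.t. output logits
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)      # chain rule through the hidden sigmoid
    dW1, db1 = X.T @ dh, dh.sum(0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print((p > 0.5).astype(int).ravel())  # typically converges to [0 1 1 0]
```

Storing every layer's activations so gradients can flow backward is exactly the "learning and storing information" the passage refers to.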
NVIDIA GPUs have been at the forefront of accelerated neural network processing and are the de facto standard for accelerated neural network research and development (R&D) and deep learning training. At the NVIDIA GPU Technology Conference (GTC) in Beijing, China earlier this week, the company maneuvered to also become the de facto standard for accelerated neural network inference deployment. At GTC Beijing, NVIDIA lined up the major Chinese cloud companies for AI computing (Alibaba Cloud, Baidu Cloud, and Tencent Cloud) and announced inference designs with Alibaba Cloud, Tencent, Baidu Cloud, JD.com, and iFlytek.
NVIDIA's meteoric growth in the datacenter, where its business is now generating some $1.6B annually, has been largely driven by the demand to train deep neural networks for Machine Learning (ML) and Artificial Intelligence (AI), an area where the computational requirements are simply mind-boggling. First, and perhaps most importantly, Huang announced new TensorRT 3 software that optimizes trained neural networks for inference processing on NVIDIA GPUs. In addition to announcing the Chinese deployment wins, Huang provided some pretty compelling benchmarks to demonstrate the company's prowess in accelerating Machine Learning inference operations, in the datacenter and at the edge. In addition to the TensorRT 3 deployments, Huang announced that the largest Chinese Cloud Service Providers, Alibaba, Baidu, and Tencent, are all offering the company's newest Tesla V100 GPUs to their customers for scientific and deep learning applications.
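One of the techniques inference optimizers in this space rely on is reducing numeric precision of trained weights. The sketch below illustrates the general idea of post-training INT8 quantization in plain NumPy; it is a conceptual illustration only, not NVIDIA's TensorRT API or its actual calibration algorithm:

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights onto int8 using a per-tensor scale.

    Illustrative symmetric quantization: real inference optimizers
    use more sophisticated, calibration-driven schemes.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float32 tensor for comparison.
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.dtype)    # int8: 4x smaller than float32, faster integer math
# Rounding error is bounded by half a quantization step.
print(float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-6)  # True
```

Shrinking weights to 8 bits cuts memory traffic and lets GPUs use fast integer arithmetic, which is one reason inference-focused software can post large speedups over naive FP32 execution.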