If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
All the shiny and zippy hardware in the world is meaningless without software, and that software can only go mainstream if it is easy to use. It took Linux two decades to get enterprise features and polish, and Windows Server took just as long. So did a raft of open source middleware applications for storing data and interfacing back-end databases and datastores with Web front ends. Now it is the turn of HPC and AI applications, and hopefully they won't take as long. As readers of The Next Platform know full well, HPC applications are not new.
"Deep Learning has had a huge impact on computer science, making it possible to explore new frontiers of research and to develop amazingly useful products that millions of people use every day." With innovation driving business success, the demand for community-based, open source software that incorporates AI and deep learning is taking hold in start-ups and enterprises alike. We've rounded up a few successful deep learning technologies that are making a big impact. TensorFlow is an open source software library that uses data flow graphs for numerical computation. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
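TensorFlow's own API aside, the dataflow-graph idea is easy to sketch in plain Python (a minimal illustration under our own names, not TensorFlow's implementation): nodes hold operations, edges carry the values flowing between them, and nothing computes until the graph is run.

```python
# Minimal dataflow-graph sketch (illustrative only, not TensorFlow's API):
# each node is an operation; its inputs are the graph edges feeding it.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # callable performing the math
        self.inputs = inputs  # upstream nodes (the incoming edges)

    def evaluate(self):
        # Recursively evaluate upstream nodes, then apply this node's op.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value):
    # A source node that simply emits a fixed value.
    return Node(lambda: value)

# Build the graph first (no computation happens yet) ...
a = constant([1.0, 2.0, 3.0])
b = constant([4.0, 5.0, 6.0])
add = Node(lambda x, y: [i + j for i, j in zip(x, y)], a, b)
total = Node(sum, add)

# ... then execute it, much as TensorFlow runs a graph in a session.
print(total.evaluate())  # 21.0
```

Separating graph construction from execution is what lets a framework like TensorFlow optimize the graph and distribute it across CPUs and GPUs before any numbers move.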
We're pleased to announce the next step towards deep learning for every device and platform. Today Vertex.AI is releasing PlaidML, our open source portable deep learning engine. Our mission is to make deep learning accessible to every person on every device, and we're building PlaidML to help make that a reality. We're starting by supporting the most popular hardware and software already in the hands of developers, researchers, and students. The initial version of PlaidML runs on most existing PC hardware with OpenCL-capable GPUs from NVIDIA, AMD, or Intel.
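As the PlaidML project documents, pointing Keras at the PlaidML engine is a two-line configuration step (this assumes the `plaidml-keras` package is installed via pip, and it must run before Keras itself is imported):

```python
# Route Keras onto the PlaidML backend (assumes `pip install plaidml-keras`).
# Must run before any `keras` import so the backend substitution takes effect.
import plaidml.keras
plaidml.keras.install_backend()

import keras  # Keras models now execute on OpenCL-capable GPUs via PlaidML
```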
IBM is doubling down on AI: releasing new software to help train machine-learning models and talking up the potential of its new Power9 systems to accelerate intelligent software. Today IBM unveiled new software that will make it easier to train machine-learning models to make decisions and extract insights from big data. The Deep Learning Impact software tools will help users develop AI models using popular open-source deep-learning frameworks, such as TensorFlow and Caffe, and will be added to IBM's Spectrum Conductor software from December. Alongside the software reveal, IBM has been talking up new systems built around its new Power9 processor, which are on display at this year's SC17 event. IBM says these systems are tailored towards AI workloads, thanks to their ability to rapidly shuttle data between Power9 CPUs and hardware accelerators, such as GPUs and FPGAs, commonly used in both training and running machine-learning models.
Amazon Web Services (AWS) has launched new P3 instances on its EC2 cloud computing service, powered by Nvidia Tesla V100 GPUs built on the Volta architecture, which promise to dramatically speed up the training of machine learning models. The P3 instances are designed to handle compute-intensive machine learning, deep learning, computational fluid dynamics, computational finance, seismic analysis, molecular modelling, and genomics workloads. Amazon said the new instances could reduce the training time for sophisticated deep learning models from days to hours. These are the first instances to include Nvidia Tesla V100 GPUs, and AWS said its P3 instances are "the most powerful GPU instances available in the cloud".
Artificial intelligence: everybody is talking about it, and the as-yet unrealized possibilities of the technology are fueling a renaissance in the hardware and software industry. Hardware and software companies -- including Intel, Nvidia, Google, IBM, Microsoft, Facebook, Qualcomm, ARM and many others -- are racing to build the next AI hardware platform or fighting to maintain their lead. AI, and in particular deep learning (a subfield of machine learning built on neural networks), is an inherently non-von Neumann process, and the prospect of a processor more closely tailored to the specific needs of neural networks is appealing. But I like to think before acting, especially before diving into a potentially very expensive hardware project. Should the AI industry build a specialized deep learning chip, and, if so, what should it look like?
NVIDIA's meteoric growth in the datacenter, where its business is now generating some $1.6B annually, has been largely driven by the demand to train deep neural networks for Machine Learning (ML) and Artificial Intelligence (AI) -- an area where the computational requirements are simply mind-boggling. First, and perhaps most importantly, NVIDIA CEO Jensen Huang announced new TensorRT 3 software that optimizes trained neural networks for inference processing on NVIDIA GPUs. In addition to announcing the Chinese deployment wins, Huang provided some pretty compelling benchmarks to demonstrate the company's prowess in accelerating Machine Learning inference operations, both in the datacenter and at the edge. Alongside the TensorRT 3 deployments, Huang announced that the largest Chinese cloud service providers -- Alibaba, Baidu, and Tencent -- are all offering the company's newest Tesla V100 GPUs to their customers for scientific and deep learning applications.
The story behind the story: a finely tuned generative adversarial network that sampled 8,000 great works of art -- a tiny sample size in the data-intensive world of deep learning -- and in just 14 hours of training on an NVIDIA DGX system created an application that takes human input and turns it into something stunning. Building on thousands of hours of research undertaken by Cambridge Consultants' AI research lab, the Digital Greenhouse, a team of five built the Vincent demo in just two months. After Huang's keynote, GTC attendees had the opportunity to pick up the stylus for themselves, selecting from one of seven different styles to sketch everything from portraits to landscapes to, of course, cats. While traditional deep learning algorithms have achieved stunning results by ingesting vast quantities of data, GANs get by on much smaller sample sizes by training one neural network to imitate the data it is fed, and another to try to spot the fakes.
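The adversarial setup described above can be sketched in a few dozen lines of NumPy (a toy one-dimensional example of our own, nothing like the Vincent system itself): a linear generator learns to shift random noise toward a target data distribution, while a logistic discriminator learns to tell real samples from fakes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(3, 0.5). Generator: g(z) = a*z + b with z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), trained to score real samples high.
a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator parameters
lr_d, lr_g = 0.1, 0.02

for step in range(3000):
    z = rng.standard_normal(64)
    real = 3.0 + 0.5 * rng.standard_normal(64)
    fake = a * z + b

    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr_d * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator ascent step: maximize log D(fake) (non-saturating GAN loss),
    # i.e. nudge the fakes toward regions the discriminator calls "real".
    d_fake = sigmoid(w * fake + c)
    grad = (1 - d_fake) * w          # d/d(fake) of log D(fake)
    a += lr_g * np.mean(grad * z)
    b += lr_g * np.mean(grad)

# The generator's output mean should now sit near the real data's mean of 3.
print(np.mean(a * rng.standard_normal(2000) + b))
```

The two players never see a labeled "style" target; the generator improves only because fooling the discriminator gets harder as the discriminator improves, which is why GANs can work from far smaller datasets than label-hungry supervised training.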
Just in time for the fall sports season, researchers are developing an AI-powered app that detects concussions right on the playing field. Working with a team of University of Washington researchers and clinicians, they are using GPU-accelerated deep learning to create an app that detects concussions and other traumatic brain injuries with nothing more than a smartphone camera and a 3D-printed box. The app, called PupilScreen, assesses the pupil's response to light almost as well as a pupilometer, an expensive machine found only in clinical settings. In a pilot study of 42 patients with and without traumatic brain injury, the app tracked pupil size almost as well as the pupilometer.
Nvidia has open sourced the design of one of its AI chips, a module built to power deep learning. The chip module, known as the Deep Learning Accelerator (DLA), is used in autonomous vehicles and associated technologies. By releasing the design to open source, Nvidia hopes other AI chip makers will adopt it, and with other manufacturers using its chip design technology, Nvidia plans to augment sales of its other hardware and software.