If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Moore's Law says that the number of transistors per square inch doubles approximately every 18 months. This article will show how many new technologies are giving us a Virtual Moore's Law, under which computer performance should continue to at least double every 18 months for the foreseeable future. This Virtual Moore's Law is propelling us toward the Singularity, the point at which the invention of artificial superintelligence abruptly triggers runaway technological growth, resulting in unfathomable changes to human civilization. In the first of my "proof" articles two years ago, I described how it has become harder to miniaturize transistors, causing computing to go vertical instead. At that time, Samsung was mass producing 24-layer 3D NAND chips and had announced 32-layer chips. As I write this, Samsung is mass producing 48-layer 3D NAND chips, with 64-layer chips rumored to appear within a month or so.
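To make the compounding concrete, here is a small back-of-envelope sketch (my own illustration, not from the article) that projects relative performance under the assumed 18-month doubling period; the baseline and time horizons are arbitrary placeholders.

```python
# Minimal sketch: compound the article's claim that performance
# at least doubles every 18 months. Values are illustrative only.

def projected_performance(years, baseline=1.0, doubling_period_years=1.5):
    """Relative performance after `years`, doubling every `doubling_period_years`."""
    return baseline * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for years in (1.5, 3, 6, 10):
        print(f"{years:>4} years -> {projected_performance(years):6.1f}x baseline")
```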
Born of research in the Amazon forest, the Plantix mobile app is helping farmers on three continents quickly identify plant diseases using artificial intelligence. For several years in the Brazilian rain forest, a team of young German researchers studied the emission and mitigation of greenhouse gases due to changing land use. The team's analysis was yielding new knowledge, but the farmers they worked with weren't interested in those findings. They wanted to know how to treat crops being ravaged by pathogens. "They couldn't understand why we can estimate the carbon stock of their soil, but we couldn't give them an idea of how to treat damaged plants in an appropriate way," said Robert Strey, one of the researchers.
Just over five years ago, IBM's Watson supercomputer crushed its opponents on the televised quiz show Jeopardy!, using advanced algorithms and natural language interfaces to find and narrate answers. It was hard to foresee then, but artificial intelligence now permeates our daily lives. Since then, IBM has expanded the Watson brand into a cognitive computing package of hardware and software used to diagnose diseases, explore for oil and gas, run scientific computing models, and help cars drive autonomously, and the company has now announced new AI hardware and software packages.
Bert Loomis was a visionary. This general session will highlight how Bert Loomis and people like him inspire us to build great things with small inventions. In their general session at the 19th Cloud Expo, Harold Hannon, Architect at IBM Bluemix, and Michael O'Neill, Strategic Business Development at NVIDIA, will discuss the accelerating pace of AI development and how IBM Cloud and NVIDIA are partnering to bring AI capabilities to everyday, on-demand use. They will also review two "free infrastructure" programs available to startups and innovators.

Speaker Bios: Harold Hannon has worked in the field of software development as both an architect and developer for more than 15 years, with a focus on workflow, integration, and distributed systems.
From ISC 2016 in Frankfurt, Germany, this week, Intel Corp. launched the second-generation Xeon Phi product family, formerly code-named Knights Landing, aimed at HPC and machine learning workloads. "We're not just a specialized programming model," said Barry Davis, Intel's General Manager of HPC Compute and Networking, in a hands-on technical demo held at ISC. "Knights Landing" also puts integrated on-package memory in the processor, which benefits memory bandwidth and overall application performance. Meanwhile, NVIDIA's Pascal P100 GPU for NVLink-optimized servers offers 5.3 teraflops of double-precision floating point performance, while the PCIe version delivers 4.7 teraflops of double precision.
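As a rough illustration of what those peak double-precision rates mean in practice, here is a back-of-envelope sketch (my own, not from the coverage) estimating how long an ideal dense matrix multiply would take at the quoted 5.3 and 4.7 teraflops; it ignores memory bandwidth and achievable efficiency, so treat the results as best-case times.

```python
# Back-of-envelope sketch (illustrative only): time for an N x N
# double-precision matrix multiply at a given peak rate, assuming
# ~2*N^3 floating-point operations and perfect efficiency.

def matmul_seconds(n, peak_tflops):
    flops = 2 * n ** 3                    # multiply-adds for a dense N x N GEMM
    return flops / (peak_tflops * 1e12)   # peak rate in FLOP/s

if __name__ == "__main__":
    for label, tflops in (("NVLink P100, 5.3 TF", 5.3), ("PCIe P100, 4.7 TF", 4.7)):
        print(f"{label}: N=16384 GEMM ~ {matmul_seconds(16384, tflops):.2f} s at peak")
```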
The three tools -- NVIDIA DIGITS 4, the CUDA Deep Neural Network Library (cuDNN) 5.1, and the new GPU Inference Engine (GIE) -- make it even easier to create deep learning solutions on the NVIDIA platform. NVIDIA DIGITS 4 introduces a new object detection workflow, enabling data scientists to train deep neural networks to find faces, pedestrians, traffic signs, vehicles, and other objects in a sea of images. The GPU Inference Engine is a high-performance deep learning inference solution for production environments; with it, automotive manufacturers and embedded solutions providers can deploy powerful neural network models with high performance on their low-power platforms.
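To give a feel for the kind of object-detection inference such a workflow produces, here is a minimal stand-in sketch using PyTorch and torchvision rather than the DIGITS or GIE APIs themselves; the pretrained detector, dummy image, and 0.5 score threshold are all illustrative assumptions.

```python
# Stand-in sketch (NOT the DIGITS or GIE APIs): run a pretrained object
# detector on a single image and keep the confident detections.
import torch
import torchvision

# Pretrained Faster R-CNN detector; torchvision downloads the COCO weights.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Dummy RGB image tensor standing in for a real photo: (C, H, W) in [0, 1].
image = torch.rand(3, 480, 640)

with torch.no_grad():
    predictions = model([image])[0]       # dict of boxes, labels, scores

# Keep only confident detections (threshold chosen arbitrarily here).
keep = predictions["scores"] > 0.5
print(predictions["boxes"][keep], predictions["labels"][keep])
```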
The GPU was originally used to generate high-resolution computer images at high speed, and that same computational efficiency makes it ideal for executing deep learning algorithms. This section lists the main components of your deep learning box. NVIDIA DIGITS is a user-friendly platform that lets you train prediction models using deep learning techniques. If you're new to deep learning, you can also test the techniques in the cloud first, using Google's Cloud Machine Learning platform.
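If you are assembling such a box, a quick sanity check is to confirm that the deep learning framework actually sees the GPU and can run a training step on it. The sketch below uses PyTorch as a stand-in framework with a random batch, purely for illustration.

```python
# Minimal sketch: verify the box's GPU is visible and run one training step.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("training on:", device)

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in batch; a real box would feed images from disk instead.
x = torch.randn(64, 784, device=device)
y = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print("one step done, loss:", loss.item())
```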
As GPU maker Nvidia's CEO stressed at this year's GPU Technology Conference, deep learning is a target market, fed in part by a new range of GPUs for training and executing deep neural networks: the Tesla M40 and M4, the existing supercomputing-focused K80, and now the P100, Nvidia's latest Pascal processor, which is at the heart of a new appliance designed specifically for deep learning workloads. While cloud rivals such as Amazon Web Services sport GPU cards for high performance computing (HPC) and deep learning users, the partnership between Nvidia and IBM gives Big Blue a leg up in making a wider array of GPUs available to suit different workloads. Today that suite of GPU options was enriched with the addition of the virtualization-ready Nvidia M60 cards, which can support a wider range of workloads, from HPC applications to machine learning to virtual services and gaming platforms. As our own Timothy Prickett Morgan noted earlier this year, Nvidia currently identifies six cloud providers that offer cloud-based or hosted GPU capacity.
Microsoft has been using a type of programmable chip called a Field Programmable Gate Array (FPGA) to improve its hardware for machine learning, which typically requires a large amount of computing power. Last year, Google released TensorFlow, the software engine that powers its machine learning systems, free to the public via an open-source license. But while Google's TPU chip is helping improve its machine learning tools, the company likely isn't in a position to abandon GPUs and processors made by other companies entirely, Patrick Moorhead, an analyst at Moor Insights & Strategy, told PCWorld. Google began using the TPU last April to help its Street View software better process images, Jouppi told the Journal, cutting the processing time for all of its images to just five days.
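For readers who have not seen the open-sourced engine in use, here is a minimal, self-contained TensorFlow/Keras example trained on random data; the model shape and the data are placeholders and have nothing to do with Google's internal workloads.

```python
# Minimal illustration of the open-sourced TensorFlow engine in use
# (random data, purely for shape; not a real workload).
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 2, size=(256,))
model.fit(x, y, epochs=2, batch_size=32, verbose=1)
```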
With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. We also extended our VR platform by adding new features to our VRWorks software development kit that help provide an even greater sense of presence in VR. The P100 uses a combination of technologies, including NVLink, our high-speed interconnect that allows deep learning application performance to scale across multiple GPUs, along with high memory bandwidth and multiple hardware features designed to natively accelerate AI applications. Universities, hyperscale vendors, and large enterprises developing AI-based applications are showing strong interest in the system.
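The scaling Huang describes is what multi-GPU data-parallel training relies on. The sketch below uses PyTorch's DataParallel as a simple stand-in to show the application-level shape of that scaling; the interconnect itself is handled by the hardware and driver, not by application code, and the model and batch here are arbitrary.

```python
# Rough sketch of multi-GPU data-parallel work in PyTorch; DataParallel is
# used only as a simple illustration of splitting a batch across GPUs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.device_count() > 1:
    # Replicates the model on each visible GPU and splits each batch across them.
    model = nn.DataParallel(model)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

x = torch.randn(256, 512, device=device)
out = model(x)                      # work is sharded across GPUs when available
print(out.shape, "devices used:", max(torch.cuda.device_count(), 1))
```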