If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Providing hyperscale data centers with a fast, flexible path for AI, the new HGX-1 hyperscale GPU accelerator is an open-source design released in conjunction with Microsoft's Project Olympus. It will enable cloud-service providers to easily adopt NVIDIA GPUs to meet surging demand for AI computing. NVIDIA is also joining the Open Compute Project to help drive AI and innovation in the data center. Certain statements in this press release, including, but not limited to, statements as to the performance, impact and benefits of the HGX-1 hyperscale GPU accelerator and NVIDIA joining the Open Compute Project, are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations.
Both companies introduced new open-source hardware designs to deliver faster responses for artificial intelligence services, and the designs will allow the companies to offer more services via their networks and software. Big Basin delivers on the promise of decoupling processing, storage, and networking units in data centers. Facebook's Big Basin system has eight NVIDIA Tesla P100 GPU accelerators, connected in a mesh architecture via the high-speed NVLink interconnect. Microsoft's new server design has a universal motherboard slot that will support the latest server chips, including Intel's Skylake and AMD's Naples.
It's a beefier successor to Big Sur, the first-generation Facebook AI server unveiled last July. "With Big Basin, we can train machine learning models that are 30 percent larger because of the availability of greater arithmetic throughput and a memory increase from 12 GB to 16 GB," said Kevin Lee, a Technical Program Manager at Facebook. With this hardware, Facebook can train its machine learning systems to recognize speech, understand the content of video and images, and translate content from one language to another. Facebook has been designing its own hardware for many years, and in preparing to upgrade Big Sur, the Facebook engineering team gathered feedback from colleagues in Applied Machine Learning (AML), Facebook AI Research (FAIR), and infrastructure teams.
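The quoted 30 percent figure lines up with simple arithmetic on the memory jump from 12 GB to 16 GB per GPU (a rough sketch only; as Lee notes, the gain also depends on the greater arithmetic throughput):

```python
# Quick sanity check on the quoted figure, assuming model size scales
# roughly linearly with available GPU memory (a simplification).
big_sur_mem_gb = 12    # per-GPU memory cited for Big Sur
big_basin_mem_gb = 16  # per-GPU memory cited for Big Basin

growth = (big_basin_mem_gb - big_sur_mem_gb) / big_sur_mem_gb
print(f"Memory headroom for larger models: ~{growth:.0%}")  # ~33%
```

The ~33% memory headroom is consistent with, and slightly above, the "30 percent larger" models Facebook reports in practice.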
The GO brand describes Intel's compute platform for autonomous driving capabilities, designed to be paired with the company's connectivity solutions. Intel's datacenter group can also enable autonomous vehicle manufacturers to do their training in their datacenters using Intel technology. NVIDIA's approach to autonomous driving is more focused on the artificial intelligence (AI) and machine learning aspects of the problem. While Intel has an end-to-end solution, NVIDIA offers its own approach that uses GPUs in the datacenter for training, and Tegra chips and GPUs in the car, with the DRIVE PX 2, for inference.
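The train-in-the-datacenter, infer-in-the-vehicle split both vendors describe can be sketched in miniature (a generic, hypothetical illustration with a toy linear model — not NVIDIA's or Intel's actual software stack):

```python
import numpy as np

# "Data center" phase: heavyweight fitting on logged sensor data.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))            # toy sensor features
true_w = np.array([0.5, -1.2, 2.0])      # ground-truth relationship
y = X @ true_w + rng.normal(scale=0.01, size=256)

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # expensive step stays off-vehicle

# "In-vehicle" phase: only the frozen weights ship; inference is cheap.
def infer(features, weights=w):
    return features @ weights

print(infer(np.array([1.0, 0.0, 0.0])))  # close to 0.5
```

The point of the split is that the expensive optimization runs once on big hardware, while the vehicle only evaluates the fixed model against live inputs.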
This jointly optimized platform runs the new Microsoft Cognitive Toolkit (formerly CNTK) on NVIDIA GPUs, including the NVIDIA DGX-1 supercomputer, which uses Pascal architecture GPUs with NVLink interconnect technology, and on Azure N-Series virtual machines, currently in preview. Faster performance: when compared to running on CPUs, the GPU-accelerated Cognitive Toolkit performs deep learning training and inference much faster on NVIDIA GPUs available in Azure N-Series servers and on premises.
SANTA CLARA, CA--(Marketwired - Nov 14, 2016) - NVIDIA (NASDAQ: NVDA) today announced that it is teaming up with the National Cancer Institute, the U.S. Department of Energy (DOE) and several national laboratories on an initiative to accelerate cancer research. Teams collaborating on CANDLE include researchers at the National Cancer Institute (NCI), Frederick National Laboratory for Cancer Research and DOE, as well as at Argonne, Oak Ridge, Livermore and Los Alamos National Laboratories. Georgia Tourassi, Director of the Health Data Sciences Institute at Oak Ridge National Laboratory, said, "Today cancer surveillance relies on manual analysis of clinical reports to extract important biomarkers of cancer progression and outcomes."
The online education company Udacity is partnering with major companies in the field of autonomous vehicles to launch a nanodegree program for those interested in becoming a self-driving car engineer. Four major partners have committed to fast-tracking the nanodegree graduates into positions around the world: Mercedes-Benz, Nvidia, Otto (recently acquired by Uber) and Didi Chuxing. Each term of the program costs $800, and the first term begins in October. The other two instructors are David Silver, who was an autonomous vehicle engineer at Ford before joining Udacity, and Ryan Keenan, who, according to LinkedIn, was a freelance data analyst before joining Udacity.
BEIJING, CHINA - GPU Technology Conference China -- NVIDIA (NASDAQ: NVDA) today unveiled a palm-sized, energy-efficient artificial intelligence (AI) computer that automakers can use to power automated and autonomous vehicles for driving and mapping. The new single-processor configuration of the NVIDIA DRIVE PX 2 AI computing platform for AutoCruise functions -- which include highway automated driving and HD mapping -- consumes just 10 watts of power and enables vehicles to use deep neural networks to process data from multiple cameras and sensors. Data scientists who train their deep neural networks in the data center on the NVIDIA DGX-1 can then seamlessly run those networks on NVIDIA DRIVE PX 2 inside the vehicle.
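A back-of-envelope calculation shows why the 10-watt figure matters for multi-camera inference. Only the power draw comes from the announcement; the camera count and frame rate below are hypothetical, chosen purely for illustration:

```python
# Energy budget per processed frame on a fixed 10 W power envelope.
power_w = 10.0        # stated DRIVE PX 2 AutoCruise power draw
cameras = 4           # hypothetical camera count
fps_per_camera = 30   # hypothetical frame rate per camera

frames_per_second = cameras * fps_per_camera            # 120 frames/s total
energy_per_frame_mj = power_w / frames_per_second * 1e3
print(f"~{energy_per_frame_mj:.1f} mJ of energy per processed frame")  # ~83.3 mJ
```

Under those assumed rates, every frame's worth of neural-network inference must fit in well under a tenth of a joule, which is why a dedicated low-power inference configuration is the selling point here.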
Teaching computers to detect and differentiate between objects -- to train them to understand what the patterns in the pixels mean -- is something the Facebook AI Research (FAIR) team has been working on for the last year. This is barely the tip of the iceberg for companies developing and sharing deep learning tools, solutions and data. If there was any question just how important deep learning is to innovation, Google Brain, Google's deep learning branch, has a big surprise. Deep learning has played a huge role at Google in the recent past, including in Voice Search, a project Vincent Vanhoucke was himself involved in.
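"Understanding what the patterns in the pixels mean" can be illustrated at toy scale with a nearest-centroid classifier on tiny synthetic images (purely pedagogical, and far simpler than the deep networks FAIR actually uses):

```python
import numpy as np

# Two synthetic "classes" of 8x8 images: bright patches vs. dark patches.
rng = np.random.default_rng(1)
bright = rng.uniform(0.7, 1.0, size=(20, 8, 8))  # class 0 training images
dark = rng.uniform(0.0, 0.3, size=(20, 8, 8))    # class 1 training images

# The learned "pattern" per class is just the mean image (the centroid).
centroids = np.stack([bright.mean(axis=0), dark.mean(axis=0)])

def classify(img):
    # Assign the class whose centroid is nearest in pixel space.
    dists = [np.linalg.norm(img - c) for c in centroids]
    return int(np.argmin(dists))

print(classify(np.full((8, 8), 0.9)))  # 0 (bright)
print(classify(np.full((8, 8), 0.1)))  # 1 (dark)
```

Deep networks replace the hand-picked pixel distance with many learned layers of features, but the underlying task — mapping pixel patterns to object labels — is the same.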