Results


Intel's Nervana chips will bring AI to more parts of your life

#artificialintelligence

Intel CEO Brian Krzanich speaks at a 2016 AI event. Intel might be an old-school computing company, but the chipmaker thinks the latest trends in artificial intelligence will keep it an important part of your high-tech life. The AI technology called machine learning is today instrumental in taking good photos, translating languages, recognizing your friends on Facebook, delivering search results, screening out spam and many other chores. It usually uses an approach called neural networks, which works something like a human brain rather than as a sequence of if-this-then-that steps as in traditional computing. Lots of companies, including Apple, Google, Qualcomm and Nvidia, are designing chips to accelerate this sort of work.
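To make the excerpt's contrast concrete, here is a minimal, purely illustrative sketch (not from the article): a hand-written if-this-then-that rule next to a single artificial neuron whose weights are learned from examples. The spam features, thresholds, toy data and function names are all invented for illustration.

# Traditional computing: an explicit rule written by a programmer.
def is_spam_rule_based(num_links, has_suspicious_words):
    if num_links > 5 and has_suspicious_words:
        return True
    return False

# Machine-learning style: a single artificial neuron (perceptron) whose
# weights are learned from labeled examples instead of being hand-coded.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred            # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1     # nudge weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [(0.1, 0.0), (0.9, 1.0), (0.2, 1.0), (0.8, 0.0)]  # toy feature pairs
labels = [0, 1, 0, 1]                                       # toy spam labels
weights, bias = train_perceptron(samples, labels)
print(weights, bias)

The chips discussed throughout this digest are designed to accelerate exactly this kind of multiply-and-accumulate arithmetic, repeated across millions of learned weights.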


Nvidia's new supercomputer is designed to drive fully autonomous vehicles

Mashable

Nvidia wants to make it easier for automotive companies to build self-driving cars, so it's releasing a brand-new supercomputer designed to drive them. The chipmaker claims its new supercomputer is the world's first artificial intelligence computer designed for "Level 5" autonomy, which means vehicles that can operate themselves without any human intervention. The new computer will be part of Nvidia's existing Drive PX platform, which the GPU maker offers to automotive companies to provide the processing power for their self-driving car systems. Nvidia CEO Jensen Huang announced that the company will soon release a new software development kit (SDK), Drive IX, to help developers build new AI-partner programs that improve the in-car experience.


The risk-taker pushing Intel into the new world of artificial intelligence

Los Angeles Times

What sets Rao apart from others attempting the same thing is the fact that Intel last year bought his San Diego company, Nervana, for $400 million. The Google cat project in 2012 was carried out on 16,000 Intel central processors inside Google's vast "farms" of computer servers. When Nvidia found out hedge funds and others were using its chips for deep learning, it made a quick strategic move: tailoring its chips and developing software tools to support neural networks. In 2016, Intel acquired Movidius, a Silicon Valley company that specializes in making smart vision chips for consumer devices, including drones.


Why a 24-Year-Old Chipmaker Is One of Tech's Hot Prospects

#artificialintelligence

The chips are made by the Silicon Valley company Nvidia. Health care applications like the one CTA is pioneering are among Nvidia's many new targets. The company's chips -- known as graphics processing units, or GPUs -- are finding homes in drones, robots, self-driving cars, servers, supercomputers and virtual-reality gear.


Where Major Chip Companies Are Investing In AI, AR/VR, And IoT

#artificialintelligence

We dug into the private market bets made by major computer chip companies, including GPU makers. Our analysis encompasses the venture arms of NVIDIA, Intel, Samsung, AMD, and more. Meanwhile, the vast application of graphics hardware in AI has propelled GPU (graphics processing unit) maker NVIDIA into tech juggernaut status: the company's stock was the best performer over the past year. Also included in the analysis are 7 chip companies we identified as active in private markets, including NVIDIA, AMD, and ARM.


The Rise of AI Is Forcing Google and Microsoft to Become Chipmakers

WIRED

While most attention to the AI boom is understandably focused on the latest exploits of algorithms beating humans at poker or piloting juggernauts, there's a less obvious scramble going on to build a new breed of computer chip needed to power our AI future. At a computer vision conference in Hawaii, Harry Shum, who leads Microsoft's research efforts, showed off a new chip created for the HoloLens augmented reality goggles. The chip, which Shum demonstrated tracking hand movements, includes a module custom-designed to efficiently run the deep learning software behind recent strides in speech and image recognition. Google's TPU, or tensor processing unit, was created to make deep learning more efficient inside the company's cloud.


Nvidia Embraces Deep Neural Nets With Volta Chips

#artificialintelligence

To accomplish this, the company is investing in its Deep Learning Institute, a training program to spread the deep learning neural net programming model to a new class of developers. The Volta GPU already has more cores and processing power than the fastest Pascal GPU, and its tensor cores push DNN performance even further. The V100 can perform 120 teraflops of tensor math using its 640 tensor cores. This will make Volta very fast for deep neural net training and inference.
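As a rough sanity check of that 120-teraflop figure (assumptions, based on NVIDIA's published Volta specs rather than this excerpt: each tensor core executes a 4x4x4 fused multiply-add, i.e. 128 floating-point operations per clock, and the Tesla V100 boosts to roughly 1.455 GHz):

# Back-of-the-envelope check of the quoted tensor throughput.
tensor_cores = 640
flops_per_core_per_clock = 128   # 64 fused multiply-adds = 128 FLOPS (assumed spec)
boost_clock_hz = 1.455e9         # ~1.455 GHz boost clock (assumed spec)
tflops = tensor_cores * flops_per_core_per_clock * boost_clock_hz / 1e12
print(round(tflops, 1))          # ~119.2, consistent with the quoted 120 teraflops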


Battle to Provide Chips for the AI Boom Heats Up

MIT Technology Review

Nvidia's profits and stock have surged over the past few years because the graphics processors it invented to power gaming and graphics production have enabled many recent breakthroughs in machine learning (see "10 Breakthrough Technologies 2013: Deep Learning"). As the AI market has grown, Nvidia has tweaked its chip designs with features to support neural networks. Intel, for example, promises to release a chip for deep learning later this year, built on technology acquired with the startup Nervana in 2016 (see "Intel Outside as Other Companies Prosper from Graphics Chips"). Microsoft has invested heavily in using FPGAs to power its machine-learning software and has made them a core piece of its cloud platform, Azure.


NVIDIA's first Volta-powered GPU sits in a $149k supercomputer

Engadget

Today at its GPU Technology Conference, the company announced the NVIDIA Tesla V100 data center GPU, the first processor to use its seventh-generation architecture. Like the Tesla P100 it replaces, the Volta-powered GPU is designed specifically to power artificial intelligence and deep learning, so, naturally, it's flush with power. NVIDIA also announced what it's calling a "personal AI supercomputer," the DGX Station. The company's new GPU Cloud Deep Learning Stack promises to let AI developers offload their neural network machine learning tasks to an online catalog of integrated and optimized deep learning frameworks running on Titan Xp, GTX 1080 Ti or DGX systems in the cloud.


Nvidia's new Volta-based DGX-1 supercomputer puts 400 servers in a box

PCWorld

The GPU, the first one based on the brand-new Volta architecture, was introduced at the company's GPU Technology Conference in San Jose, California, on Wednesday. The new supercomputer has 40,960 CUDA cores, which Nvidia says equals the computing power of 800 CPUs. The Tesla V100 in the DGX-1 is five times faster than the current Pascal architecture, Nvidia CEO Jensen Huang said. Nvidia has also included a cube-like Tensor Core, which will work with the regular processing cores to improve deep learning.
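For readers unfamiliar with what a tensor core actually computes, here is a minimal sketch (not Nvidia code) of the operation Volta implements in hardware: a 4x4 matrix multiply-accumulate, D = A x B + C. On the real chip A and B are FP16 and the accumulation is FP32; plain Python floats are used here purely to illustrate the math.

# One tensor-core-style operation: 4x4 matrix multiply plus accumulate.
def tensor_core_mma(A, B, C):
    D = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(4):
            acc = C[i][j]                 # start from the accumulator matrix
            for k in range(4):
                acc += A[i][k] * B[k][j]  # fused multiply-add chain
            D[i][j] = acc
    return D

I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # 4x4 identity
print(tensor_core_mma(I, I, I))  # I*I + I: diagonal entries become 2.0

Deep learning workloads are dominated by exactly these small matrix multiply-accumulates, which is why dedicating hardware to them speeds up both training and inference.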