Nvidia built its business on graphics chips, and researchers later found those chips were also good at powering deep learning, the software technique behind the recent enthusiasm for artificial intelligence. This week the company released as open source the designs for a chip module, the DLA (Deep Learning Accelerator), built to power deep learning in cars, robots, and smaller connected devices such as cameras. While Nvidia works to put the DLA into cars, robots, and drones, the company expects others to build chips that carry it into diverse markets ranging from security cameras to kitchen gadgets to medical devices. In a tweet this week, one Intel engineer called Nvidia's open source tactic a "devastating blow" to startups working on deep learning chips.
For months now, major companies have been hooking up--Uber and Daimler, Lyft and General Motors, Microsoft and Volvo--but Intel CEO Brian Krzanich's announcement on Monday that the giant chipmaker is helping Waymo, Google's self-driving car project, build robocar technology registers as some seriously juicy gossip. Krzanich said Monday that Waymo's newest self-driving Chrysler Pacificas, delivered last December, use Intel technology to process what's going on around them and make safe decisions in real time. And last year, Google announced it had created its own specialized chip that could help AVs recognize common driving situations and react efficiently and safely. "Our self-driving cars require the highest-performance compute to make safe driving decisions in real-time," Waymo CEO John Krafcik said in a statement.
While most attention to the AI boom is understandably focused on the latest exploits of algorithms beating humans at poker or piloting juggernauts, there's a less obvious scramble going on to build a new breed of computer chip needed to power our AI future. At a computer vision conference in Hawaii, Harry Shum, who leads Microsoft's research efforts, showed off a new chip created for the HoloLens augmented reality goggles. The chip, which Shum demonstrated tracking hand movements, includes a module custom-designed to efficiently run the deep learning software behind recent strides in speech and image recognition. Microsoft isn't alone: Google's TPU, for tensor processing unit, was created to make deep learning more efficient inside that company's cloud.
Yann LeCun and several other researchers designed an early chip, ANNA, to run deep neural networks--complex mathematical systems that can learn tasks on their own by analyzing vast amounts of data--but ANNA never reached the mass market. Neural networks can run faster and consume less power when paired with chips specifically designed to handle the massive array of mathematical calculations these AI systems require. More recently, Qualcomm has started building chips specifically for executing neural networks, according to LeCun, who is familiar with Qualcomm's plans because Facebook is helping the chip maker develop technologies related to machine learning. As Facebook explained last week in unveiling its new augmented reality tools, this kind of technology requires neural networks that can recognize the world around you.
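To see what that "massive array of mathematical calculations" actually looks like, here is a toy sketch of a single neural network layer in plain Python. The function name and numbers are invented for illustration; no vendor's chip works like this code, but the multiply-accumulate loop at its core is the operation that GPUs, TPUs, and accelerators like the DLA are built to run millions of times in parallel.

```python
def dense_layer(weights, inputs, biases):
    """One fully connected layer: output[i] = relu(sum_j w[i][j] * x[j] + b[i])."""
    outputs = []
    for row, bias in zip(weights, biases):
        total = bias
        for w, x in zip(row, inputs):
            total += w * x  # the multiply-accumulate step AI chips specialize in
        outputs.append(max(0.0, total))  # ReLU activation: clamp negatives to zero
    return outputs

# A 2-neuron layer over a 3-value input (made-up numbers)
print(dense_layer([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]],
                  [1.0, 2.0, 3.0],
                  [0.0, 0.1]))
```

A real network stacks many such layers with thousands of neurons each, which is why dedicated silicon pays off: the arithmetic is simple but staggeringly voluminous.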
Over the past decade, Google has designed all sorts of new hardware for the massive data centers that underpin its myriad online services, including computer servers, networking gear, and more. At the same time, as more and more businesses adopt the cloud computing services offered by Google, they'll be buying fewer and fewer servers (and thus chips) of their own, eating even further into the chip market. The TPU matters to Google because it helps run TensorFlow, the software engine that drives Google's deep neural networks, networks of hardware and software that can learn particular tasks by analyzing vast amounts of data. Other tech giants typically run their deep neural nets on graphics processing units, or GPUs--chips that were originally designed to render images for games and other graphics-heavy applications.