At the heart of Tesla's new autonomous driving hardware, which may some day enable fully self-driving cars, is NVIDIA's latest DRIVE PX 2 AI computing platform. NVIDIA DRIVE PX 2 is an open AI car computing platform that enables automakers and their tier 1 suppliers to accelerate production of automated and autonomous vehicles. For NVIDIA, DRIVE PX 2 is now in full production, as Tesla requires thousands of units each month for manufacturing of the Model S and Model X. That number could grow to tens of thousands per month when Model 3 assembly starts late next year. Tesla Motors has announced that all Tesla vehicles -- Model S, Model X, and the upcoming Model 3 -- will now be equipped with an on-board "supercomputer" that can provide full self-driving capability. The computer delivers more than 40 times the processing power of the previous system.
Virtually every technological gadget and machine relies on chips to perform its operations. Nvidia, one of the leading companies in the field, has recently announced the Tesla P100, a data center accelerator built around a 15-billion-transistor chip and designed specifically for deep learning AI workloads. Nvidia announced the Tesla P100 at its GPU Technology Conference in San Jose, California, where CEO Jen-Hsun Huang asserted that the P100 is the world's largest chip to date, with 15 billion transistors on a single die.
Over the last few months we have seen NVIDIA's Pascal GPUs roll out among their consumer cards, and now the time has come for the Tesla line to get its own Pascal update. To that end, at today's GTC Beijing 2016 keynote, NVIDIA CEO Jen-Hsun Huang announced the next generation of NVIDIA's neural network inferencing cards, the Tesla P40 and Tesla P4. These cards are the direct successors to the current Tesla M40 and M4 products, and with the move to the Pascal architecture, NVIDIA is promising a major leap in inferencing performance. We've covered NVIDIA's presence in and plans for the deep learning market for some time now. It is a rapidly growing market, and one that has proven very successful for NVIDIA, as the underlying neural networks map well to their GPU architectures.
"We need to find a path forward for life after Moore's Law," Nvidia CEO Jen-Hsun Huang said at the beginning of his annual GPU Technology Conference keynote. But Nvidia isn't hesitant to throw more iron at the problem to make its ferocious graphics processors even more so, as evidenced by the reveal of the first product based on Nvidia's badass next-gen Volta GPU. Nvidia's high-end "Pascal" processors still rule the graphics roost, though AMD's rival Radeon Vega GPUs are scheduled to launch before the end of June. Volta helps Nvidia take some of the wind out of AMD's sails before Vega even hits the streets, even if the Tesla V100 GPU is focused on data centers. This beastly GPU (in both size and capabilities) boasts a whopping 21 billion transistors and 5,120 CUDA cores humming along at 1,455MHz boost clock speeds, all built using a 12-nanometer manufacturing process more advanced than that of Nvidia's current GPUs.
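Those headline numbers translate directly into peak throughput. As a rough sketch (the 2-FLOPs-per-core-per-clock figure assumes each CUDA core issues one fused multiply-add per cycle, which is the standard convention for quoting peak FP32 rates), the quoted core count and boost clock imply roughly 15 TFLOPS of single-precision compute:

```python
# Back-of-the-envelope peak FP32 throughput for the Tesla V100,
# using the figures quoted above: 5,120 CUDA cores at a 1,455 MHz
# boost clock. One fused multiply-add (FMA) per core per clock
# counts as two floating-point operations.

cuda_cores = 5120
boost_clock_ghz = 1.455
flops_per_core_per_clock = 2  # one FMA = 2 FLOPs

peak_gflops = cuda_cores * boost_clock_ghz * flops_per_core_per_clock
peak_tflops = peak_gflops / 1000
print(f"Peak FP32: {peak_tflops:.1f} TFLOPS")  # ~14.9 TFLOPS
```

This is a theoretical ceiling, not a sustained benchmark figure; real workloads land below it depending on memory bandwidth and occupancy.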
Switch chips have very long technical and economic lives, considerably longer than that of a Xeon processor used in a server -- something on the order of seven or eight years compared to three or four. As it turns out, the various GPUs used in Nvidia's Tesla accelerators look like they, too, will have very long technical and economic lives. Even after a new technology is introduced, the old one can sometimes be had at a much cheaper price and therefore continues to be a good price/performer even after it has presumably been obsoleted by an improved product. Its economic life outlives its technical life in this regard, and it is not obsoleted instantly (often to the frustration of its creator, which wants to sell the shiny new stuff). But sometimes this is in fact the plan.