The Rise of AI Is Forcing Google and Microsoft to Become Chipmakers

#artificialintelligence

At a computer vision conference in Hawaii, Harry Shum, who leads Microsoft's research efforts, showed off a new chip created for the HoloLens augmented reality goggles. The chip, which Shum demonstrated tracking hand movements, includes a module custom-designed to efficiently run the deep learning software behind recent strides in speech and image recognition. Google's TPU, short for tensor processing unit, was likewise created to make deep learning more efficient inside the company's cloud. Even Intel, the world's largest chip maker, is building a chip just for machine learning as the biggest tech companies look to an AI-powered future.


Google brings 45 teraflops tensor flow processors to its compute cloud

#artificialintelligence

Google has developed its second-generation tensor processor--four 45-teraflops chips packed onto a 180-TFLOPS tensor processing unit (TPU) module, to be used for machine learning and artificial intelligence--and the company is bringing it to the cloud. Each card has its own high-speed interconnects, and 64 of the cards can be linked into what Google calls a pod, with 11.5 petaflops total; one petaflops is 10^15 floating point operations per second. GPUs can typically also operate in double-precision mode (64-bit numbers) and half-precision mode (16-bit numbers). As a couple of points of comparison: AMD's forthcoming Vega GPU should offer 13 TFLOPS of single-precision and 25 TFLOPS of half-precision performance, while the machine-learning accelerator that Nvidia announced recently--the Volta GPU-based Tesla V100--offers 15 TFLOPS single precision and 120 TFLOPS for "deep learning" workloads.
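The pod-level figure follows directly from the per-chip numbers quoted above; a quick sanity check of the arithmetic (illustrative only, using only the figures reported in the article):

```python
# Sanity-check the quoted TPU throughput figures (illustrative arithmetic only).
chip_tflops = 45             # per second-generation TPU chip
chips_per_module = 4
module_tflops = chip_tflops * chips_per_module       # one TPU module
modules_per_pod = 64
pod_pflops = module_tflops * modules_per_pod / 1000  # TFLOPS -> PFLOPS

print(module_tflops)  # 180
print(pod_pflops)     # 11.52, reported in the article as "11.5 petaflops"
```

The reported "11.5 petaflops" is simply 64 x 180 TFLOPS = 11.52 PFLOPS, rounded.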


Google's Second AI Chip Crashes Nvidia's Party

#artificialintelligence

On Wednesday at its annual developers conference, the tech giant announced the second generation of its custom chip, the Tensor Processing Unit, which is optimized to run its deep learning algorithms. Nvidia, in contrast, announced its latest-generation GPUs in a data center product called the Tesla V100, which the company says delivers 120 teraflops of performance. Through the Google Cloud, anybody can rent Cloud TPUs -- similar to how people can rent GPUs on the Google Cloud. "Google's use of TPUs for training is probably fine for a few workloads for the here and now, but given the rapid change in machine learning frameworks, sophistication, and depth, I believe Google is still doing much of their machine learning production and research training on GPUs," said tech analyst Patrick Moorhead.


Nvidia CEO: Software Is Eating the World, but AI Is Going to Eat Software

#artificialintelligence

Nvidia has benefited from a rapid explosion of investment in machine learning by tech companies. Can this rapid growth in the use cases for machine learning continue? Recent research results from applying machine learning to diagnosis are impressive (see "An AI Ophthalmologist Shows How Machine Learning May Transform Medicine"). Nvidia's chips are already driving some cars: all Tesla vehicles now use Nvidia's Drive PX 2 computer to power the Autopilot feature that automates highway driving.


Bosch and Nvidia create an AI supercomputer for self-driving tech

#artificialintelligence

The AI onboard computer is expected to guide self-driving cars through even complex traffic situations, or ones that are new to the car. "Automated driving makes roads safer, and artificial intelligence is the key to making that happen," said Bosch CEO Volkmar Denner. Bosch expects driverless cars to be part of everyday life in the next decade; its AI onboard computer can recognize pedestrians and cyclists. As a result, a self-driving car with AI can recognize and assess complex traffic situations, such as when an oncoming vehicle executes a turn, and factor these into its own driving.


Chip Magic

#artificialintelligence

Entirely new chip designs, architectures and capabilities are coming from a wide array of key component players across the tech industry, including Intel (NASDAQ:INTC), AMD (NASDAQ:AMD), Nvidia (NASDAQ:NVDA), Qualcomm (NASDAQ:QCOM), Micron (NASDAQ:MU) and ARM, as well as internal efforts from companies like Apple (NASDAQ:AAPL), Samsung (OTC:SSNLF), Huawei, Google (NASDAQ:GOOG) (NASDAQ:GOOGL) and Microsoft (NASDAQ:MSFT). These new designs leverage a variety of different types of semiconductor computing elements, including CPUs, GPUs (graphics processing units), FPGAs (field programmable gate arrays), DSPs (digital signal processors) and other specialized "accelerators" that are optimized to do specific tasks well. For example, even in the traditional CPU world, AMD's new Ryzen line underwent significant architectural design changes, resulting in large speed improvements over the company's previous chips. Qualcomm has proven to be very adept at combining multiple elements, including CPUs, GPUs, DSPs and modems, into sophisticated SoCs (systems on chip), such as the new Snapdragon 835.


NVIDIA's Jetson TX2 Takes Machine Learning To The Edge

Forbes

The Jetson boards are siblings to NVIDIA's Drive PX boards for autonomous driving, and the TX2 shares the same Tegra "Parker" silicon as the Drive PX2. There are many synergies between the two families, as both can be used to add local machine learning to transportation. One of my favorite products on display using the Jetson board is a portable handheld 3D scanner from Artec. Key advantages of the Jetson TX2 over the original TX1 are that it adds two higher-performing Denver CPU cores to the TX1's four Cortex-A57 cores, upgrades to NVIDIA's latest Pascal GPU, and offers twice the memory capacity and bandwidth.


When Will Self-Driving Cars Be Rolled-Out? Carmakers, Suppliers Disagree

#artificialintelligence

Nvidia Chief Executive Jen-Hsun Huang forecast that carmakers may speed up their plans in light of technological advances and that fully self-driving cars could be on the road by 2025. "Of course, we still have to prove that an autonomous car does better in driving and has less accidents than a human being," Bosch CEO Volkmar Denner told a news conference. Level three means drivers can turn away in well-understood environments such as motorway driving but must be ready to take back control, while level four means the automated system can control the vehicle in most environments. But Nvidia's Huang said he expected to have chips available for level-three automated driving by the end of this year and in customers' cars on the road by the end of 2018, with level-four chips following the same pattern a year later.


Bosch and Nvidia partner to develop AI for self-driving cars

Robohub

Amongst all the activity in autonomous-vehicle joint ventures, new R&D facilities, strategic acquisitions (such as Mobileye being acquired by Intel) and booming startup funding, two big players in the industry, NVIDIA and Bosch, are partnering to develop an AI self-driving car supercomputer. "Automated driving makes roads safer, and artificial intelligence is the key to making that happen," said Denner. The Bosch AI car computer will use NVIDIA DRIVE PX technology with the upcoming AI car superchip, advertised as the world's first single-chip processor designed to achieve Level-4 autonomous driving (see ADAS chart). "Using DRIVE PX AI car computer, Bosch will build automotive-grade systems for the mass production of autonomous cars."