Results


Intel unveils new family of AI chips to take on Nvidia's GPUs

#artificialintelligence

When the AI boom came a-knocking, Intel wasn't around to answer the call. Now, the company is attempting to reassert its authority in the silicon business by unveiling a new family of chips designed especially for artificial intelligence: the Intel Nervana Neural Network Processor family, or NNP for short. The NNP family is meant as a response to the needs of machine learning, and is destined for the data center, not your PC. Intel's CPUs may still be a stalwart of server stacks (by some estimates, it has a 96 percent market share in data centers), but the workloads of contemporary AI are much better served by the graphics processors (GPUs) coming from firms like Nvidia and ARM. Consequently, demand for these companies' chips has skyrocketed.


Intel aims to conquer AI with the Nervana processor

#artificialintelligence

Intel enlisted one of the most enthusiastic users of deep learning and artificial intelligence to help out with the chip design. "We are thrilled to have Facebook in close collaboration sharing their technical insights as we bring this new generation of AI hardware to market," said Intel CEO Brian Krzanich in a statement. On top of social media, Intel is targeting healthcare, automotive and weather, among other applications. Unlike its PC chips, the Nervana NNP is an application-specific integrated circuit (ASIC) that's specially made for both training and executing deep learning algorithms. "The speed and computational efficiency of deep learning can be greatly advanced by ASICs that are customized for ... this workload," writes Intel's VP of AI, Naveen Rao.


Experts Say We Need to Start Over to Build True Artificial Intelligence

#artificialintelligence

The issue lies with a prevalent tactic in AI development called "back propagation", which relates directly to how AIs learn and store information. Since its conception, back propagation algorithms have become the "workhorses" of the majority of AI projects. Geoffrey Hinton, who has been called the "Godfather of Deep Learning", is now among the experts arguing the field needs to move beyond it.
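The excerpt names back propagation without showing what it actually does. Below is a minimal sketch, not taken from the article, of the forward/backward loop that makes it the "workhorse" described here; the toy data, layer sizes, and learning rate are all illustrative assumptions.

```python
# Minimal back-propagation sketch on a one-hidden-layer network (NumPy).
# All values here are toy assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                   # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary targets

W1 = rng.normal(scale=0.1, size=(3, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)

    # Backward pass: propagate the output error back through each layer.
    grad_out = (p - y) / len(X)                # gradient at the output (cross-entropy + sigmoid)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)     # chain rule through the hidden layer
    grad_W1 = X.T @ grad_h

    # Gradient step: nudge the weights to reduce the error.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

The loop of "predict, measure error, push the error backwards, adjust weights" is the learning-and-storing-information mechanism the article says researchers like Hinton now want to rethink.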


The Morning After: Wednesday, October 11th 2017

Engadget

NVIDIA, a company best-known for its graphics cards, is making computers for self-driving cars, the company behind Overwatch has something new in the works and the Tamagotchi is back -- for some reason.

Google Home Mini bug could make it record audio 24/7: Voice-controlled appliances need to listen in so they can pick up their hotword, but Android Police received a test Google Home Mini that went a little too far.

NVIDIA's first AI computer, the NVIDIA Drive PX Pegasus, is apparently capable of level five autonomy -- far beyond the level two and three vehicles we're only just starting to see.

Palette's Lego-like controls can make you a faster video editor: Until robots take over video editing, you'll still have to fiddle with cuts, colors and sound levels.


AtScale 6.0 and Kinetica 6.1 announced; SAP gets NVIDIA GPU religion

ZDNet

AtScale builds virtual (non-materialized) OLAP (online analytical processing) cubes over data in Hadoop, an approach that meshes nicely with front-end BI tools like Tableau, which were designed for such models and repositories. But as it does so, users are increasingly recognizing that federating that data with their more conventional database engines, including MPP (massively parallel processing) data warehouses, is imperative. Well, today's round of news includes a non-Tableau related item: NVIDIA GPUs are now finding their way into SAP data centers and, by extension, its cloud services too. Leonardo Machine Learning Foundation services -- including SAP Brand Impact, which automatically analyzes large volumes of video to detect brand logos in moving images (and, by extension, to gauge ROI on product placements), and SAP Service Ticket Intelligence, which categorizes service tickets and provides resolution recommendations for service center agents -- will feature NVIDIA Volta-trained models behind the scenes.


Google's dedicated TensorFlow processor, or TPU, crushes Intel, Nvidia in inference workloads - ExtremeTech

@machinelearnbot

First, Turbo mode and GPU Boost were disabled for both the Haswell CPU and the Nvidia GPUs, not to artificially tilt the score in favor of the TPU, but because Google's data centers prioritize dense hardware packing over raw performance. As for Nvidia's K80, the test server in question deployed four K80 cards with two GPUs per card, for a total of eight GPUs. Packed that tightly, the only way to take advantage of the GPUs' boost clock without overheating would have been to remove two of the K80 cards. Since the clock frequency increase isn't nearly as potent as doubling the total number of GPUs in the server, Google leaves boost disabled on these server configurations.
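A quick back-of-the-envelope calculation shows why dense packing wins. The clock figures below are my assumptions (roughly the K80's nominal base and boost clocks), not numbers from the article; the point is only the relative comparison.

```python
# Sketch (assumed clocks): eight K80 GPUs at base clock vs. four at boost clock.
base_clock_mhz = 560    # assumed nominal K80 base clock
boost_clock_mhz = 875   # assumed nominal K80 boost clock

gpus_dense = 4 * 2      # four K80 cards, two GPUs each, boost disabled
gpus_boosted = 2 * 2    # remove two cards so the remainder can sustain boost

dense_capacity = gpus_dense * base_clock_mhz       # ~4480 relative clock-units
boosted_capacity = gpus_boosted * boost_clock_mhz  # ~3500 relative clock-units

print(dense_capacity > boosted_capacity)  # True: more GPUs beats higher clocks
```

Under these assumptions, halving the GPU count costs far more aggregate throughput than the boost clock gives back, which is the trade-off the excerpt describes Google making.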


Will machine learning save the enterprise server business?

#artificialintelligence

Neural networks apply computational resources to machine learning problems that boil down to linear algebra over very large matrices, iterating until the model makes statistically accurate decisions. Most of the machine learning models in operation today, such as those for natural language or image recognition, started in academia and were then refined by large, well-staffed research and engineering teams at Google, Facebook, IBM and Microsoft. Enterprise machine learning experts and data scientists will have to start from scratch with their own research and iterate to build new high-accuracy models. It is a specialty business because enterprises need four things not necessarily found together: a large corpus of training data, highly skilled data scientists and machine learning experts, a strategic problem that machine learning can solve, and a reason not to use Google's or Amazon's pay-as-you-go offerings.
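To make the "large matrices, iterated" point concrete, here is a minimal sketch of that pattern as plain batched gradient descent on a least-squares model; the sizes, learning rate, and iteration count are illustrative assumptions, and real workloads would run the same matrix products on GPUs or other accelerators rather than NumPy.

```python
# Sketch (assumptions mine): iterative linear algebra over a large matrix,
# the core computational pattern behind the server demand described above.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features = 10_000, 512            # stand-ins for "very large"
A = rng.normal(size=(n_samples, n_features))   # data matrix
x_true = rng.normal(size=n_features)
b = A @ x_true + 0.01 * rng.normal(size=n_samples)  # noisy targets

x = np.zeros(n_features)
lr = 0.5 / n_samples                           # small step size for stability
for step in range(200):                        # iterate toward a good fit
    grad = A.T @ (A @ x - b)                   # one big matrix multiply per step
    x -= lr * grad
```

Each iteration is dominated by dense matrix multiplications, which is exactly the workload that GPUs, and the servers built around them, are sold to accelerate.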


Despite the hype, nobody is beating Nvidia in AI

#artificialintelligence

Investors say this isn't even the top for Nvidia: William Stein at SunTrust Robinson Humphrey predicts Nvidia's revenue from selling server-grade GPUs to internet companies, which doubled last year, will continue to increase 61% annually until 2020. The best known of the next-generation chips challenging Nvidia is Google's Tensor Processing Unit (TPU), which the company claims is 15-30 times faster than contemporary central processing units (CPUs) and GPUs. Even disregarding the market advantage of capturing a strong initial customer base, Wang notes that Nvidia is also continuing to increase the efficiency of its GPU architecture at a rate fast enough to remain competitive with new challengers. Nvidia currently supports every major machine-learning framework; Intel supports four, AMD supports two, Qualcomm supports two, and Google supports only its own.


To Compete With New Rivals, Chipmaker Nvidia Shares Its Secrets

WIRED

Then researchers found its graphics chips were also good at powering deep learning, the software technique behind recent enthusiasm for artificial intelligence. Longtime chip kingpin Intel and a stampede of startups are building and offering chips to power smart machines. This week the company released as open source the designs for a chip module it made to power deep learning in cars, robots, and smaller connected devices such as cameras. While Nvidia works to put the module, called the Deep Learning Accelerator (DLA), into cars, robots, and drones, it expects others to build chips that bring it to markets ranging from security cameras to kitchen gadgets to medical devices. In a tweet this week, one Intel engineer called Nvidia's open source tactic a "devastating blow" to startups working on deep learning chips.