neural network


Intel Nervana Neural Network Processors (NNP) Redefine AI Silicon - Intel Nervana

#artificialintelligence

As our Intel CEO Brian Krzanich discussed earlier today at Wall Street Journal's D.Live event, Intel will soon be shipping the world's first family of processors designed from the ground up for artificial intelligence (AI): the Intel Nervana Neural Network Processor family (formerly known as "Lake Crest"). This family of processors is over 3 years in the making, and on behalf of the team building it, I'd like to share a bit more insight on the motivation and design behind the world's first neural network processor. Machine Learning and Deep Learning are quickly emerging as the most important computational workloads of our time. These methods allow us to extract meaningful insights from data. We've been listening to our customers and applying changes to Intel's silicon portfolio to deliver superior Machine Learning performance.


Intel unveils the Nervana Neural Network Processor

ZDNet

Intel on Tuesday is taking the wraps off of the Nervana Neural Network Processor (NNP), formerly known as "Lake Crest," a chip three years in the making that's designed expressly for AI and deep learning. Along with explaining its unique architecture, Intel announced that Facebook has been a close collaborator as it prepares to bring the Nervana NNP to market. The chipmaker also laid out the beginnings of a product roadmap. While there are platforms available for deep learning applications, this is the first of its kind -- built from the ground up for AI -- that's commercially available, Naveen Rao, corporate VP of Intel's Artificial Intelligence Products Group, told ZDNet. It's rare for Intel to deliver a whole new class of products, he said, so the Nervana NNP family demonstrates Intel's commitment to the AI space.


Apple has a lot to say to Al Franken about Face ID on the iPhone X

Mashable

The iPhone X will change everything when it arrives next month. It'll usher in a brave new notch-filled world with no home buttons and Face ID, a new face-recognition technology that unlocks the phone when you look at it. Mere weeks away from launch and a month after Sen. Al Franken (D-MN) penned a letter to Apple CEO Tim Cook voicing privacy concerns over Face ID, Apple has finally responded to his questions in what's clearly a move to pacify any lingering fears over its new biometric technology. Apple provided Mashable with a copy of the letter Cynthia Hogan, the company's VP for Public Policy, sent to Sen. Franken. On behalf of Apple, Hogan reiterates how Face ID works, using the iPhone X's TrueDepth camera and sensors to scan and analyze a user's face based on the depth maps and 2D images it creates.


Intel plans to ship its first-generation Neural Network Processor by the end of the year

#artificialintelligence

Intel's hardware for accelerating AI computation is finally on its way to customers. The company announced today that its first-generation Neural Network Processor, code-named "Lake Crest," will be rolling out to a small set of partners in the near future to help them drastically accelerate how much machine learning work they can do. The NNPs are designed to very quickly tackle the math that underpins artificial intelligence applications, specifically neural networks, a currently popular branch of machine learning. One of the big problems with the large, deep neural networks that are popular right now is that they can be very computationally intensive, which makes them harder to test and deploy rapidly. At first, the NNPs will be released only to a small number of Intel partners, whom the company plans to begin outfitting before the end of this year.
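To make the compute burden concrete, here is a minimal NumPy sketch (purely illustrative, with made-up layer sizes, and not Intel or Nervana code) of the kind of math a neural network accelerator exists to speed up: a single fully connected layer already costs billions of floating-point operations, and real networks stack many such layers across millions of training examples.

```python
import numpy as np

# Illustrative sizes only: a batch of 64 inputs through a 4096-wide dense layer.
batch, in_dim, out_dim = 64, 4096, 4096

x = np.random.randn(batch, in_dim).astype(np.float32)    # activations
w = np.random.randn(in_dim, out_dim).astype(np.float32)  # weights

# One dense-layer forward pass is essentially a large matrix multiply
# followed by a cheap nonlinearity (ReLU here).
y = np.maximum(x @ w, 0.0)

# Each output element needs in_dim multiply-accumulates, i.e. ~2*in_dim FLOPs.
flops = 2 * batch * in_dim * out_dim
print(f"~{flops / 1e9:.1f} GFLOPs for one layer, one batch")
```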


Intel unveils new family of AI chips to take on Nvidia's GPUs

#artificialintelligence

When the AI boom came a-knocking, Intel wasn't around to answer the call. Now, the company is attempting to reassert its authority in the silicon business by unveiling a new family of chips designed especially for artificial intelligence: the Intel Nervana Neural Network Processor family, or NNP for short. The NNP family is meant as a response to the needs of machine learning, and is destined for the data center, not your PC. Intel's CPUs may still be a stalwart of server stacks (by some estimates, it has a 96 percent market share in data centers), but the workloads of contemporary AI are much better served by the graphics processors, or GPUs, coming from firms like Nvidia and ARM. Consequently, demand for these companies' chips has skyrocketed.


Intel Pioneers New Technologies to Advance Artificial Intelligence - Intel Newsroom

#artificialintelligence

Today I spoke at the WSJDLive global technology conference about cognitive and artificial intelligence (AI) technology, two nascent areas that I believe will be transformative to the industry and world. These systems also offer tremendous market opportunity and are on a trajectory to reach $46 billion in industry revenue by 2020. At Intel, we're pioneering in these areas with research and investments in hardware, data algorithms and analytics, acquisitions, and technology advancements. As part of this, today we announced that Intel will ship the industry's first silicon for neural network processing, the Intel Nervana Neural Network Processor (NNP), before the end of this year. We are thrilled to have Facebook in close collaboration sharing its technical insights as we bring this new generation of AI hardware to market.


Intel aims to conquer AI with the Nervana processor

#artificialintelligence

Intel enlisted one of the most enthusiastic users of deep learning and artificial intelligence to help out with the chip design. "We are thrilled to have Facebook in close collaboration sharing their technical insights as we bring this new generation of AI hardware to market," said Intel CEO Brian Krzanich in a statement. On top of social media, Intel is targeting healthcare, automotive and weather, among other applications. Unlike its PC chips, the Nervana NNP is an application-specific integrated circuit (ASIC) that's specially made for both training and executing deep learning algorithms. "The speed and computational efficiency of deep learning can be greatly advanced by ASICs that are customized for ... this workload," writes Intel's VP of AI, Naveen Rao.
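The distinction between training and executing (inference) matters for silicon design: training requires a backward pass and weight updates on top of the forward pass that inference runs alone. A minimal NumPy sketch of the two modes, using a made-up one-layer linear model (nothing here reflects Intel's or Nervana's actual software), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 1)).astype(np.float32)       # toy model: one linear layer

def infer(x):
    """Inference ("executing"): forward pass only."""
    return x @ w

def train_step(x, target, lr=0.01):
    """Training: forward pass, loss gradient, and weight update."""
    global w
    pred = x @ w
    grad = x.T @ (pred - target) / len(x)             # gradient of mean squared error
    w -= lr * grad
    return float(((pred - target) ** 2).mean())

x = rng.normal(size=(8, 4)).astype(np.float32)
y = rng.normal(size=(8, 1)).astype(np.float32)
print("training loss:", train_step(x, y))             # heavier step
print("inference output shape:", infer(x).shape)      # lighter step
```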


Intel formally unveils its Nervana Neural Network Processor family - TechCrunch

#artificialintelligence

This morning at the WSJ's D.Live event, Intel formally unveiled its Nervana Neural Network Processor (NNP) family of chips designed for machine learning use cases. Intel has previously alluded to these chips under the pre-launch name Lake Crest. The technology underlying the chips is heavily tied to Nervana Systems, a deep learning hardware startup Intel purchased last August for $350 million. Intel's NNP chips nix the standard cache hierarchy and use software to manage on-chip memory, to achieve faster training times for deep learning models. Intel has been scrambling in recent months to avoid being completely leveled by Nvidia.
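Replacing a hardware-managed cache with software-managed on-chip memory means the program (or compiler) explicitly decides which tiles of data sit in fast local memory at each step. The toy Python sketch below mimics that idea with an explicitly tiled matrix multiply; the tile size and "scratchpad" framing are illustrative assumptions, not Intel's actual tooling.

```python
import numpy as np

TILE = 128  # stand-in for a capacity-limited on-chip buffer

def tiled_matmul(a, b, tile=TILE):
    """Matrix multiply where software explicitly stages fixed-size tiles,
    standing in for compiler/runtime-managed on-chip memory."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # "Load" tiles into the scratchpad (here: just array slices),
                # compute on them, and accumulate the partial result.
                a_tile = a[i:i+tile, p:p+tile]
                b_tile = b[p:p+tile, j:j+tile]
                out[i:i+tile, j:j+tile] += a_tile @ b_tile
    return out

a = np.random.randn(256, 256).astype(np.float32)
b = np.random.randn(256, 256).astype(np.float32)
assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-3)
```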


How is deep learning affecting sports?

#artificialintelligence

I don't know about you, but I was not the most athletic kid growing up. It took me forever to learn to make a jump shot. When I started playing golf after college, my short game was an absolute disaster. I always had a hard time visualising what I needed to do differently. Having a coach tell me what to do never seemed to do the trick.


TensorFlow* Optimizations on Modern Intel Architecture

@machinelearnbot

TensorFlow* is a leading deep learning and machine learning framework, which makes it important for Intel and Google to ensure that it is able to extract maximum performance from Intel's hardware offerings. This paper introduces the Artificial Intelligence (AI) community to the TensorFlow optimizations on Intel Xeon and Intel Xeon Phi processor-based platforms. These optimizations are the fruit of a close collaboration between Intel and Google engineers, announced last year by Intel's Diane Bryant and Google's Diane Greene at the first Intel AI Day. We describe the various performance challenges we encountered during this optimization exercise and the solutions adopted. We also report performance improvements on a sample of common neural network models.
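Optimizations of this kind typically surface to users as threading and affinity settings. As a hedged illustration (the thread counts and environment values below are workload-dependent placeholders, not the paper's recommendations), a TensorFlow 1.x session on a Xeon system might be configured along these lines:

```python
import os
import tensorflow as tf  # assumes a TensorFlow 1.x build with Intel MKL-DNN support

# OpenMP / MKL environment knobs commonly tuned on Xeon; values are placeholders.
os.environ["OMP_NUM_THREADS"] = "16"
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

# Map TensorFlow's two thread pools onto the available cores.
config = tf.ConfigProto(
    intra_op_parallelism_threads=16,  # threads used inside a single op (e.g. one matmul)
    inter_op_parallelism_threads=2,   # ops allowed to run concurrently
)

with tf.Session(config=config) as sess:
    a = tf.random_normal([2048, 2048])
    b = tf.random_normal([2048, 2048])
    print(sess.run(tf.reduce_sum(tf.matmul(a, b))))
```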