Intel Talks at Hot Chips gear up for "AI Everywhere" - insideHPC

#artificialintelligence

Naveen Rao is vice president and general manager of the Artificial Intelligence Products Group at Intel Corporation. Today at Hot Chips 2019, Intel revealed new details of upcoming high-performance AI accelerators: Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O. "To get to a future state of 'AI everywhere,' we'll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it's collected when it makes sense and making smarter use of their upstream resources," said Naveen Rao, Intel vice president and GM, Artificial Intelligence Products Group. "Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications."


Back to the Edge: AI Will Force Distributed Intelligence Everywhere

#artificialintelligence

Other major firms are following suit. Microsoft has announced dedicated silicon hardware to accelerate deep learning in its Azure cloud. And in July, the firm also revealed that its augmented reality headset, the HoloLens, will have a customized chip in it to optimize machine learning applications. Apple has a long track record of designing its own silicon for specialist requirements. Earlier this year Apple ended a relationship with Imagination Technologies, a firm that had been providing designs for GPUs in iPhones, in favor of its own GPU designs.


Evolving Moore's Law with chiplets and 3D packaging

#artificialintelligence

For more than 50 years, Moore's Law has paced the advance of electronics, from semiconductor chips to laptops and cell phones. Now, the golden rule of technology price and performance from Intel co-founder Gordon Moore is evolving once again. Demanding modern workloads like AI often need specialized, high-powered processors with unique requirements. So Intel and other leading-edge chipmakers are turning to innovative new chip design and packaging techniques – and rewriting the rules of digital innovation for a new era. Ramune Nagisetty is Director of Process and Product Integration for Intel Technology Development.


Making Chips Smarter

Communications of the ACM

It is no secret that artificial intelligence (AI) and machine learning have advanced radically over the last decade, yet somewhere between better algorithms and faster processors lies the increasingly important task of engineering systems for maximum performance--and producing better results. The problem for now, says Nidhi Chappell, director of machine learning in the Datacenter Group at Intel, is that "AI experts spend far too much time preprocessing code and data, iterating on models and parameters, waiting for training to converge, and experimenting with deployment models. Each step along the way is either too labor- and/or compute-intensive." The research and development community--spearheaded by companies such as Nvidia, Microsoft, Baidu, Google, Facebook, Amazon, and Intel--is now taking direct aim at the challenge. Teams are experimenting, developing, and even implementing new chip designs, interconnects, and systems to boldly go where AI, deep learning, and machine learning have not gone before.
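
The friction Chappell describes maps onto the everyday shape of a machine learning workflow. As a rough illustration only (plain NumPy; the data, model, and hyperparameters below are placeholders, not Intel tooling), a minimal sketch makes the bottlenecks visible: preprocessing the data, iterating on parameters, and waiting on convergence.

```python
# Minimal sketch of the workflow Chappell describes: preprocess data,
# iterate on model parameters, and wait for training to converge.
# All names and numbers here are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: preprocessing (often labor-intensive) ---
X_raw = rng.normal(size=(1000, 20))                    # stand-in for collected data
y = (X_raw.sum(axis=1) > 0).astype(float)              # stand-in labels
X = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)   # normalize features

# --- Step 2: iterate on parameters (compute-intensive) ---
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for epoch in range(500):                   # "waiting for training to converge"
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))           # logistic regression as a tiny model
    grad_w = X.T @ (p - y) / len(y)
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    if loss < 0.1:                         # crude convergence check
        break
```

At real scale, every one of these steps multiplies: datasets grow past memory, each parameter update becomes a large batch of matrix math, and convergence can take days, which is exactly the cost the new chip designs aim at.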


The AI arms race spawns new hardware architectures

#artificialintelligence

As society turns to artificial intelligence to solve problems across ever more domains, we're seeing an arms race to create specialized hardware that can run deep learning models at higher speeds and lower power consumption. Some recent breakthroughs in this race include new chip architectures that perform computations in ways that are fundamentally different from what we've seen before. Looking at their capabilities gives us an idea of the kinds of AI applications we could see emerging over the next couple of years. Neural networks, composed of thousands or even millions of small units that each perform simple calculations, are key to deep learning and power complicated tasks such as detecting objects in images or converting speech to text. But traditional computers are not optimized for neural network operations, which reduce largely to dense matrix arithmetic, as the sketch below illustrates.
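
To make that concrete, here is a minimal sketch (plain NumPy; the layer sizes and activation choice are arbitrary assumptions) of the operation a neural network repeats at every layer: a matrix multiply followed by a simple elementwise nonlinearity. Specialized accelerators win by executing exactly this pattern massively in parallel, while a general-purpose CPU devotes much of its silicon to logic these workloads never use.

```python
# Minimal sketch of the computation at the heart of a neural network:
# each layer is a matrix multiply plus a simple elementwise nonlinearity.
# Shapes and values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)

def layer(x, weights, bias):
    """One fully connected layer: many small multiply-adds, then ReLU."""
    return np.maximum(0.0, x @ weights + bias)

x = rng.normal(size=(1, 512))            # e.g. a flattened input
W1 = rng.normal(size=(512, 256)) * 0.05  # first layer weights
W2 = rng.normal(size=(256, 10)) * 0.05   # second layer weights

h = layer(x, W1, np.zeros(256))          # hidden representation
logits = h @ W2                          # raw class scores

# A 512x256 layer alone is ~131k multiply-adds for a single input; real
# models repeat this across dozens of layers and millions of inputs, which
# is why hardware built around parallel matrix math outruns a general CPU.
print(logits.shape)                      # (1, 10)
```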