AI processors go mobile

ZDNet

At its iPhone X event last week, Apple devoted a lot of time to the A11 processor's new neural engine that powers facial recognition and other features. The week before, at IFA in Berlin, Huawei announced its latest flagship processor, the Kirin 970, equipped with a Neural Processing Unit capable of processing images 20 times faster than the CPU alone. The sudden interest in neural engines is driven by the rise of deep learning. These specialized processors are designed specifically to crunch the complex algorithms used in artificial neural networks faster and more efficiently than general-purpose CPUs. This trend is already having a profound impact in the data center.
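The workload these neural engines target is overwhelmingly dense linear algebra: a neural network layer is essentially a large matrix multiply followed by a simple nonlinearity, repeated thousands of times per image. As an illustrative sketch (the shapes below are arbitrary, not those of any particular chip), a single fully connected layer looks like this:

```python
import numpy as np

# One dense neural-network layer: y = relu(W @ x + b).
# Vision and face-recognition models chain many such matrix operations,
# which is why fixed-function neural engines beat general-purpose CPUs
# on this workload. Shapes here are illustrative only.

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # input activations
W = rng.standard_normal((128, 256))   # layer weights
b = rng.standard_normal(128)          # biases

y = np.maximum(W @ x + b, 0.0)        # matrix multiply + bias + ReLU
print(y.shape)                        # (128,)
```

A dedicated accelerator wins by keeping this multiply-accumulate pattern on-chip in low precision, rather than executing it one instruction at a time.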


Vision and neural nets drive demand for more powerful chips

ZDNet

New applications are driving demand for faster and more efficient vision processing. The Hot Chips conference, now in its 28th year, is known for announcements of "big iron" such as the Power and SPARC chips behind some of the world's fastest systems. But these days the demand for processing power is coming from new places. One of the big ones is vision processing, driven by the proliferation of cameras; new applications in cars, phones and all sorts of "things"; and the rapid progress in neural networks for object recognition. All of this takes a lot of horsepower, and at this week's conference, several companies talked about different ways to tackle it.


A List of Chip/IP for Deep Learning

#artificialintelligence

Machine learning, and especially deep learning, is driving the evolution of artificial intelligence (AI). In the beginning, deep learning was primarily a software play. Starting in 2016, the need for more efficient hardware acceleration of AI/ML/DL was recognized in both academia and industry. This year, we saw more and more players jump into the race, including the world's top semiconductor companies, a number of startups, and even tech giants such as Google. I believe it could be very interesting to look at them together, so I built this list of AI/ML/DL ICs and IPs on GitHub and keep it updated. If you have any suggestions or new information, please let me know. The companies and products in the list are organized into five categories, as shown in the following table. Intel purchased Nervana Systems, which was developing a GPU/software approach in addition to its Nervana Engine ASIC. Intel is also planning to integrate the technology into the Phi platform via the Knights Crest project.


Scaling up vision and AI performance

#artificialintelligence

Demand is growing for faster processor architectures to support embedded vision and artificial intelligence. With the demand for image sensors growing rapidly and new opportunities emerging in the mobile, virtual reality (VR), automotive and surveillance markets, demand for applications that are able to mix vision and artificial intelligence (AI) is surging. "We are seeing work on a range of future applications, from phones that automatically identify the user to autonomous cars that are able to recognise an individual's driving style. But whatever the application, all of them are looking at vision sensors that use AI to make decisions," says Pulin Desai, Product Marketing Director for Cadence's Tensilica Vision DSP Product Line. "Each of them brings with it challenges for the design engineer."


basicmi/AI-Chip

#artificialintelligence

At Hot Chips 2019, Intel revealed new details of its upcoming high-performance artificial intelligence (AI) accelerators: the Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging, Intel Optane DC persistent memory and chiplet technology for optical I/O.

Myriad X is the first VPU to feature the Neural Compute Engine, a dedicated hardware accelerator for running on-device deep neural network applications. Interfacing directly with other key components via the intelligent memory fabric, the Neural Compute Engine delivers industry-leading performance per watt without the data-flow bottlenecks common to other architectures.

Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated (NASDAQ: QCOM), announced that it is bringing the company's artificial intelligence (AI) expertise to the cloud with the Qualcomm Cloud AI 100. Built from the ground up to meet the explosive demand for AI inference processing in the cloud, the Qualcomm Cloud AI 100 draws on the company's heritage in advanced signal processing and power efficiency. Qualcomm's fourth-generation on-device AI engine serves as a personal assistant for camera, voice, XR and gaming, delivering smarter, faster and more secure experiences; utilizing all cores, it packs three times the power of its predecessor for on-device AI.

With the open-source release of NVDLA's optimizing compiler on GitHub, system architects and software teams now have a starting point with the complete source for the world's first fully open software and hardware inference platform. Turing, the next generation of NVIDIA's GPU designs, incorporates a number of new features and is rolling out this year. Nvidia launched its second-generation DGX system in March. To build the 2-petaflops half-precision DGX-2, Nvidia first had to design and build a new NVLink 2.0 switch chip, named NVSwitch.
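The DGX-2's headline figure is straightforward arithmetic: the system houses 16 Tesla V100 GPUs, each rated at a peak of roughly 125 teraflops of half-precision tensor-core throughput, and 16 × 125 TFLOPS sums to 2 petaflops. A quick sanity check:

```python
# Peak half-precision (tensor core) throughput of one Tesla V100, in FLOPS.
# 125 TFLOPS is NVIDIA's published peak figure; real workloads achieve less.
v100_fp16_flops = 125e12

num_gpus = 16  # a DGX-2 connects 16 V100s via the NVSwitch fabric

total_pflops = num_gpus * v100_fp16_flops / 1e15
print(total_pflops)  # 2.0
```

The NVSwitch fabric matters because it lets all 16 GPUs share memory at full NVLink bandwidth, so the aggregate figure is usable rather than merely nominal.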