R1, R2, R4: Suggest more extensive analysis on Assumption 2 and the normalization step in Algorithm 1

Neural Information Processing Systems

We would like to thank the reviewers for their insightful feedback. In the following, we address their key concerns. Following the reviewers' suggestions, we will add a more thorough analysis in the final paper. Its advantages and applications would then be limited. Mixup was introduced in VPU as a regularizer to solve the overfitting problem (Table 4 and Lines 100-105, 376-384).


Neural Architecture Search for Intel Movidius VPU

Xu, Qian, Li, Victor, S, Crews Darren

arXiv.org Artificial Intelligence

Intel Movidius VPUs enable demanding computer vision and AI workloads with high efficiency. By coupling highly parallel programmable compute with workload-specific AI hardware acceleration in a unique architecture that minimizes data movement, Movidius VPUs achieve a balance of power efficiency and compute performance. The AI models from customers, however, are usually built generically and not designed for specific hardware, as Fig. 1 (left) shows. Because different AI accelerators have different designs, general models cannot fully utilize the hardware's capabilities. This creates an opportunity to design better models for the hardware: higher FPS at the same accuracy level, or higher accuracy at the same FPS. However, even for hardware specialists, the design space of possible networks is extremely large and impractical to explore by hand.
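The search described above can be sketched in miniature. This is a hypothetical illustration only: the search space, the latency and accuracy estimators, and all names below are assumptions for demonstration, not Intel's actual NAS tooling. It shows the core idea of hardware-aware search, i.e. keeping the most accurate candidate that still meets a throughput (FPS) floor on the target device.

```python
import random

# Illustrative search space; real VPU-targeted spaces are far larger.
SEARCH_SPACE = {
    "depth": [8, 12, 16, 20],
    "width_multiplier": [0.5, 0.75, 1.0, 1.25],
    "kernel_size": [3, 5, 7],
}

def sample_architecture(rng):
    """Draw one candidate network configuration from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def estimate_fps(arch):
    """Stand-in latency model: deeper/wider nets run slower on the device."""
    cost = arch["depth"] * arch["width_multiplier"] * arch["kernel_size"]
    return 3000.0 / cost

def estimate_accuracy(arch):
    """Stand-in accuracy proxy: capacity helps, with diminishing returns."""
    capacity = arch["depth"] * arch["width_multiplier"]
    return 1.0 - 1.0 / (1.0 + 0.2 * capacity)

def search(n_trials=200, min_fps=60.0, seed=0):
    """Random search: keep the most accurate candidate meeting the FPS floor."""
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        if estimate_fps(arch) < min_fps:
            continue  # violates the hardware throughput constraint
        acc = estimate_accuracy(arch)
        if acc > best_acc:
            best, best_acc = arch, acc
    return best, best_acc

best, acc = search()
```

Production NAS systems replace the stand-in estimators with measured on-device latency and trained-model accuracy, and replace random sampling with evolutionary or gradient-based strategies, but the accuracy-under-FPS-constraint objective is the same.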


Intel's futuristic Meteor Lake CPUs will focus on 'core AI capabilities'

PCWorld

Intel executives confirmed Tuesday that the company will make a concerted push to bring AI capabilities to PCs with its next-generation CPU cores, code-named Meteor Lake. Both Intel chief executive Pat Gelsinger and an Intel fellow, Rajshree Chabukswar, confirmed that there will be AI capabilities arriving with Meteor Lake, which will probably be named the 14th-gen Core chip. Intel formally revealed the desktop enthusiast version of its 13th-gen Core chip (Raptor Lake) this week at its Intel Innovation conference in San Jose. AI has proven to be a powerful business opportunity within the enterprise, with seemingly the majority of enterprise chip announcements emphasizing their inferencing ability. AI hardware can help train algorithms to assist with visual recognition, predictive capabilities, and more.


FLIR Systems Announces Industry-First Deep Learning-Enabled Camera Family

#artificialintelligence

WILSONVILLE, Ore.--(BUSINESS WIRE)--FLIR Systems, Inc. (NASDAQ: FLIR) today announced the FLIR Firefly camera family, the industry's first deep learning inference-enabled machine vision camera. The FLIR Firefly, which integrates the Intel Movidius Myriad 2 Vision Processing Unit (VPU), is designed for image analysis professionals using deep learning for more accurate decisions, and faster, easier system development. Traditional rules-based software is ideal for straightforward tasks such as barcode reading or checking a manufactured part against specifications. The FLIR Firefly combines a new, affordable machine vision platform with the power of deep learning to address complex and subjective problems such as recognizing faces or classifying the quality of a solar panel. The FLIR Firefly leverages the Intel Movidius Myriad 2 VPU's advanced capabilities in a compact and low-power camera, ideal for embedded and handheld systems.


Optimized chips push machine, deep learning to new heights - asmag.com

#artificialintelligence

The tech world's obsession with artificial intelligence is driving companies to develop better, more optimized solutions for running machine learning and deep learning algorithms. The latest chips are not only making AI more available to various industries, they are also driving better efficiency and increased accuracy. When it comes to artificial intelligence (AI), 2018 is looking to be a year of significant growth. This is largely due to big steps being made in machine learning and deep learning. The deep learning market alone is expected to be worth US$1.7 billion by 2022, growing at a compound annual growth rate (CAGR) of 65.3 percent over the forecast period from 2016 to 2022, according to a report by market research firm MarketsandMarkets. The report cites the major factors driving growth as robust R&D for the development of better processing hardware and increasing adoption of cloud-based technology for deep learning.


With Windows ML, Intel AI to Invade Mobile PCs EE Times

#artificialintelligence

It might not be too long before your average mobile PC will feature -- on its motherboard -- not just CPUs and GPUs but also an embedded AI inference chip, like the Intel/Movidius Vision Processing Unit (VPU). The first clue for this scenario unfolded in Microsoft Corp.'s launch announcement today, at its Windows Developer Day, of Windows ML, an open-standard framework for machine-learning tasks in the Windows OS. Microsoft said that it is extending Windows OS native support for the Intel/Movidius VPU. Implied in the message is that Intel/Movidius has taken a step closer to finding a home not just in embedded applications, such as drones and surveillance cameras, but also in Windows-based laptops and tablets. In a telephone interview with EE Times, Gary Brown, director of marketing at Movidius/Intel, confirmed, "Although today's announcement isn't about that [VPU integration on a mobile PC], yes, you will see VPU migrating into a PC motherboard."


Future Windows devices may come with dedicated AI processor - MSPoweruser

#artificialintelligence

During the Windows Developer Day event yesterday, Microsoft revealed the Windows AI platform, which will allow developers to build intelligent apps on Windows 10. One of the highlighted features of the WinML APIs is support for pre-trained machine learning models. Windows ML will efficiently use hardware for any given artificial intelligence (AI) workload and intelligently distribute work across multiple hardware types, including CPUs, GPUs and Intel's Vision Processing Units (VPUs). The Intel VPU is a purpose-built chip for accelerating AI workloads on client devices. Myriad X is the world's first system-on-chip (SoC) shipping with a dedicated Neural Compute Engine for accelerating deep learning inference at the edge.


Introducing Myriad X: Unleashing AI at the Edge Intel Newsroom

#artificialintelligence

Throughout my career, and now more than ever at Intel, I have dreamed about where technology will take us next, and it's even more exciting to be creating the future. Today, that future is here with the unveiling of the Myriad X, the world's first vision processing unit (VPU) to ship with a dedicated Neural Compute Engine to deliver artificial intelligence (AI) capabilities to the edge in an incredibly low-power, high-performance package. In the coming years, we'll see a huge range of new products emerge that are made more autonomous by embedding real-time intelligence capabilities in devices – from drones and smart cameras to augmented reality and more – to give them the ability to see, understand, interact with and learn from rapidly changing environments. Myriad X combines dedicated imaging, computer vision processing and – thanks to the industry-first Neural Compute Engine – high-performance deep learning inference within the same chip, and the results are opening up new realms of possibility. With this faster, more pervasive intelligence embedded directly into devices, the potential to make our world safer, more productive and more personal is limitless.