No cloud required: Why AI's future is at the edge - SiliconANGLE

#artificialintelligence

For all the promise and peril of artificial intelligence, there's one big obstacle to its seemingly relentless march: The algorithms for running AI applications have been so big and complex that they've required processing on powerful machines in the cloud and data centers, making a wide swath of applications less useful on smartphones and other "edge" devices. Now, that concern is quickly melting away, thanks to a series of breakthroughs in recent months in software, hardware and energy technologies that are rapidly coming to market. That's likely to drive AI-driven products and services even further away from a dependence on powerful cloud-computing services and enable them to move into every part of our lives -- even inside our bodies. In turn, that could finally usher in what the consulting firm Deloitte late last year called "pervasive intelligence," shaking up industries in coming years as AI services become ubiquitous. By 2022, 80% of smartphones shipped will have AI capabilities on the device itself, up from 10% in 2017, according to market researcher Gartner Inc.


As AI moves to the chip, mobile devices are about to get much smarter

#artificialintelligence

The branch of artificial intelligence called deep learning has given us new wonders such as self-driving cars and instant language translation on our phones. Now it's about to inject smarts into every other object imaginable. That's because makers of silicon processors, from giants such as Intel Corp. and Qualcomm Technologies Inc. to a raft of smaller companies, are starting to embed deep learning software into their chips, particularly for mobile vision applications. In fairly short order, that's likely to lead to much smarter phones, drones, robots, cameras, wearables and more. "Consumers will be genuinely amazed at the capabilities of these devices," said Cormac Brick, vice president of machine learning for Movidius Ltd., a maker of vision processor chips in San Mateo, Calif.


CEVA's 2nd Generation Neural Network Software Framework Extends Support for Artificial Intelligence Including Google's TensorFlow

#artificialintelligence

CVPR 2016 -- CEVA, Inc. (NASDAQ: CEVA), the leading licensor of signal processing IP for smarter, connected devices, today introduced CDNN2 (CEVA Deep Neural Network), its second generation neural network software framework for machine learning. CDNN2 enables localized, deep learning-based video analytics on camera devices in real time. This significantly reduces data bandwidth and storage compared to running such analytics in the cloud, while lowering latency and increasing privacy. Coupled with the CEVA-XM4 intelligent vision processor, CDNN2 offers significant time-to-market and power advantages for implementing machine learning in embedded systems for smartphones, advanced driver assistance systems (ADAS), surveillance equipment, drones, robots and other camera-enabled smart devices. CDNN2 builds on the successful foundations of CEVA's first generation neural network software framework (CDNN), which is already in design with multiple customers and partners.
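The bandwidth argument for on-device analytics is easy to make concrete. The sketch below uses illustrative figures (not CEVA's numbers): a camera streaming raw 1080p video to the cloud versus one running inference locally and uploading only detection metadata.

```python
# Rough sketch (illustrative numbers, not from CEVA): compare the uplink
# bandwidth of streaming raw video to the cloud for analytics versus
# running inference on-device and sending only detection metadata.

def daily_upload_gb(bitrate_mbps: float, hours: float = 24.0) -> float:
    """Gigabytes uploaded per day at a given sustained bitrate."""
    return bitrate_mbps / 8 * 3600 * hours / 1000  # Mbps -> MB/s -> GB/day

# Assumed figures: ~4 Mbps for a 1080p H.264 stream, ~2 kbps for
# bounding-box metadata emitted by an on-device detector.
cloud_gb = daily_upload_gb(4.0)    # stream everything to the cloud
edge_gb = daily_upload_gb(0.002)   # send only detection results

print(f"cloud: {cloud_gb:.1f} GB/day, edge: {edge_gb:.4f} GB/day")
print(f"reduction: {cloud_gb / edge_gb:.0f}x")
```

Under these assumptions the raw stream costs about 43 GB of uplink per camera per day, while metadata costs roughly 22 MB, a three-orders-of-magnitude reduction, which is the kind of saving the excerpt's bandwidth and storage claim rests on.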


MicroSys Partners with AI Chipmaker Hailo on Embedded AI Platform - insideHPC

#artificialintelligence

Munich and Tel Aviv, September 30, 2021 – MicroSys Electronics announced today its partnership with artificial intelligence chipmaker Hailo to launch its miriac AIP-LX2160A embedded platform, hosting up to 5 integrated Hailo-8 AI accelerator modules. The new edge server-grade AI solution enables high-performance and scalable AI inference capabilities at the edge. The new, application-ready AI platform offers industries a high-bandwidth and power-efficient solution at the edge for a range of applications in Industry 4.0, such as automotive and heavy machinery. Powered by the NXP QorIQ Layerscape LX2160A high-throughput processor technology, the miriac AIP-LX2160A can integrate multiple advanced Hailo-8 AI accelerators and offers best-in-class processing performance and deep learning capabilities of up to 130 tera-operations per second (TOPS). The combined solution delivers exceptional AI computing performance across multiple standard NN benchmarks, including over 6,000 frames per second (FPS) on ResNet-50, over 5,000 FPS on MobileNet-V1 SSD and close to 1,000 FPS on YOLOv5m.
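The headline figures above can be turned into a per-frame compute budget with simple arithmetic. The TOPS and FPS values are from the announcement; the derived budget is our own back-of-the-envelope calculation, not a vendor specification.

```python
# Back-of-the-envelope sketch: convert the platform's headline numbers
# (130 TOPS aggregate, >6,000 FPS on ResNet-50, per the announcement)
# into the compute available per frame. Derived arithmetic only.

def gops_per_frame(tops: float, fps: float) -> float:
    """Operations available per frame, in giga-ops, at a given frame rate."""
    return tops * 1e12 / fps / 1e9

budget = gops_per_frame(130, 6000)
print(f"~{budget:.1f} GOPs available per ResNet-50 frame")
```

At 6,000 FPS the five accelerators together leave roughly 21–22 giga-operations of budget per frame, comfortably above the few giga-ops a single ResNet-50 inference typically requires, which is consistent with the claimed throughput.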


EETimes - Let's Talk Edge Intelligence

#artificialintelligence

When new industry buzzwords or phrases come up, the challenge for people like us who write about the topic is figuring out what exactly a company means, especially when it uses the phrase to fit its own marketing objective. The latest one is edge artificial intelligence, or edge AI. Because of the proliferation of the internet of things (IoT) and the ability to add a fair amount of compute power or processing to enable intelligence within those devices, the 'edge' can be quite wide, and could mean anything from the 'edge of a gateway' to an 'endpoint'. So, we decided to find out if there was consensus in the industry on the definition of edge vs. endpoint, who would want to add edge AI, and how much 'smartness' you could add to the edge. First of all, what is the difference between edge and endpoint? Well, it depends on your viewpoint -- anything not in the cloud could be defined as edge. Probably the clearest definition was from Wolfgang Furtner, Infineon Technologies' senior principal for concept and system engineering.