Low-latency HD Inference - a New Treatment for Myo... - Community Forums
This is a guest post from Quenton Hall, AI System Architect for Industrial, Scientific, and Medical applications. One of the AI demo highlights at XDF2019 in San Jose was a high-performance inference demo built on Alveo. If you are familiar with Alveo and ML Suite, this might not seem especially novel at first glance. What was genuinely new, however, was the inference engine behind the demonstration. Whereas past Alveo ML inference implementations used the xDNN engine architecture, this latest demo runs a new version of the Xilinx DPU IP, specifically optimized for the Alveo U280 and Xilinx SSIT devices.
Oct-18-2019, 00:53:16 GMT