Adaptive compute acceleration platform for PCIe
Based around the Xilinx Versal AI Core series, the ADM-PA100 offers fully customizable I/O and meets requirements for a range of markets, including data center, machine learning, HPC, scientific instrumentation, and test and measurement. The Versal AI Core series includes an array of Xilinx AI Engines (dedicated VLIW processors capable of vector math processing at compute densities 5x higher than programmable logic), closely coupled with programmable logic to allow highly efficient implementation of custom coprocessing operations in the data flow.

The Xilinx Versal series of devices also features an on-chip programmable network on chip (NoC) that improves programmable logic routing in large designs, dedicated hardened IP for multi-rate 100G Ethernet, hardened PCIe Gen4 endpoints with DMA outside the programmable logic, hardened DDR4 memory controllers, built-in Arm A72 and R5F CPUs, and programmable logic and DSP performance a generation on from UltraScale devices, says the company.

Manuel Uhm, director of Silicon Marketing at Xilinx, says, "The hardware adaptability and heterogeneous architecture of Versal AI Core ACAPs are a key advantage over traditional accelerators that typically focus on a subset of applications. This enables the creation of multiple domain specific architectures targeted to specific workloads. We're delighted that Alpha Data has chosen Versal AI Core series for its ADM-PA100 board to accelerate a breadth of workloads in cloud, networking, and edge markets."
Xilinx is Expanding the Boundaries of FPGA Acceleration with Versal and the Adaptive Compute Acceleration Platform (ACAP) - DornerWorks
Being innovative--and staying innovative--means being able to adapt. Xilinx is proving just how adaptive technology can be by bringing the hardware programmability of a Field Programmable Gate Array (FPGA) to cloud computing, big data, and artificial intelligence. The Adaptive Compute Acceleration Platform (ACAP), the first release of which is Versal, features hardware-programmable DSP blocks, a multicore System-on-Chip (SoC), distributed memory, and an adaptive, software-programmable compute engine, all populated on FPGA fabric and connected through a Network-on-Chip (NoC). According to Xilinx, Versal's capabilities are further bolstered by highly integrated programmable I/O functionality. FPGAs accelerate algorithms, yet development comes with a steep learning curve.
Neural-Network Hardware Drives the Latest Machine-Learning Craze
Artificial-intelligence (AI) research covers a number of topics, including machine learning (ML). ML covers a lot of ground as well, from rule-based expert systems to the latest hot trend--neural networks. Neural networks are changing how developers solve problems, whether for self-driving cars or the industrial Internet of Things (IIoT). Neural networks come in many forms, but deep neural networks (DNNs) are the most important at this point. A DNN consists of multiple layers, including input and output layers plus multiple hidden layers (Figure 1).
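The layered structure described above can be sketched in a few lines of NumPy. This is a minimal, illustrative forward pass only (no training), and the layer sizes and activation choice (ReLU) are assumptions for the example, not details from the article:

```python
import numpy as np

def relu(x):
    """Common hidden-layer activation: zero out negative values."""
    return np.maximum(0.0, x)

def forward(x, layers):
    """Propagate an input vector through a stack of (weights, bias)
    pairs: ReLU on the hidden layers, raw values at the output layer."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:  # hidden layers only
            x = relu(x)
    return x

# Toy DNN: 3 inputs -> two hidden layers of 4 units -> 2 outputs
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((3, 4)), np.zeros(4)),
    (rng.standard_normal((4, 4)), np.zeros(4)),
    (rng.standard_normal((4, 2)), np.zeros(2)),
]
out = forward(np.ones(3), layers)
print(out.shape)  # a 2-element output vector
```

Each `(W, b)` pair is one layer; the hidden layers sit between the 3-wide input and the 2-wide output, matching the input/hidden/output arrangement in Figure 1.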