Intel Extends FPGA Ecosystem: Edge, Network, Data Center


The insatiable appetite for higher throughput and lower latency – particularly for edge analytics and AI, network functions, and a range of data center acceleration needs – has increasingly compelled IT managers and chip makers to venture beyond CPUs and GPUs. The ability of FPGAs, with their "inherent parallelism" (see below), to handle specialized workloads in AI- and HPDA-related implementations has drawn greater investment from IT decision makers and vendors, who see growing justification for taking on the challenge of FPGA programming. Of course, adoption of unfamiliar technologies is always painful and slow, particularly when they lack a built-out ecosystem of frameworks and APIs that simplify their use.

Why are FPGAs bursting out of their communications, industrial and military niches and into the data center? Partly because of the limits of CPUs, which have their roots on the desktop and were, said Steve Conway, senior research VP at Hyperion Research, never really intended for advanced computing.
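To make the "frameworks and APIs" point concrete, the sketch below shows what higher-level FPGA programming can look like with a SYCL-style data-parallel kernel, the model Intel's oneAPI toolchain uses to retarget the same source at CPUs, GPUs, or FPGAs. This is an illustrative example only, not code from the article; the vector-add kernel and the use of the default device selector are assumptions for the sake of a minimal, portable demo.

```cpp
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // Pick a device. On a system with FPGA support the queue could target
    // an FPGA; default_selector_v keeps this sketch runnable anywhere.
    sycl::queue q{sycl::default_selector_v};

    {
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(N));

        // The parallel_for expresses the data parallelism that an FPGA
        // (or GPU/CPU) backend can exploit, without hardware-level HDL.
        q.submit([&](sycl::handler& h) {
            sycl::accessor acc_a(buf_a, h, sycl::read_only);
            sycl::accessor acc_b(buf_b, h, sycl::read_only);
            sycl::accessor acc_c(buf_c, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                acc_c[i] = acc_a[i] + acc_b[i];
            });
        });
    } // Buffers go out of scope here, copying results back to the host.

    std::cout << "c[0] = " << c[0] << std::endl;  // expected: 3
    return 0;
}
```

The point of such APIs is that the parallel structure is stated once in portable source, and the toolchain, rather than the developer, handles mapping it onto FPGA fabric.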