Teasing Out The Bang For The Buck Of Inference Engines


In this case, the benchmarks are for running the GoogLeNet V1 convolutional neural network with a batch size of 1, meaning that items to be identified are sent through in serial fashion rather than batched up to be chewed on all at once. This network came close to beating humans at image recognition, but it took Microsoft's ResNet in 2015 to accomplish that feat, with a 3.57 percent failure rate compared to 5.1 percent for humans. The baseline for performance that Xilinx chose was the smallest F1 FPGA-accelerated instance on the EC2 compute cloud at Amazon Web Services. This instance has a single Virtex UltraScale+ VU9P FPGA, which has 1.182 million LUTs, and it is attached to a server slice with eight vCPUs (based on the "Broadwell" Xeon E5-2696 v4 processor) and 122 GB of main memory.
