Flex Logix Improves Deep Learning Performance By 10X With New EFLX4K AI eFPGA Core
This new core is specifically designed to deliver a 10X improvement in deep learning performance, enabling more neural-network processing per square millimeter. Many companies use FPGAs to implement AI, and more specifically machine learning, deep learning, and neural networks. The key function needed for AI is the matrix multiplier, which consists of arrays of MACs (multiplier-accumulators). In existing FPGAs and eFPGAs, the MACs are optimized for DSP workloads, with larger multipliers, pre-adders, and other logic that are overkill for AI. For AI applications, smaller multipliers of 16 or 8 bits, with accumulators and the ability to support both modes, allow more neural-network processing per square millimeter.
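To make the MAC-array idea concrete, the sketch below shows a matrix multiply built from multiply-accumulate operations with operands clamped to a narrow signed width (8 or 16 bits) and a wider accumulator, the structure the article describes. This is an illustrative software model only, not Flex Logix's hardware implementation; the function name and parameters are hypothetical.

```python
def mac_matmul(a, b, width=8):
    """Matrix multiply built from MAC (multiply-accumulate) operations.

    Operands are clamped to `width`-bit signed integers (e.g. 8 or 16),
    mirroring the narrow multipliers used for neural-network inference;
    the accumulator is kept wider so partial sums do not overflow.
    """
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    clamp = lambda x: max(lo, min(hi, x))  # saturate to the operand width
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0  # wide accumulator
            for k in range(inner):
                acc += clamp(a[i][k]) * clamp(b[k][j])  # one MAC operation
            out[i][j] = acc
    return out

# Example: 2x2 multiply with 8-bit operands
print(mac_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Halving the operand width roughly halves multiplier area, which is why an FPGA fabric that supports 8-bit MACs directly can pack more neural-network throughput into the same silicon.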
Jun-25-2018, 12:51:16 GMT