ISSCC2021: Artificial intelligence chips
IBM and Samsung both presented ML processors for phones, where local AI processing will remove the need for cloud participation, so long as the architectures are powerful and flexible enough to cope with diverse workloads: able to reconfigure to suit neural networks that vary in bit-width, layer count and their other dimensions.

Hybrid 8-bit floating point (HFP8) is a format invented at IBM (revealed in 2019) to overcome the limitations of the standard 8-bit (1 sign, 5 exponent, 2 mantissa) FP8 floating-point format, which works well when training certain standard neural networks but yields poor accuracy when training others. Hybrid FP8 uses 4 exponent and 3 mantissa bits for forward propagation, then 5 exponent and 2 mantissa bits for back-propagation, significantly increasing training accuracy, according to the company (see the sketch below).

The chip's four cores are linked by a pair of wide, fast data rings, one for clockwise transfer and one for anti-clockwise transfer. These rings can be kept closed within the chip, or opened and routed through external memory or multiple identical chips to process larger networks.
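To make the two HFP8 sub-formats concrete, here is a minimal sketch of rounding values into a generic sign/exponent/mantissa minifloat, applied with the 1-4-3 forward and 1-5-2 backward bit splits described above. The rounding mode, exponent bias and the decision not to reserve Inf/NaN codes are assumptions for illustration, not details IBM has confirmed:

```python
import numpy as np

def quantize_minifloat(x, exp_bits, man_bits, exp_bias=None):
    """Round values to the nearest representable number in a
    sign/exponent/mantissa minifloat. Sketch only: rounding mode
    and exponent bias are assumed, not IBM's published spec."""
    if exp_bias is None:
        exp_bias = 2 ** (exp_bits - 1) - 1  # assumed IEEE-style bias
    x = np.asarray(x, dtype=np.float64)
    sign = np.sign(x)
    mag = np.abs(x)

    # Exponent of each value, clamped to the format's range
    # (no codes reserved for Inf/NaN -- an assumption).
    min_exp = 1 - exp_bias
    max_exp = (2 ** exp_bits - 1) - exp_bias
    exp = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))
    exp = np.clip(exp, min_exp, max_exp)

    # Round the mantissa to man_bits fractional bits; clamping the
    # exponent above also yields subnormal spacing near zero.
    scale = 2.0 ** (exp - man_bits)
    q = np.round(mag / scale) * scale

    # Saturate at the largest representable magnitude.
    max_val = (2.0 - 2.0 ** -man_bits) * 2.0 ** max_exp
    return sign * np.minimum(q, max_val)

w = np.array([0.30, -1.7, 3.14159, 0.004])
print(quantize_minifloat(w, exp_bits=4, man_bits=3))  # forward: 1-4-3
print(quantize_minifloat(w, exp_bits=5, man_bits=2))  # backward: 1-5-2
```

The split reflects the differing demands of the two passes: forward activations benefit from the extra mantissa bit (precision), while back-propagated gradients span a wider magnitude range and so get the extra exponent bit instead.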
Mar-5-2021, 15:00:08 GMT