Arm Adds Muscle To Machine Learning, Embraces Bfloat16

Arm Holdings has announced that the next revision of its Armv8-A architecture will include support for bfloat16, a floating point format that is increasingly being used to accelerate machine learning applications. It joins Google, Intel, and a handful of startups, all of whom are etching bfloat16 into their respective silicon. Bfloat16, aka 16-bit "brain floating point," was invented by Google and first implemented in its third-generation Tensor Processing Unit (TPU). Intel thought highly enough of the format to incorporate bfloat16 in its future "Cooper Lake" Xeon SP processors, as well as in its upcoming "Spring Crest" neural network processors. Wave Computing, Habana Labs, and Flex Logix followed suit with their custom AI processors. The main idea of bfloat16 is to provide a 16-bit floating point format that has the same dynamic range as a standard IEEE FP32 value, but with less precision: it keeps FP32's 8-bit exponent while shrinking the mantissa from 23 bits to 7.
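Because bfloat16 is simply the top 16 bits of an FP32 value, conversion can be as cheap as dropping the low half of the word. The sketch below illustrates that relationship in Python using truncation; real hardware typically applies round-to-nearest-even rather than plain truncation, and the function names here are illustrative, not part of any library.

```python
import struct

def fp32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 FP32 value to its top 16 bits (bfloat16).

    Keeps sign (1 bit) + exponent (8 bits) + mantissa (7 bits),
    so the dynamic range of FP32 is preserved, only precision is lost.
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_fp32(b: int) -> float:
    """Widen bfloat16 back to FP32 by zero-padding the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

# 1.0 survives the round trip exactly; pi loses only mantissa precision.
print(hex(fp32_to_bfloat16_bits(1.0)))                    # 0x3f80
print(bfloat16_bits_to_fp32(fp32_to_bfloat16_bits(3.14159)))
```

The round trip through bfloat16 keeps values like 1.0 exact (their low mantissa bits are already zero) while a value like pi comes back as roughly 3.14, which is precisely the trade-off the format makes: full FP32 exponent range, reduced significand.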