Google Hints About Its Homegrown TPUv4 AI Engines
Google may be buying heaven only knows how many GPUs to run HPC and AI workloads on its eponymous public cloud, and it may have talked recently about its commitment to pushing the industry to innovate at the SoC level while staying out of designing its own compute engines, but the company is still building its own Tensor Processing Units, or TPUs for short, to support its TensorFlow machine learning framework and the applications it drives, both within Google and as a service for Google Cloud customers.

If you were expecting a big reveal of the TPUv4 architecture from the search engine giant and machine learning pioneer at its Google I/O 2021 conference this week, you were no doubt, like us, sorely disappointed. In his two-hour keynote address, which you can see here, Google chief executive officer Sundar Pichai, who is also CEO of Google's parent company, Alphabet, ever so briefly talked about the forthcoming TPUv4 custom ASIC, designed by Google and presumably built by Taiwan Semiconductor Manufacturing Co like every other leading-edge compute engine on Earth.

As the name suggests, the TPUv4 chip is Google's fourth generation of machine learning BFloat16 processing beasts, which it weaves together with host systems and networking to create what amounts to a custom supercomputer. "This is the fastest system that we have ever deployed at Google – a historic milestone for us," Pichai explained in his keynote.
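For readers unfamiliar with the BFloat16 format that TPUs are built around: it keeps float32's sign bit and full 8-bit exponent but only 7 mantissa bits, trading precision for float32's dynamic range. A minimal sketch of that truncation in plain Python (an illustration of the format only, not Google's hardware, which typically rounds to nearest-even rather than truncating):

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Pack x as an IEEE float32, then keep only the top 16 bits.

    Those top 16 bits are the sign, the 8-bit exponent, and the
    7 highest mantissa bits -- exactly the bfloat16 layout.
    Truncation (round-toward-zero) is used here for simplicity.
    """
    f32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return f32_bits >> 16

def bfloat16_bits_to_float(bits: int) -> float:
    """Re-expand a 16-bit bfloat16 pattern back to a float value."""
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

# 1.0 survives the round trip exactly; pi loses its low mantissa bits.
print(bfloat16_bits_to_float(float_to_bfloat16_bits(1.0)))         # 1.0
print(bfloat16_bits_to_float(float_to_bfloat16_bits(3.14159265)))  # 3.140625
```

The point of the format, and why TPUs favor it for training, is that the exponent range matches float32, so gradients that would underflow in IEEE float16 still survive, while the halved storage doubles effective memory bandwidth.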
May-24-2021, 06:55:36 GMT