Two Startups Use Processing in Flash Memory for AI at the Edge

IEEE Spectrum Robotics 

Irvine, Calif.-based Syntiant thinks it can use embedded flash memory to greatly reduce the amount of power needed to perform deep-learning computations. Austin, Texas-based Mythic thinks it can use embedded flash memory to greatly reduce the amount of power needed to perform deep-learning computations. They both might be right. A growing crowd of companies is hoping to deliver chips that accelerate otherwise onerous deep-learning applications, and to some degree they all have similarities, because "these are solutions that are created by the shape of the problem," explains Mythic founder and CTO Dave Fick. When executed on a CPU, that problem is shaped like a traffic jam of data.
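To see why Fick calls the problem a traffic jam, consider that deep-learning inference is dominated by multiply-accumulate operations: every weight of a layer must be hauled from memory to the CPU's arithmetic units before it can be used once. A toy NumPy sketch (the layer sizes here are hypothetical, chosen only for illustration) makes the scale of that data movement concrete:

```python
import numpy as np

# A single fully connected layer is, computationally, just a pile of
# multiply-accumulate (MAC) operations. Hypothetical sizes for illustration.
rng = np.random.default_rng(0)
inputs = rng.random(256)           # one input activation vector
weights = rng.random((1000, 256))  # 1,000 output neurons x 256 inputs

# On a conventional CPU, all 256,000 weights must travel from memory to
# the arithmetic units to compute this one layer -- the "traffic jam."
outputs = weights @ inputs         # 256,000 multiplies and 256,000 adds

print(weights.size)   # number of weight values that must be moved
print(outputs.shape)  # one result per output neuron
```

Processing-in-memory schemes like Syntiant's and Mythic's aim to perform those multiplies and sums where the weights are stored, so the weights never make that trip at all.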