Machine-learning (ML) solutions are proliferating across a wide variety of industries, but the overwhelming majority of commercial implementations still rely on digital logic. With the exception of in-memory computing, analog approaches have mostly been restricted to universities and attempts at neuromorphic computing. However, that's starting to change.

"Everyone's looking at the fact that deep neural networks are so energy-intensive when you implement them in digital, because you've got all these multiply-and-accumulates, and they're so deep, that they can suck up enormous amounts of power," said Elias Fallon, software engineering group director for the Custom IC & PCB Group at Cadence.

Some suggest we're reaching a limit with digital. "Digital architectural approaches have hit the wall to solve the deep neural network MAC (multiply-accumulate) operations," said Sumit Vishwakarma, product manager at Siemens EDA. "As the size of the DNN increases, weight access operations result in huge energy consumption."

The current analog approaches aren't attempting to define an entirely new ML paradigm. "The last 50 years have all been focused on digital processing, and for good reason," said Thomas Doyle, CEO and co-founder of Aspinity.
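To get a feel for why MAC operations dominate, consider a rough back-of-the-envelope sketch: every fully connected layer of shape n_in × n_out performs n_in × n_out multiply-accumulates per input sample, and each MAC implies a weight fetch from memory. The layer sizes below are hypothetical, chosen only to illustrate how quickly the count grows with width and depth.

```python
# Rough sketch: counting multiply-accumulate (MAC) operations in a
# fully connected network. Layer sizes are hypothetical examples,
# not taken from any specific design discussed in the article.

def dense_macs(layer_sizes):
    """Each dense layer (n_in -> n_out) performs n_in * n_out MACs
    per input sample; every MAC also implies a weight access, which
    is where much of the energy is spent in a digital implementation."""
    return sum(n_in * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A small hypothetical DNN: 784 inputs, three 512-wide hidden layers,
# 10 outputs -- already nearly a million MACs for a single inference.
sizes = [784, 512, 512, 512, 10]
print(dense_macs(sizes))  # 930816
```

Scaling any dimension multiplies the count: doubling every layer width roughly quadruples the MACs per layer, which is the growth Vishwakarma's comment about weight-access energy points at.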
May 7, 2021, 23:05:44 GMT