More efficient machine learning could upend the AI paradigm
In January, Google launched a new service called Cloud AutoML, which can automate some tricky aspects of designing machine-learning software. While working on this project, the company's researchers sometimes needed to run as many as 800 graphics chips in unison to train their powerful algorithms. Unlike humans, who can recognize coffee cups after seeing one or two examples, AI networks based on simulated neurons need to see tens of thousands of examples to identify an object. Imagine trying to learn to recognize every item in your environment that way, and you begin to understand why AI software requires so much computing power.

If researchers could design neural networks that could be trained to do certain tasks using only a handful of examples, it would "upend the whole paradigm," Charles Bergan, vice president of engineering at Qualcomm, told the crowd at MIT Technology Review's EmTech China conference earlier this week.
February 3, 2018, 20:00 GMT