Algorithm Speeds GPU-based AI Training 10x on Big Data Sets (EE Times)
IBM Zurich researchers have developed a generic artificial-intelligence preprocessing building block that accelerates Big Data machine learning algorithms by at least 10 times over existing methods, the company announced Monday.

"Our motivation was how to use hardware accelerators, such as GPUs [graphics processing units] and FPGAs [field-programmable gate arrays], when they do not have enough memory to hold all the data points" for Big Data machine learning, IBM Zurich collaborator Celestine Dünner, co-inventor of the algorithm, told EE Times in advance of the announcement.

"To the best of our knowledge, we are the first to have a generic solution with a 10x speedup," said co-inventor Thomas Parnell, an IBM Zurich mathematician. "Specifically, for traditional, linear machine learning models -- which are widely used for data sets that are too big for neural networks to train on -- we have implemented the techniques on the best reference schemes and demonstrated a minimum of a 10x speedup."
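The article describes the idea only at a high level: when the accelerator cannot hold all n training points, a host-side preprocessing step repeatedly picks the subset of points most informative to the current model and ships only that subset to the device. The sketch below is a minimal, hypothetical illustration of that pattern for a linear model, using per-example hinge loss as the importance score; the function name, the scoring heuristic, and all parameters are assumptions, not IBM's actual algorithm (which the researchers describe as duality-gap based).

```python
import numpy as np

def train_with_limited_memory(X, y, device_capacity, epochs=20, lr=0.1):
    """Hypothetical sketch: train a linear SVM when the accelerator can
    hold only `device_capacity` of the n training points at a time.

    Each epoch, the host scores every example and sends only the
    `device_capacity` worst-handled points to the "device" for updates --
    a stand-in for the importance-based selection the IBM work describes.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Host-side importance score: examples the current model handles
        # worst (largest hinge loss) are treated as most informative.
        margins = y * (X @ w)
        scores = np.maximum(0.0, 1.0 - margins)
        keep = np.argsort(-scores)[:device_capacity]
        # "On-device" pass over the selected subset only.
        Xb, yb = X[keep], y[keep]
        for i in np.random.permutation(device_capacity):
            if yb[i] * (Xb[i] @ w) < 1.0:  # hinge subgradient step
                w += lr * yb[i] * Xb[i]
    return w
```

The point of the sketch is the data movement pattern, not the learner: scoring and selection stay on the memory-rich host, while the compute-heavy inner loop touches only a subset sized to fit accelerator memory.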
Dec-13-2017, 18:06:47 GMT