Algorithm Speeds GPU-based AI Training 10x on Big Data Sets (EE Times)


IBM Zurich researchers have developed a generic artificial-intelligence preprocessing building block that accelerates Big Data machine-learning algorithms by at least 10 times over existing methods. IBM presented the approach on Monday.

"Our motivation was how to use hardware accelerators, such as GPUs [graphics processing units] and FPGAs [field-programmable gate arrays], when they do not have enough memory to hold all the data points" for Big Data machine learning, IBM Zurich collaborator Celestine Dünner, co-inventor of the algorithm, told EE Times in advance of the announcement.

"To the best of our knowledge, we are the first to have a generic solution with a 10x speedup," said co-inventor Thomas Parnell, an IBM Zurich mathematician. "Specifically, for traditional, linear machine-learning models -- which are widely used for data sets that are too big for neural networks to train on -- we have implemented the techniques on the best reference schemes and demonstrated a minimum of a 10x speedup."
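Dünner's description points at the core idea: when accelerator memory cannot hold the full training set, keep only a carefully chosen subset of data points on the device, train the linear model against that working set, and refresh the set as training progresses. The sketch below illustrates that pattern in plain NumPy; the loss-based selection heuristic, the logistic-regression objective, and all parameter values are illustrative assumptions for this article, not details of IBM's published method.

```python
# Minimal sketch (not IBM's algorithm): train a linear model when the
# accelerator can only hold a fraction of the data. A small "working set"
# of points is kept in (simulated) device memory, trained on intensively,
# then refreshed using an importance score -- here simply each point's
# current loss, an illustrative heuristic only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "big" data set: n points, d features, labels in {-1, +1}.
n, d = 20000, 50
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = np.sign(X @ true_w + 0.1 * rng.normal(size=n))

DEVICE_CAPACITY = 2000   # points that fit in accelerator memory at once (assumed)
OUTER_ROUNDS = 10        # how many times the working set is refreshed
INNER_EPOCHS = 5         # gradient passes over the working set per refresh
LR = 0.1

w = np.zeros(d)

def logistic_loss(Xb, yb, w):
    """Per-example logistic loss, used for both training and point selection."""
    return np.log1p(np.exp(-yb * (Xb @ w)))

for r in range(OUTER_ROUNDS):
    # Score every point on the host, then copy the hardest ones to the device.
    scores = logistic_loss(X, y, w)
    device_idx = np.argsort(scores)[-DEVICE_CAPACITY:]
    Xd, yd = X[device_idx], y[device_idx]

    # Intensive training on the in-memory working set (plain gradient steps).
    for _ in range(INNER_EPOCHS):
        margins = yd * (Xd @ w)
        grad = -(Xd.T @ (yd / (1.0 + np.exp(margins)))) / len(yd)
        w -= LR * grad

    print(f"round {r}: mean loss over full data = {logistic_loss(X, y, w).mean():.4f}")
```

In a real heterogeneous setup, the scoring pass would presumably run on the CPU over data streamed from main memory or disk, while only the selected working set is copied to the GPU or FPGA for the compute-heavy inner iterations.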
