CubicML: Automated ML for Distributed ML Systems Co-design with ML Prediction of Performance

Wen, Wei, Zhu, Quanyu, Chu, Weiwei, Chen, Wen-Yen, Yang, Jiyan

arXiv.org Artificial Intelligence

Scaling up deep learning models has proven effective for improving the intelligence of machine learning (ML) models, especially for industrial recommendation models and large language models. The co-design of distributed ML systems and algorithms (to maximize training performance) plays a pivotal role in this success. As models scale, the number of co-design hyper-parameters grows rapidly, making it infeasible to exhaustively find the setup that maximizes system performance. In this paper, we propose CubicML, which uses ML to automatically optimize the training performance of distributed ML systems. In CubicML, an ML model serves as a proxy that predicts training performance, providing both search efficiency and flexibility in performance modeling. We show that CubicML effectively optimizes the training speed of in-house ads recommendation models and large language models at Meta.
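The core idea in the abstract (benchmark a few hyper-parameter configurations for real, fit a proxy model on those measurements, then use the cheap proxy to rank many more candidates) can be sketched as follows. This is a minimal illustration, not Meta's actual implementation: the function names, the toy search space (batch size and pipeline stages), the synthetic speed function, and the nearest-neighbour proxy are all assumptions standing in for the real benchmark and the learned performance model described in the paper.

```python
import random

def measure_speed(config):
    # Stand-in for a real (expensive) benchmark of a distributed training job.
    # Toy ground truth: speed peaks at a moderate batch size and benefits
    # from more pipeline stages, with diminishing returns.
    batch, stages = config
    return -(batch - 512) ** 2 / 1e4 + 10 * stages ** 0.5

def fit_proxy(observations):
    # A 1-nearest-neighbour predictor: a deliberately simple stand-in for
    # the learned ML performance model described in the paper.
    def predict(config):
        nearest = min(
            observations,
            key=lambda obs: sum((a - b) ** 2 for a, b in zip(obs[0], config)),
        )
        return nearest[1]
    return predict

def search(n_measured=8, n_candidates=200, seed=0):
    rng = random.Random(seed)
    sample = lambda: (rng.choice([128, 256, 512, 1024, 2048]), rng.randint(1, 8))
    # Phase 1: benchmark a handful of configurations for real (expensive).
    observed = [(cfg, measure_speed(cfg)) for cfg in {sample() for _ in range(n_measured)}]
    proxy = fit_proxy(observed)
    # Phase 2: rank many candidate configurations with the cheap proxy
    # and return the one predicted to train fastest.
    return max((sample() for _ in range(n_candidates)), key=proxy)
```

In practice the proxy would be a trained regression model over the full co-design space, and phase 1/phase 2 would alternate so that promising predicted configurations are benchmarked and fed back into the proxy's training set.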