Disaggregated Multi-Tower: Topology-aware Modeling Technique for Efficient Large-Scale Recommendation

Liang Luo, Buyun Zhang, Michael Tsang, Yinbin Ma, Ching-Hsiang Chu, Yuxin Chen, Shen Li, Yuchen Hao, Yanli Zhao, Guna Lakshminarayanan, Ellie Dingqiao Wen, Jongsoo Park, Dheevatsa Mudigere, Maxim Naumov

arXiv.org Artificial Intelligence 

We study a mismatch between the flat architecture of deep learning recommendation models, the common distributed training paradigm, and the hierarchical data center topology. Since the embedding tables can be huge, state-of-the-art practice trains these models in a hybrid fashion: the sparse embedding tables are sharded across trainers (model parallelism), while the dense parameters are replicated and synchronized through AllReduce operations (data parallelism). To address the inefficiencies associated with this mismatch, we propose Disaggregated Multi-Tower (DMT), a modeling technique that consists of (1) Semantic-preserving Tower Transform (SPTT), a novel training paradigm that decomposes the monolithic global embedding lookup process into disjoint towers to exploit data center locality; (2) Tower Module (TM), a synergistic dense component attached to each tower that reduces model complexity and communication volume through hierarchical feature interaction; and (3) Tower Partitioner (TP), a feature partitioner that uses learned embeddings to systematically create towers with meaningful feature interactions and load-balanced assignments, preserving both model quality and training throughput. We show that DMT achieves up to 1.9x speedup over state-of-the-art baselines without losing accuracy, across multiple generations of hardware at large data center scale.

Affiliation note: Nvidia; work done while at Meta.
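To make the tower decomposition concrete, the following is a minimal PyTorch-style sketch of the general idea rather than the paper's implementation: embedding tables are grouped into disjoint towers, each tower looks up and interacts its own features with a small per-tower dense block (an analogue of the Tower Module), and only the compressed tower outputs would need to cross the slower inter-host network. All class and feature names here (TowerModule, MultiTowerSparseArch, user_id, item_id, and so on) are hypothetical, and the cross-host all-to-all exchange is elided.

import torch
import torch.nn as nn


class TowerModule(nn.Module):
    # Per-tower dense block: interacts and compresses the tower's local
    # embeddings before anything crosses the inter-host network.
    def __init__(self, num_features, emb_dim, out_dim):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(num_features * emb_dim, out_dim),
            nn.ReLU(),
        )

    def forward(self, tower_embs):
        # tower_embs: [batch, num_features, emb_dim] -> [batch, out_dim]
        return self.proj(tower_embs.flatten(start_dim=1))


class MultiTowerSparseArch(nn.Module):
    # Groups embedding tables into disjoint towers; each tower looks up and
    # interacts its own features locally through its TowerModule.
    def __init__(self, tower_feature_groups, num_embeddings, emb_dim=16, out_dim=64):
        super().__init__()
        self.tower_feature_groups = tower_feature_groups
        self.tables = nn.ModuleList(
            nn.ModuleList(nn.Embedding(num_embeddings, emb_dim) for _ in group)
            for group in tower_feature_groups
        )
        self.tower_modules = nn.ModuleList(
            TowerModule(len(group), emb_dim, out_dim) for group in tower_feature_groups
        )

    def forward(self, sparse_ids):
        tower_outputs = []
        for group, tables, tm in zip(self.tower_feature_groups, self.tables, self.tower_modules):
            embs = torch.stack(
                [table(sparse_ids[feat]) for feat, table in zip(group, tables)], dim=1
            )
            tower_outputs.append(tm(embs))  # compressed per-tower representation
        # In DMT the small tower outputs would be exchanged across hosts
        # (all-to-all) before any global interaction; here we simply concatenate.
        return torch.cat(tower_outputs, dim=1)


# Usage: two towers, each owning two sparse features (hypothetical grouping).
groups = [["user_id", "user_geo"], ["item_id", "item_cat"]]
model = MultiTowerSparseArch(groups, num_embeddings=1000)
batch = {feat: torch.randint(0, 1000, (8,)) for group in groups for feat in group}
print(model(batch).shape)  # torch.Size([8, 128])

The key communication saving suggested by the abstract is that each tower emits a representation much smaller than the concatenation of its raw embeddings, so the inter-host traffic scales with the number of towers rather than the number of sparse features.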
