DYNAMIX: RL-based Adaptive Batch Size Optimization in Distributed Machine Learning Systems
Yuanjun Dai, Keqiang He, An Wang
arXiv.org Artificial Intelligence, Oct-10-2025
Abstract--Existing batch size selection approaches in distributed machine learning rely on static allocation or simplistic heuristics that fail to adapt to heterogeneous, dynamic computing environments. We present DYNAMIX, a reinforcement learning framework that formulates batch size optimization as a sequential decision-making problem using Proximal Policy Optimization (PPO). Our approach employs a multi-dimensional state representation encompassing network-level metrics, system-level resource utilization, and training statistical efficiency indicators to enable informed decision-making across diverse computational resources. It eliminates the need for explicit system modeling while integrating seamlessly with existing distributed training frameworks. Across diverse workloads, hardware configurations, and network conditions, DYNAMIX achieves up to a 6.3% improvement in final model accuracy and a 46% reduction in total training time. Our scalability experiments demonstrate that DYNAMIX maintains the best performance as cluster size increases to 32 nodes, and our policy transfer experiments show that learned policies generalize effectively across related model architectures.

Distributed machine learning (DML) has emerged as the predominant paradigm for training increasingly complex models on expansive datasets. As model architectures grow in parameter count and computational demand, practitioners increasingly rely on distributed training across multiple computational nodes to keep training timelines feasible. Within this paradigm, batch size is a critical hyperparameter that significantly influences both training efficiency and model convergence. While larger batch sizes generally improve hardware utilization through increased parallelism, they may adversely affect statistical efficiency, potentially degrading convergence rates and generalization performance [19], [32].

The optimization problem becomes substantially harder in heterogeneous distributed environments, which are characterized by variance in computational capabilities, network characteristics, and hardware specifications across training nodes. Such heterogeneous configurations arise from several practical considerations: cost optimization through spot instance utilization [12], consolidation of diverse hardware generations within organizational clusters [13], and workload deployment in multi-tenant infrastructure [15]. Under these conditions, the conventional approach of uniform batch size allocation frequently leads to suboptimal resource utilization, as demonstrated by Jia et al. [16], who observed significant throughput degradation due to synchronization barriers in heterogeneous clusters.

Existing approaches to batch size optimization in distributed environments fall into several distinct categories, each with particular limitations.
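The abstract does not spell out DYNAMIX's state features, action set, or reward, so the following is only a minimal sketch of how a PPO-based batch-size controller of the kind described could be structured. The feature names in build_state, the discrete BATCH_SIZES action set, and all hyperparameters are illustrative assumptions, not the authors' actual design.

```python
# Sketch of a PPO-style batch-size controller (illustrative, not DYNAMIX's code).
import torch
import torch.nn as nn

# Assumed multi-dimensional state: network-level, system-level, and
# statistical-efficiency signals, normalized to comparable scales.
def build_state(net_bw_gbps, gpu_util, mem_util, grad_noise_scale, loss_delta):
    return torch.tensor(
        [net_bw_gbps / 100.0, gpu_util, mem_util, grad_noise_scale, loss_delta],
        dtype=torch.float32,
    )

BATCH_SIZES = [32, 64, 128, 256, 512]  # assumed discrete action set

class PolicyValueNet(nn.Module):
    """Shared backbone with a policy head (batch-size logits) and a value head."""
    def __init__(self, state_dim=5, n_actions=len(BATCH_SIZES)):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                      nn.Linear(64, 64), nn.Tanh())
        self.pi = nn.Linear(64, n_actions)  # action logits
        self.v = nn.Linear(64, 1)           # state-value estimate

    def forward(self, s):
        h = self.backbone(s)
        return self.pi(h), self.v(h)

def ppo_loss(model, states, actions, old_logp, advantages, returns,
             clip_eps=0.2, vf_coef=0.5, ent_coef=0.01):
    """Standard clipped PPO surrogate: policy term + value term - entropy bonus."""
    logits, values = model(states)
    dist = torch.distributions.Categorical(logits=logits)
    logp = dist.log_prob(actions)
    ratio = torch.exp(logp - old_logp)  # importance ratio vs. the old policy
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()
    value_loss = (values.squeeze(-1) - returns).pow(2).mean()
    return policy_loss + vf_coef * value_loss - ent_coef * dist.entropy().mean()

# At each adaptation interval, the controller observes the state and samples
# the next batch size; the reward (not shown) would plausibly combine
# throughput with a statistical-efficiency signal.
model = PolicyValueNet()
with torch.no_grad():
    state = build_state(25.0, 0.82, 0.64, 0.3, -0.02)
    logits, _ = model(state)
    action = torch.distributions.Categorical(logits=logits).sample()
next_batch_size = BATCH_SIZES[action.item()]
```

In an actual deployment, the sampled batch size would be fed back into each node's data loader at the next adaptation interval, and collected (state, action, reward) trajectories would periodically update the policy via ppo_loss.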