Rethinking Dynamic Networks and Heterogeneous Computing with Automatic Parallelization
Wu, Ruilong, Li, Xinjiao, Wang, Yisu, Chen, Xinyu, Kutscher, Dirk
arXiv.org Artificial Intelligence
Hybrid parallelism techniques are essential for the efficient training of large language models (LLMs). However, current automatic parallel planning frameworks rarely consider node heterogeneity and dynamic network topology changes together, which limits their effectiveness in practice. In this paper, we address these limitations by modeling heterogeneous nodes in dynamically changing network environments and using simulation-based strategies to determine optimal parallel configurations. Our approach enables fine-grained workload allocation tailored to heterogeneous nodes and complex network scenarios, and it achieves performance competitive with state-of-the-art methods under regular, stable network conditions. We also introduce a strategy-pruning technique that rapidly discards infeasible parallel configurations, substantially shrinking the search space, and we further accelerate the search through parallel execution within the simulator. Preliminary evaluations confirm that our method notably improves training performance on heterogeneous nodes and adapts better to complex, dynamic scenarios such as cloud computing environments.
Jun-4-2025
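The search flow the abstract describes (enumerate candidate parallel configurations, prune infeasible ones before simulation, then pick the cheapest survivor) can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, the memory-sharding rule in `is_feasible`, and the toy cost in `simulated_step_time` are all assumptions standing in for the paper's simulator and constraints.

```python
from itertools import product

def candidate_plans(world_size, max_degree=8):
    """Enumerate (dp, tp, pp) parallel degrees that cover all devices."""
    for dp, tp, pp in product(range(1, max_degree + 1), repeat=3):
        if dp * tp * pp == world_size:
            yield dp, tp, pp

def is_feasible(plan, model_mem_gb, device_mem_gb):
    """Pruning rule (illustrative): model state sharded across the
    tp * pp devices of one replica must fit in a single device's memory."""
    _, tp, pp = plan
    return model_mem_gb / (tp * pp) <= device_mem_gb

def simulated_step_time(plan):
    """Stand-in for the simulator: a toy cost that rewards data
    parallelism and charges communication overhead for tp and pp."""
    dp, tp, pp = plan
    return 1.0 / dp + 0.1 * tp + 0.2 * pp

def best_plan(world_size, model_mem_gb, device_mem_gb):
    # Prune first, so only surviving candidates reach the (expensive)
    # simulation step; this is what shrinks the search space.
    feasible = [p for p in candidate_plans(world_size)
                if is_feasible(p, model_mem_gb, device_mem_gb)]
    return min(feasible, key=simulated_step_time)
```

For example, `best_plan(8, 64, 16)` discards every layout whose replica shards the 64 GB model over fewer than four 16 GB devices, so only the dp ≤ 2 configurations are ever simulated. In the real system, each surviving candidate could be scored by an independent simulator run, which is where the parallel execution mentioned in the abstract applies.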