A²CiD²: Accelerating Asynchronous Communication in Decentralized Deep Learning
Neural Information Processing Systems
Distributed training of deep learning models has been critical to many recent successes in the field. Current standard methods rely primarily on synchronous, centralized algorithms, which induce major communication bottlenecks and synchronization locks at scale.