Collaborative Learning in the Jungle (Decentralized, Byzantine, Heterogeneous, Asynchronous and Nonconvex Learning)
Neural Information Processing Systems
We study Byzantine collaborative learning, where n nodes seek to collectively learn from each other's local data. The data distribution may vary from one node to another. No node is trusted, and f < n nodes can behave arbitrarily. We prove that collaborative learning is equivalent to a new form of agreement, which we call averaging agreement. In this problem, each node starts with an initial vector, and the nodes seek to approximately agree on a common vector that is close to the average of the honest nodes' initial vectors.
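To make the averaging-agreement objective concrete, the following is a minimal, self-contained sketch of one standard robust aggregation rule (coordinate-wise trimmed mean) applied to a toy setting with f Byzantine inputs. The rule, the parameters, and the toy data are illustrative assumptions and are not the paper's protocol; they only show what "a common vector close to the average of honest nodes' initial vectors" means despite arbitrary inputs.

```python
import numpy as np

def coordinatewise_trimmed_mean(vectors, f):
    """Illustrative robust aggregation (assumed for this sketch, not the
    paper's method): per coordinate, drop the f largest and f smallest
    values, then average the remaining ones."""
    stacked = np.sort(np.stack(vectors), axis=0)  # shape (n, d), sorted per coordinate
    trimmed = stacked[f : len(vectors) - f]       # discard f extremes on each side
    return trimmed.mean(axis=0)

# Toy setting (hypothetical numbers): n nodes, f of them Byzantine, d-dimensional vectors.
rng = np.random.default_rng(0)
n, f, d = 10, 2, 3
honest = [rng.normal(size=d) for _ in range(n - f)]
byzantine = [1e6 * np.ones(d) for _ in range(f)]  # arbitrary (adversarial) inputs

estimate = coordinatewise_trimmed_mean(honest + byzantine, f)
print("honest average :", np.mean(honest, axis=0))
print("robust estimate:", estimate)
```

Running the sketch, the robust estimate stays near the honest average even though the Byzantine inputs are arbitrarily large, whereas a plain mean over all n vectors would be dragged far away.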