Aggregating Capacity in FL through Successive Layer Training for Computationally-Constrained Devices
Neural Information Processing Systems
Federated learning (FL) is usually performed on resource-constrained edge devices, e.g., devices with limited memory available for computation. If the memory required to train a model exceeds this limit, the device is excluded from training. This can lower accuracy, as valuable data and computation resources are left out of training, and can also introduce bias and unfairness. The FL training process should therefore be adapted to such constraints. State-of-the-art techniques address this by training only subsets of the FL model on constrained devices, reducing their resource requirements for training.
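The core idea of reducing a device's training memory by updating only a subset of the model's layers, and moving through the layers successively, can be sketched in NumPy. This is a hypothetical toy illustration of the general principle, not the paper's actual algorithm: only the active layer needs gradient (and optimizer) state, while frozen layers merely pass activations and gradients through.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer linear network. A constrained device trains one layer at a
# time while the others stay frozen, so gradient memory is needed only
# for the single active layer.
layers = [rng.standard_normal((4, 4)) * 0.1 for _ in range(3)]
x = rng.standard_normal((8, 4))   # toy input batch
y = rng.standard_normal((8, 4))   # toy regression targets

def forward(x):
    acts = [x]
    for W in layers:
        acts.append(acts[-1] @ W)
    return acts

def train_layer(k, lr=0.1, steps=100):
    """Hypothetical helper: update only layers[k]; all others are frozen."""
    for _ in range(steps):
        acts = forward(x)
        err = acts[-1] - y            # gradient of squared error w.r.t. output
        # Backpropagate through the frozen layers above layer k.
        grad = err
        for j in range(len(layers) - 1, k, -1):
            grad = grad @ layers[j].T
        # Gradient w.r.t. the single trainable layer's weights.
        gW = acts[k].T @ grad / len(x)
        layers[k] -= lr * gW

loss_before = np.mean((forward(x)[-1] - y) ** 2)
for k in range(len(layers)):          # train layer 0, then 1, then 2
    train_layer(k)
loss_after = np.mean((forward(x)[-1] - y) ** 2)
print(loss_after < loss_before)       # loss should decrease
```

In a real FL setting, a constrained client would report which layers it can afford to train, and the server would aggregate the per-layer updates across clients; the sketch above only shows the memory-saving intuition on a single device.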