ARIA: On the interaction between Architectures, Aggregation methods and Initializations in federated visual classification
Vasilis Siomos, Sergio Naval-Marimont, Jonathan Passerat-Palmbach, Giacomo Tarroni
–arXiv.org Artificial Intelligence
Federated Learning (FL) is a collaborative training paradigm that allows for privacy-preserving learning of cross-institutional models by eliminating the exchange of sensitive data and instead relying on the exchange of model parameters between the clients and a server. Despite individual studies on how client models are aggregated and, more recently, on the benefits of ImageNet pre-training, there is a lack of understanding of the effect the architecture chosen for the federation has, and of how the aforementioned elements interconnect.

It is important to note that ImageNet (IN) pre-training restricts the input to 224x224 RGB images. When up-sampling of the original images is required to achieve that, it leads to a larger than necessary computational and memory load, and to the introduction of aliasing artifacts (e.g. Figure 1). When down-sampling is required instead, it can degrade performance. Hence, IN pre-training is not a silver bullet, and benchmarking architectures and aggregation strategies without pre-training is also important. Furthermore, task-relevant pre-training through self-supervised learning (SSL) has recently emerged as a highly-effective alternative to IN pre-training [9], but …
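The exchange and aggregation of model parameters described above is commonly instantiated as FedAvg-style weighted averaging, where each client's parameters are weighted by its local dataset size. The sketch below is a minimal, generic illustration of that idea (the function name `fedavg` and the dict-of-arrays parameter format are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Aggregate client models by dataset-size-weighted averaging (FedAvg-style).

    client_params: list of dicts mapping parameter name -> np.ndarray
    client_sizes:  number of local training samples per client
    """
    total = sum(client_sizes)
    aggregated = {}
    for name in client_params[0]:
        # Each client contributes proportionally to its share of the data.
        aggregated[name] = sum(
            (n / total) * params[name]
            for params, n in zip(client_params, client_sizes)
        )
    return aggregated

# Two hypothetical clients holding 1 and 3 local samples respectively.
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
global_params = fedavg(clients, [1, 3])  # client weights: 0.25 and 0.75
```

The server would broadcast `global_params` back to the clients for the next round; no raw data ever leaves a client.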
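The aliasing artifacts mentioned above arise whenever resampling is done without low-pass filtering. A minimal 1-D numpy sketch (purely illustrative, not the paper's pipeline) shows how naive decimation destroys a high-frequency pattern while a simple box filter preserves its mean intensity:

```python
import numpy as np

# High-frequency 1-D "image" row: alternating dark/bright pixels.
signal = np.tile([0.0, 1.0], 8)  # length 16

# Naive down-sampling: keep every 2nd sample. The alternating pattern
# collapses to all zeros -- a classic aliasing failure.
strided = signal[::2]

# Anti-aliased down-sampling: average each pair before decimating
# (a box filter), which preserves the mean intensity of 0.5.
box = signal.reshape(-1, 2).mean(axis=1)
```

Image libraries apply analogous (higher-quality) filters when anti-aliasing is enabled during resizing, which is why the choice of resampling method matters for both the up- and down-sampling cases discussed here.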
Nov-24-2023