Collaborating Authors

Chaaban, Anas


Federated Testing (FedTest): A New Scheme to Enhance Convergence and Mitigate Adversarial Attacks in Federated Learning

arXiv.org Artificial Intelligence

Federated Learning (FL) has emerged as a significant paradigm for training machine learning models, owing to its data-privacy-preserving property and its efficient exploitation of distributed computational resources, achieved by conducting the training process in parallel at distributed users. However, traditional FL strategies grapple with difficulties in evaluating the quality of received models, handling unbalanced models, and reducing the impact of detrimental models. To resolve these problems, we introduce a novel federated learning framework, which we call federated testing for federated learning (FedTest). In FedTest, the local data of a specific user is used to train the model of that user and to test the models of the other users. This approach enables users to test each other's models and to determine an accurate score for each, which can then be used to aggregate the models efficiently and to identify any malicious ones. Our numerical results reveal that the proposed method not only accelerates convergence but also diminishes the potential influence of malicious users, significantly enhancing the overall efficiency and robustness of FL systems.
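The peer-testing idea in the abstract can be sketched in a few lines: each user scores every other user's model on its own local data, and the server aggregates models weighted by those scores, dropping models whose score falls below a cutoff. This is a minimal NumPy sketch under assumed details; the scoring metric, the `threshold` rule, and the weighting are illustrative placeholders, not the paper's exact scheme.

```python
import numpy as np

def peer_scores(models, local_test_fns):
    """Each user i evaluates every other user's model on its own local
    data; a model's score is its mean accuracy over the other users'
    test sets. `local_test_fns[i]` is user i's local evaluation
    function (hypothetical interface)."""
    n = len(models)
    acc = np.zeros((n, n))
    for i, test in enumerate(local_test_fns):   # user i acts as tester
        for j, model in enumerate(models):       # user j owns the model
            if i != j:
                acc[i, j] = test(model)
    return acc.sum(axis=0) / (n - 1)

def fedtest_aggregate(models, scores, threshold=0.5):
    """Score-weighted aggregation: models scoring below `threshold`
    are treated as malicious and excluded (illustrative rule)."""
    scores = np.asarray(scores, dtype=float)
    keep = scores >= threshold
    if not keep.any():
        raise ValueError("all models flagged as malicious")
    w = scores * keep
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

# Toy usage: models are scalar parameters; the third user is malicious.
models = [np.array([1.0]), np.array([1.0]), np.array([10.0])]
tests = [lambda m: 1.0 / (1.0 + abs(float(m[0]) - 1.0))] * 3
scores = peer_scores(models, tests)       # honest models score 1.0
agg = fedtest_aggregate(models, scores)   # malicious model excluded
```

In this toy run the malicious model receives a low peer score and is excluded, so the aggregate equals the honest users' parameter.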


Coding for the Gaussian Channel in the Finite Blocklength Regime Using a CNN-Autoencoder

arXiv.org Artificial Intelligence

The development of delay-sensitive applications that require ultra-high reliability has created an additional challenge for wireless networks, leading to Ultra-Reliable Low-Latency Communications (URLLC) as a use case that 5G and beyond-5G systems must support. However, supporting low-latency communications requires the use of short codes, while attaining vanishing frame error probability (FEP) requires long codes. Thus, developing codes for the finite blocklength regime (FBR) that achieve given reliability requirements is necessary. This paper investigates the potential of convolutional neural network autoencoders (CNN-AEs) in approaching the theoretical maximum achievable rate over a Gaussian channel for a range of signal-to-noise ratios at a fixed blocklength and target FEP, a different perspective from existing works that explore CNNs from bit-error-rate and symbol-error-rate perspectives. We explain the studied CNN-AE architecture, evaluate it numerically, and compare it to the theoretical maximum achievable rate and to the achievable rates of polar-coded quadrature amplitude modulation (QAM), Reed-Muller-coded QAM, multilevel polar-coded modulation, and a TurboAE-MOD scheme from the literature. Numerical results show that the CNN-AE outperforms these benchmark schemes and approaches the theoretical maximum rate, demonstrating the capability of CNN-AEs to learn good codes for delay-constrained applications.
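The autoencoder view of channel coding described above can be illustrated with a minimal NumPy sketch: an encoder maps each of M messages to a length-n codeword with unit average power, the codeword passes through an AWGN channel, and a decoder picks the nearest codeword. This is only a stand-in for the trained model; the random-codebook encoder and the `ToyAutoencoderCode` class are illustrative assumptions, whereas the paper's actual scheme learns the encoder and decoder with convolutional layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn(x, snr_db):
    """Add white Gaussian noise at the given SNR (unit signal power)."""
    sigma = 10 ** (-snr_db / 20.0)
    return x + sigma * rng.standard_normal(x.shape)

class ToyAutoencoderCode:
    """Autoencoder-as-channel-code: M messages -> length-n real
    codewords with unit average power (encoder + normalization layer),
    AWGN channel, minimum-distance decoding (ML for AWGN). Random
    Gaussian codewords stand in for a learned CNN encoder."""
    def __init__(self, M, n):
        C = rng.standard_normal((M, n))
        # power-normalization layer: unit average power per symbol
        self.C = C / np.sqrt(np.mean(C ** 2))

    def encode(self, msgs):
        return self.C[msgs]

    def decode(self, y):
        # nearest-codeword decision over all M codewords
        d = np.sum((self.C[None, :, :] - y[:, None, :]) ** 2, axis=2)
        return np.argmin(d, axis=1)

def frame_error_prob(code, snr_db, n_frames=2000):
    """Monte Carlo estimate of the frame error probability."""
    msgs = rng.integers(0, code.C.shape[0], n_frames)
    y = awgn(code.encode(msgs), snr_db)
    return np.mean(code.decode(y) != msgs)

# Rate = log2(M)/n bits per channel use; here 4/8 = 0.5.
code = ToyAutoencoderCode(M=16, n=8)
fep_high_snr = frame_error_prob(code, snr_db=20.0)
```

Sweeping the SNR and growing M at fixed n traces out the rate/FEP trade-off in the finite blocklength regime that the paper benchmarks against polar- and Reed-Muller-coded QAM.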