Collaborating Authors

Buonanno, Amedeo


A General Framework for Scalable UE-AP Association in User-Centric Cell-Free Massive MIMO based on Recurrent Neural Networks

arXiv.org Machine Learning

This study addresses the challenge of access point (AP) and user equipment (UE) association in cell-free massive MIMO networks. It introduces a deep learning algorithm leveraging Bidirectional Long Short-Term Memory cells and a hybrid probabilistic methodology for weight updating. This approach enhances scalability by adapting to variations in the number of UEs without requiring retraining. Additionally, the study presents a training methodology that improves scalability not only with respect to the number of UEs but also to the number of APs. Furthermore, a variant of the proposed AP-UE association algorithm ensures robustness against pilot contamination effects, a critical issue arising from pilot reuse in channel estimation. Extensive numerical results validate the effectiveness and adaptability of the proposed methods, demonstrating their superiority over widely used heuristic alternatives.
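A minimal sketch of the kind of recurrent architecture the abstract describes, under assumptions of my own: the per-UE input is a vector of large-scale fading coefficients toward all APs, the UE dimension is treated as the sequence dimension of a bidirectional LSTM, and names such as `ApUeAssociationNet` and all hyperparameters are illustrative rather than taken from the paper.

```python
# Illustrative sketch (not the authors' code): a bidirectional LSTM that maps a
# variable-length sequence of per-UE features (assumed here to be large-scale
# fading coefficients toward each AP) to per-AP association scores. The
# sequence dimension runs over UEs, so the same trained network can handle a
# different number of UEs without retraining.
import torch
import torch.nn as nn

class ApUeAssociationNet(nn.Module):
    def __init__(self, num_aps: int, hidden_size: int = 128):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=num_aps, hidden_size=hidden_size,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, num_aps)

    def forward(self, beta: torch.Tensor) -> torch.Tensor:
        # beta: (batch, num_ues, num_aps) large-scale fading coefficients
        h, _ = self.bilstm(beta)        # (batch, num_ues, 2 * hidden_size)
        logits = self.head(h)           # (batch, num_ues, num_aps)
        return torch.sigmoid(logits)    # soft AP-UE association scores

# Usage: the number of UEs (second dimension) can change between calls.
net = ApUeAssociationNet(num_aps=32)
scores_small = net(torch.rand(1, 10, 32))   # 10 UEs
scores_large = net(torch.rand(1, 40, 32))   # 40 UEs, same weights, no retraining
```

Because the recurrence runs over UEs, the same trained weights accept inputs with a different number of UEs, which is the scalability property the abstract highlights.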


A Deep Learning Approach for User-Centric Clustering in Cell-Free Massive MIMO Systems

arXiv.org Artificial Intelligence

Contrary to conventional massive MIMO cellular configurations plagued by inter-cell interference, cell-free massive MIMO systems distribute network resources across the coverage area, enabling users to connect with multiple access points (APs) and boosting both system capacity and fairness across users. In such systems, one critical functionality is the association between APs and users: determining the optimal association is indeed a combinatorial problem of prohibitive complexity. In this paper, a solution based on deep learning is thus proposed to solve the user clustering problem aimed at maximizing the sum spectral efficiency while controlling the number of active connections. The proposed solution can scale effectively with the number of users, leveraging long short-term memory cells to operate without the need for retraining. Numerical results show the effectiveness of the proposed solution, even in the presence of imperfect channel state information due to pilot contamination.
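The abstract couples sum spectral efficiency maximization with control of the number of active connections; the hedged sketch below illustrates only the second ingredient, turning soft association scores into a user-centric cluster with at most `k_max` serving APs per user. The scoring model, the threshold, and the function name are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch: build a user-centric cluster from soft association
# scores by keeping only the k_max highest-scoring APs per UE, which bounds
# the number of active connections.
import torch

def build_clusters(scores: torch.Tensor, k_max: int, threshold: float = 0.5) -> torch.Tensor:
    # scores: (num_ues, num_aps) soft association scores in [0, 1]
    topk_vals, topk_idx = scores.topk(k_max, dim=1)          # best k_max APs per UE
    association = torch.zeros_like(scores)
    association.scatter_(1, topk_idx, (topk_vals > threshold).float())
    return association.bool()                                 # (num_ues, num_aps) mask

# Example: 5 UEs, 8 APs, at most 3 serving APs per UE.
scores = torch.rand(5, 8)
mask = build_clusters(scores, k_max=3)
print(mask.sum(dim=1))    # active connections per UE, never more than 3
```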


A Unified View of Algorithms for Path Planning Using Probabilistic Inference on Factor Graphs

arXiv.org Machine Learning

Although path planning can be solved using standard techniques from dynamic programming and control, the problem can also be approached using probabilistic inference. The algorithms that emerge from the latter framework bear some appealing characteristics that qualify the probabilistic approach as a powerful alternative to the more traditional control formulations. The idea of using estimation on stochastic models to solve control problems is not new, and the inference approach considered here falls under the rubric of Active Inference (AI) and Control as Inference (CAI). In this work, we look at the specific recursions that arise from various cost functions that, although they may appear similar in scope, bear noticeable differences, at least when applied to typical path planning problems. We start by posing the path planning problem on a probabilistic factor graph, and show how the various algorithms translate into specific message composition rules. We then show how this unified approach, presented both in probability space and in log space, provides a very general framework that includes the Sum-product, the Max-product, Dynamic programming, and mixed Reward/Entropy criteria-based algorithms. The framework also expands algorithmic design options for smoother or sharper policy distributions, including a generalized Sum/Max-product algorithm, a Smooth Dynamic programming algorithm, and modified versions of the Reward/Entropy recursions. We provide a comprehensive table of recursions and a comparison through simulations, first on a small synthetic grid with a single goal and obstacles, and then on a grid extrapolated from a real-world scene with multiple goals and a semantic map.
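As an illustration of how different message composition rules lead to different recursions, here is a small hedged sketch contrasting a Sum-product-style and a Max-product/Dynamic-programming-style backward pass on a toy state space. The specific update, the 1-D corridor, the reward weights, and the normalization are simplifying assumptions of this sketch, not the paper's exact formulation.

```python
# Illustrative backward recursion on a tiny grid factor graph. With mode="sum"
# each step marginalizes over next states and actions (Sum-product flavor);
# with mode="max" it keeps the best case (Max-product / dynamic programming).
import numpy as np

def backward_messages(transition, reward, horizon, mode="sum"):
    # transition: (S, A, S) array p(x'|x,a); reward: (S,) goal/obstacle weights
    S, A, _ = transition.shape
    b = np.ones(S)                        # message arriving from beyond the horizon
    for _ in range(horizon):
        q = transition @ (b * reward)     # (S, A): backward value per state-action
        b = q.sum(axis=1) if mode == "sum" else q.max(axis=1)
        b /= b.max()                      # normalize to avoid underflow
    return b

# Toy 1-D corridor with 5 cells, goal in the last cell, actions {stay, right}.
S, A = 5, 2
T = np.zeros((S, A, S))
for s in range(S):
    T[s, 0, s] = 1.0                      # stay
    T[s, 1, min(s + 1, S - 1)] = 1.0      # move right
reward = np.array([0.1, 0.1, 0.1, 0.1, 1.0])
print(backward_messages(T, reward, horizon=4, mode="sum"))
print(backward_messages(T, reward, horizon=4, mode="max"))
```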


Path Planning Using Probability Tensor Flows

arXiv.org Artificial Intelligence

Probability models have been proposed in the literature to account for "intelligent" behavior in many contexts. In this paper, probability propagation is applied to model an agent's motion in potentially complex scenarios that include goals and obstacles. The backward flow provides valuable contextual information for the agent's behavior: inferences coming from the future determine the agent's actions. Probability tensors are layered in time in both directions, in a manner similar to convolutional neural networks. The discussion is carried out with reference to a set of simulated grids where, despite the apparent task complexity, a solution, if feasible, is always found. The original model proposed by Attias has been extended to include non-absorbing obstacles, multiple goals and multiple agents. The emerging behaviors are very realistic and demonstrate the great potential of applying this framework to real environments.
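A hedged sketch of the backward-flow idea on a 2-D grid: goal evidence is propagated backward through a small neighborhood kernel, one layer per time step, with obstacles attenuating rather than absorbing the flow. The kernel, the attenuation weight, the wrap-around boundary, and the function name are illustrative assumptions, not the paper's model.

```python
# Illustrative backward probability flow on a 2-D grid, layered in time like a
# convolutional stack. Obstacles are "non-absorbing": they strongly attenuate
# the probability mass that flows through them instead of blocking it.
import numpy as np

def backward_flow(goal_mask, obstacle_mask, steps, obstacle_weight=1e-3):
    # goal_mask, obstacle_mask: (H, W) boolean grids
    prior = np.where(obstacle_mask, obstacle_weight, 1.0)        # state evidence
    b = np.where(goal_mask, 1.0, 1e-12)                          # message at the horizon
    kernel_offsets = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # stay + 4 moves
    for _ in range(steps):
        # shift the message along each allowed move (edges wrap, for simplicity)
        shifted = [np.roll(np.roll(b, dy, axis=0), dx, axis=1)
                   for dy, dx in kernel_offsets]
        b = prior * np.mean(shifted, axis=0)                     # one backward layer
        b /= b.max()
    return b   # higher values = cells from which the goal is more reachable

goal = np.zeros((8, 8), dtype=bool); goal[7, 7] = True
obst = np.zeros((8, 8), dtype=bool); obst[3, 1:7] = True
flow = backward_flow(goal, obst, steps=12)
```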


Optimized Realization of Bayesian Networks in Reduced Normal Form using Latent Variable Model

arXiv.org Machine Learning

Bayesian networks in their Factor Graph Reduced Normal Form (FGrn) are a powerful paradigm for implementing inference graphs. Unfortunately, the computational and memory costs of these networks may be considerable, even for relatively small networks, and this is one of the main reasons why these structures have often been underused in practice. In this work, through a detailed algorithmic and structural analysis, various solutions for cost reduction are proposed. An online version of the classic batch learning algorithm is also analyzed, showing very similar results (in an unsupervised context); this is essential when multilevel structures are to be built. The proposed solutions, together with the online learning algorithm, are included in a C++ library that is quite efficient, especially when compared to the direct use of the well-known sum-product and Maximum Likelihood (ML) algorithms. The results are discussed with particular reference to a Latent Variable Model (LVM) structure.
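To make the LVM reference concrete, the sketch below shows one EM-style sweep over a single discrete hidden variable with several visible children: sum-product backward messages give the posterior, and normalized expected counts give an ML-style update of the conditional probability matrices. This is an illustrative reconstruction under simplifying assumptions, not the API of the C++ library mentioned above.

```python
# Illustrative LVM sweep: one hidden discrete variable S with several visible
# variables Y_i, each attached through a conditional probability matrix P_i.
import numpy as np

rng = np.random.default_rng(0)

def lvm_em_step(P, prior, data):
    # P: list of (S, Y_i) conditional matrices; data: list of observed index tuples
    S = prior.shape[0]
    counts = [np.zeros_like(Pi) for Pi in P]
    prior_counts = np.zeros(S)
    for obs in data:
        # E-step: combine backward messages from every visible branch
        posterior = prior.copy()
        for Pi, y in zip(P, obs):
            posterior *= Pi[:, y]              # backward message through branch i
        posterior /= posterior.sum()
        # M-step accumulation (expected sufficient statistics)
        prior_counts += posterior
        for Ci, y in zip(counts, obs):
            Ci[:, y] += posterior
    new_P = [Ci / Ci.sum(axis=1, keepdims=True) for Ci in counts]
    return new_P, prior_counts / prior_counts.sum()

# Toy run: hidden variable with 3 states, two visible variables with 4 outcomes.
P = [rng.dirichlet(np.ones(4), size=3) for _ in range(2)]
prior = np.ones(3) / 3
data = [(rng.integers(4), rng.integers(4)) for _ in range(50)]
P, prior = lvm_em_step(P, prior, data)
```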


Discrete Independent Component Analysis (DICA) with Belief Propagation

arXiv.org Machine Learning

We apply belief propagation to a Bayesian bipartite graph composed of discrete independent hidden variables and discrete visible variables. The network is the discrete counterpart of Independent Component Analysis (DICA) and is manipulated in factor graph form for inference and learning. A full set of simulations is reported for character images from the MNIST dataset. The results show that the factorial code implemented by the sources contributes to building a good generative model for the data, one that can be used in various inference modes.
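A hedged sketch of one belief-propagation round in such a bipartite graph, with independent binary hidden sources and a single observed visible variable; the factor is stored here as a full joint table for clarity, which is an assumption made for brevity rather than the paper's parameterization.

```python
# Illustrative belief-propagation round: independent binary sources s_1..s_K
# feed a visible variable y through a factor p(y | s_1..s_K).
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
K = 3                                    # number of binary hidden sources
num_y = 5                                # cardinality of the visible variable
factor = rng.dirichlet(np.ones(num_y), size=2 ** K)  # p(y | s_1..s_K), one row per source config
priors = [np.array([0.6, 0.4]) for _ in range(K)]    # independent source priors
y_obs = 2                                # observed value of the visible variable

# Message from the factor to each hidden source: marginalize the factor over
# all other sources, weighting each configuration by the incoming priors.
posteriors = []
for k in range(K):
    msg = np.zeros(2)
    for idx, config in enumerate(product([0, 1], repeat=K)):
        w = factor[idx, y_obs]
        for j, s in enumerate(config):
            if j != k:
                w *= priors[j][s]
        msg[config[k]] += w
    post = priors[k] * msg
    posteriors.append(post / post.sum())

print(posteriors)   # per-source posterior beliefs given the observation
```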