

Using Random Noise Equivariantly to Boost Graph Neural Networks Universally

Wang, Xiyuan, Zhang, Muhan

arXiv.org Artificial Intelligence

Recent advances in Graph Neural Networks (GNNs) have explored the potential of random noise as an input feature to enhance expressivity across diverse tasks. However, naively incorporating noise can degrade performance, while architectures tailored to exploit noise for specific tasks excel yet lack broad applicability. This paper tackles these issues by laying down a theoretical framework that elucidates the increased sample complexity when introducing random noise into GNNs without careful design. We further propose Equivariant Noise GNN (ENGNN), a novel architecture that harnesses the symmetrical properties of noise to mitigate sample complexity and bolster generalization. Our experiments demonstrate that using noise equivariantly significantly enhances performance on node-, link-, subgraph-, and graph-level tasks and achieves performance comparable to models designed for specific tasks, thereby offering a general method to boost expressivity across various graph tasks.
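The "naive" baseline the abstract contrasts against can be sketched as simply concatenating i.i.d. noise channels to the node features before message passing. The helper below is a minimal illustration of that idea only; the function name and the choice of Gaussian noise are assumptions, and the paper's ENGNN additionally processes the noise equivariantly, which this toy sketch does not attempt.

```python
import numpy as np

def add_noise_features(x, num_noise_channels=4, rng=None):
    """Concatenate i.i.d. Gaussian noise channels to node features.

    x: (num_nodes, d) node feature matrix.
    Returns a (num_nodes, d + num_noise_channels) matrix. The noise gives
    otherwise-identical nodes distinct inputs, which can raise expressivity,
    but (per the paper's argument) also raises sample complexity unless the
    architecture is designed to handle the noise symmetrically.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((x.shape[0], num_noise_channels))
    return np.concatenate([x, noise], axis=1)
```

A GNN would then run on the augmented features; two structurally identical nodes now receive different embeddings, breaking the symmetry that limits standard message passing.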


ENGNN: A General Edge-Update Empowered GNN Architecture for Radio Resource Management in Wireless Networks

Wang, Yunqi, Li, Yang, Shi, Qingjiang, Wu, Yik-Chung

arXiv.org Artificial Intelligence

To achieve high data rates and ubiquitous connectivity in future wireless networks, a key task is to efficiently manage radio resources through judicious beamforming and power allocation. Unfortunately, the iterative nature of commonly applied optimization-based algorithms cannot meet low-latency requirements due to their high computational complexity. For real-time implementation, deep learning-based approaches, especially graph neural networks (GNNs), have demonstrated good scalability and generalization performance thanks to their permutation equivariance (PE) property. However, current architectures are equipped only with a node-update mechanism, which precludes application to a more general setup where the unknown variables are also defined on graph edges. To fill this gap, we propose an edge-update mechanism that enables GNNs to handle both node and edge variables, and we prove its PE property with respect to both transmitters and receivers. Simulation results on typical radio resource management problems demonstrate that the proposed method achieves a higher sum rate with much shorter computation time than state-of-the-art methods, and generalizes well across different numbers of base stations and users, noise variances, interference levels, and transmit power budgets.

Yunqi Wang is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, and also with Shenzhen Research Institute of Big Data, Shenzhen 518172, China (e-mail: yunqi9@connect.hku.hk). Yang Li is with Shenzhen Research Institute of Big Data, Shenzhen 518172, China (e-mail: liyang@sribd.cn).