wireless network


China forms new plan to seize world technology crown from U.S.

The Japan Times

Beijing is accelerating its bid for global leadership in key technologies, planning to pump more than a trillion dollars into the economy through the rollout of everything from wireless networks to artificial intelligence (AI). In the master plan backed by President Xi Jinping himself, China will invest an estimated $1.4 trillion over six years to 2025, calling on urban governments and private tech giants like Huawei Technologies Co. to deploy fifth-generation wireless networks, install cameras and sensors, and develop AI software that will underpin technologies from autonomous driving to automated factories and mass surveillance. The new infrastructure initiative is expected to mainly benefit local giants, from Alibaba and Huawei to SenseTime Group Ltd., at the expense of U.S. companies. As tech-nationalism mounts, the investment drive will reduce China's dependence on foreign technology, echoing objectives set forth previously in the Made in China 2025 program. Such initiatives have already drawn fierce criticism from the Trump administration, resulting in moves to block the rise of Chinese technology companies such as Huawei.


Deep Learning for Radio Resource Allocation with Diverse Quality-of-Service Requirements in 5G

arXiv.org Machine Learning

To accommodate diverse Quality-of-Service (QoS) requirements in 5th generation (5G) cellular networks, base stations need real-time optimization of radio resources in time-varying network conditions. This brings high computing overheads and long processing delays. In this work, we develop a deep learning framework to approximate the optimal resource allocation policy that minimizes the total power consumption of a base station by optimizing bandwidth and transmit power allocation. We find that a fully connected neural network (NN) cannot fully guarantee the QoS requirements due to approximation errors and the quantization errors of the numbers of subcarriers. To tackle this problem, we propose a cascaded structure of NNs, where the first NN approximates the optimal bandwidth allocation, and the second NN outputs the transmit power required to satisfy the QoS requirement with the given bandwidth allocation. Considering that the distribution of wireless channels and the types of services in wireless networks are non-stationary, we apply deep transfer learning to update the NNs in non-stationary wireless networks. Simulation results validate that the cascaded NNs outperform the fully connected NN in terms of QoS guarantee. In addition, deep transfer learning can remarkably reduce the number of training samples required to train the NNs. The 5G cellular networks are expected to support various emerging applications with diverse QoS requirements, such as enhanced mobile broadband services, massive machine-type communication services, and ultra-reliable and low-latency communication (URLLC) services. This paper has been presented in part at the IEEE Global Communications Conference 2019 [1]. To guarantee the QoS requirements of different types of services, existing optimization algorithms for radio resource allocation are designed to maximize spectrum efficiency or energy efficiency by optimizing scarce radio resources, such as time-frequency resource blocks and transmit power, subject to QoS constraints [3-9]. There are two major challenges for implementing existing optimization algorithms in practical 5G networks. First, the QoS constraints of some services, such as delay-sensitive and URLLC services, may not have closed-form expressions. To execute an optimization algorithm, the system needs to evaluate the QoS achieved by a certain policy via extensive simulations or experiments, and thus suffers from long processing delay [9, 10]. Second, even if closed-form expressions of QoS constraints can be obtained in some scenarios, the optimization problems are non-convex in general [8, 10, 11].
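
As a rough illustration of the cascaded structure described above, the sketch below (assuming PyTorch, with hypothetical feature and layer sizes) chains a bandwidth-allocation network into a power-allocation network, with the bandwidth quantized to an integer number of subcarriers before being fed to the second network; it is a sketch of the idea, not the paper's exact model.

```python
import torch
import torch.nn as nn

class CascadedAllocator(nn.Module):
    """Two cascaded NNs: the first maps channel/QoS features to a bandwidth
    allocation; the second maps those features plus the quantized bandwidth
    to the transmit power needed to meet the QoS target."""
    def __init__(self, num_users: int, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.bandwidth_net = nn.Sequential(
            nn.Linear(num_users * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_users), nn.Softmax(dim=-1),  # fractions of total bandwidth
        )
        self.power_net = nn.Sequential(
            nn.Linear(num_users * feat_dim + num_users, hidden), nn.ReLU(),
            nn.Linear(hidden, num_users), nn.Softplus(),        # non-negative transmit power
        )

    def forward(self, features, total_bandwidth, subcarrier_bw):
        flat = features.flatten(start_dim=1)
        frac = self.bandwidth_net(flat)
        # Quantize to an integer number of subcarriers, mirroring the setting above.
        subcarriers = torch.round(frac * total_bandwidth / subcarrier_bw)
        bandwidth = subcarriers * subcarrier_bw
        power = self.power_net(torch.cat([flat, bandwidth], dim=-1))
        return bandwidth, power

# Illustrative usage: 10 users, 4 features each (e.g., channel gain, delay bound, ...).
model = CascadedAllocator(num_users=10, feat_dim=4)
bw, p = model(torch.randn(32, 10, 4), total_bandwidth=20e6, subcarrier_bw=15e3)
```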


Decentralized SGD with Over-the-Air Computation

arXiv.org Machine Learning

We study the performance of decentralized stochastic gradient descent (DSGD) in a wireless network, where the nodes collaboratively optimize an objective function using their local datasets. Unlike the conventional setting, where the nodes communicate over error-free orthogonal communication links, we assume that transmissions are prone to additive noise and interference. We first consider a point-to-point (P2P) transmission strategy, termed the OAC-P2P scheme, in which the node pairs are scheduled in an orthogonal fashion to minimize interference. Since in the DSGD framework each node requires a linear combination of the neighboring models at the consensus step, we then propose the OAC-MAC scheme, which utilizes the signal superposition property of the wireless medium to achieve over-the-air computation (OAC). For both schemes, we cast the scheduling problem as a graph coloring problem. We numerically evaluate the performance of these two schemes for the MNIST image classification task under various network conditions. We show that the OAC-MAC scheme attains better convergence performance with fewer communication rounds.
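
A minimal sketch of one DSGD round of the kind described above, assuming NumPy, a doubly-stochastic mixing matrix, and a simple additive-noise model of the over-the-air reception; the scheduling/graph-coloring step is omitted.

```python
import numpy as np

def dsgd_step(models, grads, W, lr, noise_std=0.0, rng=np.random.default_rng(0)):
    """One decentralized SGD round.

    models : (n_nodes, dim) current local models
    grads  : (n_nodes, dim) local stochastic gradients
    W      : (n_nodes, n_nodes) doubly-stochastic mixing matrix
    With over-the-air computation, node i directly receives the weighted
    superposition sum_j W[i, j] * models[j] over the multiple-access channel,
    corrupted by additive noise, which is exactly the linear combination
    needed at the consensus step.
    """
    superposition = W @ models                        # consensus (over-the-air sum)
    noise = noise_std * rng.standard_normal(models.shape)
    return superposition + noise - lr * grads         # consensus step + local SGD step

# Toy usage: 4 nodes on a ring, 5-dimensional models.
W = np.array([[.5, .25, 0, .25],
              [.25, .5, .25, 0],
              [0, .25, .5, .25],
              [.25, 0, .25, .5]])
models = np.zeros((4, 5))
grads = np.ones((4, 5))           # stand-in for real stochastic gradients
models = dsgd_step(models, grads, W, lr=0.1, noise_std=0.01)
```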


Deep Reinforcement Learning for QoS-Constrained Resource Allocation in Multiservice Networks

arXiv.org Machine Learning

In this article, we study a Radio Resource Allocation (RRA) problem formulated as a non-convex optimization problem whose main aim is to maximize the spectral efficiency subject to satisfaction guarantees in multiservice wireless systems. This problem has already been investigated in the literature and efficient heuristics have been proposed. However, in order to assess the performance of Machine Learning (ML) algorithms when solving optimization problems in the context of RRA, we revisit that problem and propose a solution based on a Reinforcement Learning (RL) framework. Specifically, a distributed optimization method based on multi-agent deep RL is developed, where each agent makes its decisions to find a policy by interacting with the local environment, until reaching convergence. This article thus focuses on an application of RL, and our main proposal consists of a new deep RL-based approach to jointly deal with RRA, satisfaction guarantees, and Quality of Service (QoS) constraints in multiservice cellular networks. Lastly, through computational simulations we compare state-of-the-art solutions from the literature with our proposal and show that the latter achieves near-optimal performance in terms of throughput and outage rate.
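
A hedged sketch of the per-agent decision step in a multi-agent deep RL setup of this kind, assuming PyTorch, an epsilon-greedy Q-network per agent, and a reward that trades spectral efficiency against a QoS-violation penalty; the state features, action space, and reward shaping are illustrative, not the paper's formulation.

```python
import random
import torch
import torch.nn as nn

class RRAAgent:
    """One agent (e.g., per user or per cell): observes its local state and
    picks a resource-block/power action from a discrete action set."""
    def __init__(self, state_dim, n_actions, eps=0.1):
        self.q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                   nn.Linear(64, n_actions))
        self.n_actions, self.eps = n_actions, eps

    def act(self, state):
        if random.random() < self.eps:                 # exploration
            return random.randrange(self.n_actions)
        with torch.no_grad():                          # greedy exploitation
            q = self.q_net(torch.as_tensor(state, dtype=torch.float32))
            return int(q.argmax())

def reward(spectral_eff, qos_satisfied, penalty=1.0):
    # Spectral efficiency as the objective, QoS violation as a penalty term.
    return spectral_eff - (0.0 if qos_satisfied else penalty)

agents = [RRAAgent(state_dim=8, n_actions=16) for _ in range(4)]
actions = [a.act([0.0] * 8) for a in agents]           # one local decision per agent
```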


Learning-Based Link Scheduling in Millimeter-wave Multi-connectivity Scenarios

arXiv.org Machine Learning

Multi-connectivity is emerging as a promising solution to provide reliable communications and seamless connectivity in the millimeter-wave frequency range. Due to the blockage sensitivity at such high frequencies, connectivity with multiple cells can drastically increase the network performance in terms of throughput and reliability. However, inefficient link scheduling, i.e., over- or under-provisioning of connections, can lead either to high interference and energy consumption or to unsatisfied users' quality of service (QoS) requirements. In this work, we present a learning-based solution that learns and then predicts the optimal link scheduling to satisfy users' QoS requirements while avoiding communication interruptions. Moreover, we compare the proposed approach with two baseline methods and with a genie-aided link scheduling that assumes perfect channel knowledge. We show that the learning-based solution approaches the optimum and outperforms the baseline methods.
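
One plausible way to cast such a learning-based scheduler is sketched below, under the assumption that it is a multi-label classifier over candidate links fed with per-cell SNR/blockage history and trained against genie-aided labels (PyTorch; all dimensions are illustrative).

```python
import torch
import torch.nn as nn

class LinkScheduler(nn.Module):
    """Multi-label link activation: given recent per-cell SNR/blockage
    observations, predict which candidate mmWave links to keep scheduled."""
    def __init__(self, n_cells, history_len, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_cells * history_len, hidden), nn.ReLU(),
            nn.Linear(hidden, n_cells))               # one logit per candidate link

    def forward(self, snr_history):
        return self.net(snr_history.flatten(start_dim=1))

model = LinkScheduler(n_cells=4, history_len=10)
logits = model(torch.randn(32, 4, 10))                # batch of SNR histories
schedule = torch.sigmoid(logits) > 0.5                # activate links above threshold
loss_fn = nn.BCEWithLogitsLoss()                      # e.g., trained against genie-aided schedules
```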


Scalable Learning Paradigms for Data-Driven Wireless Communication

arXiv.org Machine Learning

The marriage of wireless big data and machine learning techniques is revolutionizing wireless systems through a data-driven philosophy. However, the ever-exploding data volume and model complexity limit the ability of centralized solutions to learn and respond within a reasonable time. Therefore, scalability becomes a critical issue to be solved. In this article, we aim to provide a systematic discussion of the building blocks of scalable data-driven wireless networks. On one hand, we discuss the forward-looking architecture and computing framework of scalable data-driven systems from a global perspective. On the other hand, we discuss the learning algorithms and model training strategies performed at each individual node from a local perspective. We also highlight several promising research directions in the context of scalable data-driven wireless communications to inspire future research.


Deep Learning for Content-based Personalized Viewport Prediction of 360-Degree VR Videos

arXiv.org Machine Learning

In this paper, the problem of head movement prediction for virtual reality videos is studied. In the considered model, a deep learning network is introduced to leverage position data as well as video frame content to predict future head movement. To optimize the data input to this neural network, the data sample rate, reduced input data, and longer prediction horizons are also explored for this model. Simulation results show that the proposed approach yields a 16.1% improvement in prediction accuracy compared to a baseline approach that relies only on the position data.
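
A minimal sketch of a content-plus-position predictor of this kind, assuming PyTorch, yaw/pitch/roll orientation sequences, and precomputed frame-content features; the fusion scheme and dimensions are assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class ViewportPredictor(nn.Module):
    """Fuses past head orientations with a frame-content embedding to predict
    the head orientation one or more steps ahead."""
    def __init__(self, pos_dim=3, content_dim=128, hidden=64, horizon=1):
        super().__init__()
        self.pos_encoder = nn.GRU(pos_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + content_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, pos_dim * horizon))
        self.horizon, self.pos_dim = horizon, pos_dim

    def forward(self, pos_seq, frame_feat):
        _, h = self.pos_encoder(pos_seq)               # h: (num_layers, batch, hidden)
        fused = torch.cat([h[-1], frame_feat], dim=-1)
        return self.head(fused).view(-1, self.horizon, self.pos_dim)

model = ViewportPredictor()
pred = model(torch.randn(8, 30, 3),      # 30 past (yaw, pitch, roll) samples
             torch.randn(8, 128))        # precomputed frame-content features
```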


Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks

arXiv.org Machine Learning

In future 6th generation (6G) networks, ultra-reliable and low-latency communications (URLLC) will lay the foundation for emerging mission-critical applications that have stringent requirements on end-to-end delay and reliability. Existing works on URLLC are mainly based on theoretical models and assumptions. The model-based solutions provide useful insights, but cannot be directly implemented in practice. In this article, we first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC, and discuss some open problems of these methods. To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC. The basic idea is to merge theoretical models and real-world data in analyzing latency and reliability and in training deep neural networks (DNNs). Deep transfer learning is adopted in the architecture to fine-tune the pre-trained DNNs in non-stationary networks. Further, considering that the computing capacity at each user and each mobile edge computing server is limited, federated learning is applied to improve the learning efficiency. Finally, we provide some experimental and simulation results and discuss some future directions.
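
A small sketch of the kind of deep transfer learning step mentioned above, assuming PyTorch: a predictor pre-trained on model-generated data has its feature layers frozen and only its output layer fine-tuned on a few fresh samples from the non-stationary network. The network shape and data are placeholders, not the article's architecture.

```python
import torch
import torch.nn as nn

# Pre-trained latency/reliability predictor (e.g., trained offline on samples
# generated from theoretical models).
pretrained = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),     # feature layers (frozen below)
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1))                 # output layer (fine-tuned)

# Transfer step: freeze the feature layers and fine-tune only the output layer
# on a small batch of fresh samples from the non-stationary network.
for p in pretrained[:4].parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(pretrained[4].parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x_new, y_new = torch.randn(64, 16), torch.randn(64, 1)   # stand-in field data
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(pretrained(x_new), y_new)
    loss.backward()
    optimizer.step()
```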


Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems

arXiv.org Machine Learning

The stringent requirements of mobile edge computing (MEC) applications and functions necessitate the high-capacity and dense deployment of MEC hosts in upcoming wireless networks. However, operating such high-capacity MEC hosts can significantly increase energy consumption. Thus, a base station (BS) unit can harvest renewable energy and act as a self-powered BS. In this paper, an effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied. First, a two-stage linear stochastic programming problem is formulated with the goal of minimizing the total energy consumption cost of the system while fulfilling the energy demand. Second, a semi-distributed data-driven solution is proposed by developing a novel multi-agent meta-reinforcement learning (MAMRL) framework to solve the formulated problem. In particular, each BS plays the role of a local agent that explores a Markovian behavior of both energy consumption and generation, and transfers time-varying features to a meta-agent. Sequentially, the meta-agent optimizes (i.e., exploits) the energy dispatch decision by accepting only the observations from each local agent along with its own state information. Meanwhile, each BS agent estimates its own energy dispatch policy by applying the parameters learned from the meta-agent. Finally, the proposed MAMRL framework is benchmarked in deterministic, asymmetric, and stochastic environments in terms of non-renewable energy usage, energy cost, and accuracy. Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4% (with 95.8% prediction accuracy), compared to other baseline methods.
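
A minimal sketch of the local-agent/meta-agent split described above, assuming PyTorch, GRU encoders of each BS's demand/generation trace, and a dispatch decision expressed as the fraction of demand drawn from the non-renewable grid; these choices are assumptions, not the paper's exact MAMRL formulation.

```python
import torch
import torch.nn as nn

class LocalBSAgent(nn.Module):
    """Local exploration: encodes a BS's recent energy demand/generation into
    a time-varying feature that is transferred to the meta-agent."""
    def __init__(self, obs_dim=2, feat_dim=16):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, feat_dim, batch_first=True)

    def forward(self, obs_seq):
        _, h = self.rnn(obs_seq)
        return h[-1]                                   # (batch, feat_dim)

class MetaAgent(nn.Module):
    """Exploitation: maps each BS's transferred feature to an energy-dispatch
    decision (here, the fraction of demand drawn from the non-renewable grid)."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                                    nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, features):
        return self.policy(features)

agents = [LocalBSAgent() for _ in range(3)]            # three BS agents
obs = torch.randn(3, 24, 2)                            # 24-step demand/generation traces
features = torch.stack([a(obs[i:i+1]) for i, a in enumerate(agents)]).squeeze(1)
dispatch = MetaAgent()(features)                       # per-BS grid-energy fraction
```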


Cybersecurity tool uses machine learning, honeypots to stop attacks

#artificialintelligence

In recent months, the FBI issued a high-impact cybersecurity warning in response to increasing attacks on government targets. Government officials have warned major cities that such hacks are a disturbing trend likely to continue. Purdue University researchers may help stop some of those threats with a tool designed to alert organizations to cyberattacks. The system is called LIDAR, which stands for lifelong, intelligent, diverse, agile and robust. "The name for this architecture for network security really defines its significant attributes," said Aly El Gamal, an assistant professor of electrical and computer engineering in Purdue's College of Engineering.