Deep Learning for Radio Resource Allocation with Diverse Quality-of-Service Requirements in 5G

arXiv.org Machine Learning

To accommodate diverse Quality-of-Service (QoS) requirements in 5th generation cellular networks, base stations need real-time optimization of radio resources under time-varying network conditions, which brings high computing overheads and long processing delays. In this work, we develop a deep learning framework to approximate the optimal resource allocation policy that minimizes the total power consumption of a base station by optimizing bandwidth and transmit power allocation. We find that a fully-connected neural network (NN) cannot fully guarantee the QoS requirements due to its approximation errors and the quantization errors of the numbers of subcarriers. To tackle this problem, we propose a cascaded structure of NNs, where the first NN approximates the optimal bandwidth allocation, and the second NN outputs the transmit power required to satisfy the QoS requirement given that bandwidth allocation. Considering that the distribution of wireless channels and the types of services in wireless networks are non-stationary, we apply deep transfer learning to update the NNs in non-stationary wireless networks. Simulation results validate that the cascaded NNs outperform the fully-connected NN in terms of QoS guarantee. In addition, deep transfer learning can remarkably reduce the number of training samples required to train the NNs.

I. INTRODUCTION

A. Background

The 5th Generation (5G) cellular networks are expected to support various emerging applications with diverse Quality-of-Service (QoS) requirements, such as enhanced mobile broadband services, massive machine-type communications, and ultra-reliable and low-latency communications (URLLC). This paper has been presented in part at the IEEE Global Communications Conference 2019 [1]. The authors are with the School of Electrical and Information Engineering, University of Sydney, Sydney, NSW 2006, Australia. To guarantee the QoS requirements of different types of services, existing optimization algorithms for radio resource allocation are designed to maximize spectrum efficiency or energy efficiency by optimizing scarce radio resources, such as time-frequency resource blocks and transmit power, subject to QoS constraints [3-9]. There are two major challenges in implementing existing optimization algorithms in practical 5G networks. First, the QoS constraints of some services, such as delay-sensitive and URLLC services, may not have closed-form expressions. To execute an optimization algorithm, the system then needs to evaluate the QoS achieved by a given policy via extensive simulations or experiments, and thus suffers from long processing delays [9, 10]. Second, even when closed-form expressions of the QoS constraints can be obtained in some scenarios, the resulting optimization problems are non-convex in general [8, 10, 11].
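The cascaded idea can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation: the networks are untrained stand-ins, and the layer widths, user count, and round-up quantization rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def make_nn(n_in, n_hidden, n_out):
    """One-hidden-layer network with random (untrained) weights,
    standing in for a trained approximator."""
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
    W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
    return lambda x: relu(x @ W1) @ W2

N_USERS, N_SUBCARRIERS = 4, 64
bandwidth_nn = make_nn(N_USERS, 16, N_USERS)      # stage 1: bandwidth
power_nn = make_nn(2 * N_USERS, 16, N_USERS)      # stage 2: power

def cascaded_allocation(channel_gains):
    # Stage 1: approximate the optimal bandwidth allocation, then quantize
    # to an integer number of subcarriers (rounding up, so quantization
    # does not push a user below its QoS requirement).
    subcarriers = np.ceil(np.clip(bandwidth_nn(channel_gains), 1, N_SUBCARRIERS))
    # Stage 2: from the channel state and the *quantized* bandwidth,
    # output the transmit power needed to satisfy the QoS requirement.
    power = np.abs(power_nn(np.concatenate([channel_gains, subcarriers])))
    return subcarriers, power
```

The key design point is that stage 2 sees the quantized output of stage 1, so the power NN compensates for the quantization error a single fully-connected NN would leave uncorrected.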


Federated Learning for Task and Resource Allocation in Wireless High Altitude Balloon Networks

arXiv.org Machine Learning

In this paper, the problem of minimizing the energy and time consumption of task computation and transmission is studied in a mobile edge computing (MEC)-enabled balloon network. In the considered network, each user needs to process a computational task at each time instant, and high-altitude balloons (HABs), acting as flying wireless base stations, can use their powerful computational abilities to process the tasks offloaded from their associated users. Since the data size of each user's computational task varies over time, the HABs must dynamically adjust the user association, service sequence, and task partition scheme to meet the users' needs. This problem is posed as an optimization problem whose goal is to minimize the energy and time consumption of task computing and transmission by adjusting the user association, service sequence, and task allocation scheme. To solve this problem, a support vector machine (SVM)-based federated learning (FL) algorithm is proposed to determine the user association proactively. The proposed SVM-based FL method enables the HABs to cooperatively build an SVM model that can determine all user associations without transmitting either users' historical associations or computational tasks to other HABs. Given the predicted optimal user association, the service sequence and task allocation of each user can then be optimized so as to minimize the weighted sum of the energy and time consumption. Simulations using real city cellular traffic data from the OMNILab at Shanghai Jiao Tong University show that the proposed algorithm can reduce the weighted sum of the energy and time consumption of all users by up to 16.1% compared to a conventional centralized method.
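The privacy-preserving mechanism can be sketched as follows: each "HAB" runs a few subgradient steps of a linear SVM on its own data, and only the model weights are averaged. This is a minimal federated-averaging sketch under assumed hyperparameters, not the paper's exact algorithm.

```python
import numpy as np

def local_svm_update(w, X, y, lam=0.01, lr=0.1, epochs=20):
    """Subgradient descent on the regularized hinge loss of a linear SVM,
    run by one HAB on its own (private) user-association data."""
    for _ in range(epochs):
        margins = y * (X @ w)
        violators = margins < 1
        grad = lam * w - (X[violators] * y[violators, None]).sum(axis=0) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, local_datasets):
    """One FL round: every HAB refines the global SVM on its local data,
    and only the weight vectors (never the raw data) are averaged."""
    local_ws = [local_svm_update(w_global.copy(), X, y) for X, y in local_datasets]
    return np.mean(local_ws, axis=0)
```

Because only `w` crosses the air interface, the historical associations and task data of each HAB's users never leave that HAB.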


Neighborhood Information-based Probabilistic Algorithm for Network Disintegration

arXiv.org Artificial Intelligence

Many real-world systems can be modelled as complex networks; examples include the Internet, epidemic disease networks, transport networks, power grids, protein-folding structures and others. Network integrity and robustness are important to ensure that crucial networks are protected and that undesired harmful networks can be dismantled. Network structure and integrity can be controlled by a set of key nodes, and finding the optimal combination of nodes that controls a network's structure and integrity is an NP-complete problem. Despite extensive studies, existing methods have many limitations and many problems remain unresolved. This paper presents a probabilistic approach based on neighborhood information and node importance, namely, the neighborhood information-based probabilistic algorithm (NIPA). We also define a new centrality-based importance measure (IM), which combines the contribution ratios of the neighbor nodes of each target node with two-hop node information. The proposed NIPA has been tested on different network benchmarks and compared with three other methods: optimal attack strategy (OAS), high betweenness first (HBF) and high degree first (HDF). Experiments suggest that NIPA is the most effective of the four methods. In general, NIPA identifies the most crucial node combinations with higher effectiveness, and the set of optimal key nodes it finds is much smaller than that found by heuristic centrality prediction. In addition, many previously neglected weakly connected nodes are identified, and these become a crucial part of the newly identified optimal node sets. Revised protection strategies are therefore recommended to safeguard network integrity. Further key issues and future research topics are also discussed.
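The two ingredients, a neighborhood-aware importance measure and probabilistic (rather than greedy) node selection, can be sketched as below. The importance formula here is an illustrative stand-in for the paper's IM, not its exact definition.

```python
import numpy as np

def importance(adj):
    """Illustrative two-hop importance: a node's own degree plus the
    degree-weighted contribution of its neighbours (a stand-in for the
    paper's centrality-based IM)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    return {v: deg[v] + sum(deg[u] for u in ns) / max(deg[v], 1)
            for v, ns in adj.items()}

def probabilistic_select(adj, k, rng):
    """Sample k distinct target nodes with probability proportional to
    importance, instead of greedily taking the top-k (greedy selection can
    miss weakly connected but structurally crucial nodes)."""
    nodes = list(adj)
    im = importance(adj)
    p = np.array([im[v] for v in nodes], dtype=float)
    return set(rng.choice(nodes, size=k, replace=False, p=p / p.sum()))
```

Sampling instead of ranking is what lets the method occasionally include low-centrality nodes whose removal nonetheless disconnects the network.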


Recovering compressed images for automatic crack segmentation using generative models

arXiv.org Machine Learning

In a structural health monitoring (SHM) system that uses digital cameras to monitor cracks on structural surfaces, reliable and effective data compression is essential for stable and energy-efficient transmission of crack images from wireless devices, e.g., drones and robots fitted with high-definition cameras. Compressive sensing (CS) is a signal processing technique that allows accurate recovery of a signal from far fewer samples than the Nyquist sampling theorem requires. Conventional CS rests on the principle that, through a regularized optimization, the sparsity of the original signal in some domain can be exploited to obtain exact reconstruction with high probability. However, the strong assumption that the signal is highly sparse in an invertible space is rarely satisfied by real crack images. In this paper, we present a new CS approach that replaces the sparsity regularization with a generative model able to effectively capture a low-dimensional representation of the targeted images. We develop a recovery framework for automatic segmentation of compressed crack images based on this new CS method, and demonstrate its remarkable performance: the generative model captures the features needed for the crack segmentation task even when the backgrounds of the generated images are not well reconstructed. The superior performance of our recovery framework is illustrated by comparison with three existing CS algorithms. Furthermore, we show that the framework extends to other common problems in automatic crack segmentation, such as defect recovery from motion blur and occlusion.
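The core idea, searching the latent space of a generator for the code whose decoded image best matches the compressed measurements, can be demonstrated with a toy linear "generator" (all sizes and the linear decoder are illustrative assumptions; a real system would use a trained deep generator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a trained deep generator: a fixed linear decoder
# G(z) = G @ z from a 5-dim latent code to a 100-"pixel" image.
n_pixels, n_latent, n_measure = 100, 5, 30
G = rng.normal(size=(n_pixels, n_latent)) / np.sqrt(n_pixels)
A = rng.normal(size=(n_measure, n_pixels)) / np.sqrt(n_measure)  # CS sensing matrix

def recover(y, steps=500, lr=0.3):
    """Replace the sparsity prior of classical CS with the generator's
    range: find z minimizing ||A G(z) - y||^2 by gradient descent, then
    decode it. Only 30 measurements are taken of the 100-pixel image."""
    z = np.zeros(n_latent)
    for _ in range(steps):
        residual = A @ (G @ z) - y
        z -= lr * (G.T @ (A.T @ residual))
    return G @ z  # the reconstruction lives on the generator's range
```

With a deep generator the same loop is run with automatic differentiation; the reconstruction is then constrained to "look like" the training images, which is exactly why crack features survive even when backgrounds do not.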


Federated Learning for Resource-Constrained IoT Devices: Panoramas and State-of-the-art

arXiv.org Machine Learning

Nowadays, devices are equipped with advanced sensors and ever-greater processing and computing capabilities, and widespread Internet availability enables communication among sensing devices. As a result, vast amounts of data are generated on edge devices to drive the Internet-of-Things (IoT), crowdsourcing, and other emerging technologies. The collected data can be pre-processed, scaled, classified, and, finally, used to predict future events with machine learning (ML) methods. In traditional ML approaches, data is sent to and processed at a central server, which incurs communication overhead, processing delay, privacy leakage, and security issues. To overcome these challenges, each client can instead train locally on its available data while learning from a shared global model. This decentralized learning structure is referred to as Federated Learning (FL). In large-scale networks, however, clients may have widely varying computational resources, which leads to implementation and scalability challenges for FL techniques. In this paper, we first introduce some recently deployed real-life applications of FL. We then emphasize the core challenges of implementing FL algorithms from the perspective of the resource limitations (e.g., memory, bandwidth, and energy budget) of client devices. We finally discuss open issues associated with FL and highlight future directions in the FL area concerning resource-constrained devices.


BB_Evac: Fast Location-Sensitive Behavior-Based Building Evacuation

arXiv.org Artificial Intelligence

Prime examples include the World Trade Center and the Pentagon in 2001. Other buildings that needed evacuation during terror attacks include the Westfield Mall in Kenya, and the Taj and Oberoi Hotels in Mumbai. In November 2015, at least two major airports (London and Miami) had to be partly evacuated. These situations have led to work on building evacuation models in both the operations research [1, 2, 3] and AI communities [4, 5, 6]. Yet all of these works are based on the assumption that, in an emergency, people will do what they are told. However, if you are in a building at location L when a fire, terrorist attack, or earthquake occurs, and you are told to move along a given route to an exit e that you know is further away than the nearest exit e′, would you do so? Often, the answer is no. Past work on building evacuation assumes people will do what they are told rather than choosing behaviors that are individually optimal but globally sub-optimal. There is a long history of work in the firefighting and emergency response communities on understanding human behavior in such emergencies.


Wireless Power Control via Counterfactual Optimization of Graph Neural Networks

arXiv.org Machine Learning

We consider the problem of downlink power control in wireless networks consisting of multiple transmitter-receiver pairs communicating over a single shared wireless medium. To mitigate the interference among concurrent transmissions, we leverage the network topology to create a graph neural network architecture, and we then use an unsupervised primal-dual counterfactual optimization approach to learn optimal power allocation decisions. We show how the counterfactual optimization technique allows us to guarantee a minimum-rate constraint that adapts to the network size, hence achieving the right balance between average and 5th-percentile user rates throughout a range of network configurations.
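The primal-dual mechanism behind the minimum-rate guarantee can be illustrated without the GNN: a dual variable per user grows while that user's rate is below the floor, tilting the power update toward constrained users. The symmetric channel matrix, rate model, and step sizes below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Symmetric toy network: 4 transmitter-receiver pairs, direct gains on the
# diagonal and equal cross-interference gains off it.
n = 4
H = np.full((n, n), 0.2) + np.eye(n)
NOISE, P_MAX = 0.1, 1.0

def rates(p):
    signal = np.diag(H) * p
    interference = H @ p - signal
    return np.log2(1.0 + signal / (NOISE + interference))

def grad(f, x, eps=1e-6):
    """Central-difference gradient of a scalar function."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def primal_dual(r_min=0.5, steps=300, lr=0.02):
    """Primal ascent on a dual-weighted sum rate, dual ascent on the
    minimum-rate constraint violations r_min - r_i."""
    p = np.full(n, 0.5 * P_MAX)
    lam = np.zeros(n)
    for _ in range(steps):
        w = 1.0 + lam                                   # dual-adjusted weights
        p = np.clip(p + lr * grad(lambda q: w @ rates(q), p), 1e-3, P_MAX)
        lam = np.maximum(lam + lr * (r_min - rates(p)), 0.0)  # dual ascent
    return p, rates(p)
```

When a user's rate stays above `r_min`, its dual variable remains zero and the update reduces to plain sum-rate ascent; only constrained users get extra weight.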


Network Flow Algorithms for Structured Sparsity

Neural Information Processing Systems

Whereas much effort has been put into developing fast optimization methods for structured sparsity-inducing norms when the groups of variables are disjoint or embedded in a specific hierarchical structure, we address here the case of general overlapping groups. To this end, we show that the corresponding optimization problem is related to network flow optimization. More precisely, the proximal problem associated with the norm we consider is dual to a quadratic min-cost flow problem. We propose an efficient procedure that computes its solution exactly in polynomial time. Our algorithm scales up to millions of groups and variables, and opens up a whole new range of applications for structured sparse models.
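For intuition, here is the easy special case the paper generalizes: with disjoint groups, the proximal operator of the group-sparsity norm has a closed form, blockwise soft-thresholding. The overlapping case has no such closed form, which is where the quadratic min-cost flow machinery comes in.

```python
import numpy as np

def prox_group_l2(v, groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 for DISJOINT groups:
    each block is shrunk toward zero, and set entirely to zero once its
    norm falls below lam (blockwise soft-thresholding)."""
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * v[g]
    return out
```

Zeroing whole blocks at once is what makes the penalty select or discard groups of variables together, the defining behavior of structured sparsity.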


Large-Scale Price Optimization via Network Flow

Neural Information Processing Systems

This paper deals with price optimization: finding the pricing strategy that maximizes revenue or profit on the basis of demand forecasting models. Although recent advances in regression technologies have made it possible to reveal the price-demand relationships of many products, most existing price optimization methods, such as mixed-integer programming formulations, cannot handle tens or hundreds of products because of their high computational cost. To cope with this problem, this paper proposes a novel approach based on network flow algorithms. We reveal a connection between the supermodularity of the revenue and the cross elasticity of demand, and on the basis of this connection we propose an efficient algorithm that employs network flow techniques.
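To make the scaling barrier concrete, here is a hypothetical two-product linear demand model with cross-price elasticity, priced by exhaustive grid search. All the demand coefficients are illustrative assumptions; the point is that this baseline is exponential in the number of products, which is exactly what the paper's network-flow formulation avoids.

```python
import itertools
import numpy as np

A = np.array([100.0, 80.0])       # base demand per product
B = np.array([[10.0, 2.0],        # diagonal: own-price sensitivity
              [3.0, 8.0]])        # off-diagonal: cross elasticity

PRICE_GRID = np.arange(1.0, 10.0, 0.5)

def revenue(prices):
    """Revenue under linear demand: d_i = A_i - B_ii p_i + sum_j B_ij p_j."""
    p = np.asarray(prices)
    own = np.diag(B) * p
    cross = (B - np.diag(np.diag(B))) @ p
    demand = np.maximum(A - own + cross, 0.0)
    return float(p @ demand)

def brute_force_optimum():
    """Exhaustive search over the price grid: O(|grid|^n) in the number of
    products n, i.e. hopeless beyond a handful of products."""
    return max(itertools.product(PRICE_GRID, repeat=2), key=revenue)
```

The positive cross terms (raising one price shifts demand to the other product) are the cross-elasticity structure that, per the abstract, yields supermodular revenue and thus a network-flow-solvable problem.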


Stochastic Network Design in Bidirected Trees

Neural Information Processing Systems

We investigate the problem of stochastic network design in bidirected trees. In this problem, an underlying phenomenon (e.g., a behavior, rumor, or disease) starts at multiple sources in a tree and spreads in both directions along its edges. Actions can be taken to increase the probability of propagation on edges, and the goal is to maximize the total amount of spread away from all sources. Our main result is a rounded dynamic programming approach that leads to a fully polynomial-time approximation scheme (FPTAS), that is, an algorithm that can find (1 − ε)-optimal solutions for any problem instance in time polynomial in the input size and 1/ε. Our algorithm outperforms competing approaches on a motivating problem from computational sustainability: removing barriers in river networks to restore the health of aquatic ecosystems.
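The "rounded dynamic programming" idea is most easily seen on the textbook knapsack FPTAS: values are rounded to a coarse grid so the DP table stays polynomial in n and 1/ε, while at most an ε fraction of the optimum is lost. This is the classic technique the paper adapts to trees, not the paper's own algorithm.

```python
def fptas_knapsack(values, weights, capacity, eps):
    """Return a total value between (1 - eps) * OPT and OPT, assuming every
    single item fits within the capacity. The DP over rounded values runs
    in time polynomial in the number of items and 1/eps."""
    n = len(values)
    k = eps * max(values) / n             # rounding granularity
    scaled = [int(v // k) for v in values]
    INF = float("inf")
    # dp[s] = minimum weight needed to reach scaled value exactly s
    dp = [0.0] + [INF] * sum(scaled)
    for v, w in zip(scaled, weights):
        for s in range(len(dp) - 1, v - 1, -1):
            if dp[s - v] + w < dp[s]:
                dp[s] = dp[s - v] + w
    best = max(s for s, wt in enumerate(dp) if wt <= capacity)
    return k * best
```

Each item loses less than k in value to rounding, so the total loss is below n·k = ε·max(values) ≤ ε·OPT, which gives the (1 − ε) guarantee.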