Yu, Wei
On the Choice of Perception Loss Function for Learned Video Compression
Salehkalaibar, Sadaf, Phan, Buu, Chen, Jun, Yu, Wei, Khisti, Ashish
We study causal, low-latency, sequential video compression when the output is subject to both a mean squared error (MSE) distortion loss and a perception loss that targets realism. Motivated by prior approaches, we consider two different perception loss functions (PLFs). The first, PLF-JD, considers the joint distribution (JD) of all the video frames up to the current one, while the second, PLF-FMD, considers the framewise marginal distributions (FMD) between the source and reconstruction. Using information-theoretic analysis and deep-learning-based experiments, we demonstrate that the choice of PLF can have a significant effect on the reconstruction, especially at low bit rates. In particular, while the reconstruction based on PLF-JD can better preserve the temporal correlation across frames, it also imposes a significant penalty in distortion compared to PLF-FMD and makes it more difficult to recover from errors made in earlier output frames. Although the choice of PLF decisively affects reconstruction quality, we also demonstrate that it may not be essential to commit to a particular PLF during encoding, and the choice of PLF can be delegated to the decoder. In particular, encoded representations generated by training a system to minimize the MSE (without requiring either PLF) can be {\em near universal} and can generate close-to-optimal reconstructions for either choice of PLF at the decoder. We validate our results using (one-shot) information-theoretic analysis, a detailed study of the rate-distortion-perception tradeoff of the Gauss-Markov source model, as well as deep-learning-based experiments on the moving MNIST and KTH datasets.
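As a rough formalization of the two perception constraints described above (the notation below is assumed for illustration and not taken from the paper), write $X^t = (X_1, \ldots, X_t)$ for the source frames and $\hat{X}^t$ for their reconstructions; with $d(\cdot,\cdot)$ a divergence between distributions and $\lambda$ a realism budget, the two PLFs can be sketched as

```latex
\underbrace{d\big(P_{X^t},\, P_{\hat{X}^t}\big) \le \lambda}_{\text{PLF-JD: joint distribution of frames up to } t}
\qquad \text{vs.} \qquad
\underbrace{d\big(P_{X_t},\, P_{\hat{X}_t}\big) \le \lambda \ \ \forall t}_{\text{PLF-FMD: framewise marginals}}
```

with either constraint imposed alongside the per-frame MSE distortion constraint $\mathbb{E}\|X_t - \hat{X}_t\|^2 \le D$.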
Reinforcement Learning with Non-Cumulative Objective
Cui, Wei, Yu, Wei
In reinforcement learning, the objective is almost always defined as a \emph{cumulative} function of the rewards along the process. However, there are many optimal control and reinforcement learning problems in various application fields, especially in communications and networking, where the objectives are not naturally expressed as summations of the rewards. In this paper, we recognize the prevalence of non-cumulative objectives in various problems and propose a modification to existing algorithms for optimizing such objectives. Specifically, we focus on the fundamental building block of many optimal control and reinforcement learning algorithms: the Bellman optimality equation. To optimize a non-cumulative objective, we replace the original summation operation in the Bellman update rule with a generalized operation corresponding to the objective. Furthermore, we provide sufficient conditions on the form of the generalized operation, as well as assumptions on the Markov decision process, under which the globally optimal convergence of the generalized Bellman updates can be guaranteed. We demonstrate the idea experimentally with the bottleneck objective, i.e., the objective determined by the minimum reward along the process, on classical optimal control and reinforcement learning tasks, as well as on two network routing problems that maximize the flow rates.
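To make the "generalized operation" concrete, below is a minimal tabular sketch of the idea for the bottleneck objective, where the summation in the Bellman update is replaced by a minimum; the deterministic MDP setup and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bottleneck_value_iteration(next_state, R, num_iters=500):
    """Value iteration with a generalized (min) Bellman update.

    Sketch only: the standard update V(s) <- max_a [ R(s,a) + V(s') ] sums the
    immediate reward with the future value; for the bottleneck objective
    (minimum reward along the trajectory) the sum is replaced by a minimum.
    Assumes a deterministic MDP given by next_state[s, a] and rewards R[s, a].
    """
    S, A = R.shape
    V = np.full(S, R.max())               # start from an optimistic value
    for _ in range(num_iters):
        Q = np.minimum(R, V[next_state])  # generalized Bellman operator
        V = Q.max(axis=1)                 # greedy maximization over actions
    return V
```

Replacing the minimum with an addition (plus a discount factor) recovers standard value iteration, which is the sense in which the update is generalized.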
Uncertainty Injection: A Deep Learning Method for Robust Optimization
Cui, Wei, Yu, Wei
This paper proposes a paradigm of uncertainty injection for training deep learning models to solve robust optimization problems. The majority of existing studies on deep learning focus on the model's learning capability while assuming that the quality and accuracy of the input data can be guaranteed. However, in realistic applications of deep learning for solving optimization problems, the accuracy of the inputs, which are the problem parameters in this case, plays a large role. This is because, in many situations, it is costly or sometimes impossible to obtain the problem parameters accurately; correspondingly, it is highly desirable to develop learning algorithms that can account for the uncertainties in the input and produce solutions that are robust against these uncertainties. This paper presents a novel uncertainty injection scheme for training machine learning models that are capable of implicitly accounting for the uncertainties and producing statistically robust solutions. We further identify wireless communications as an application field where uncertainties are prevalent in problem parameters such as the channel coefficients. We show the effectiveness of the proposed training scheme in two applications: robust power loading for multiuser multiple-input multiple-output (MIMO) downlink transmissions, and robust power control for device-to-device (D2D) networks.
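A minimal sketch of what such an uncertainty injection training step could look like, assuming a Gaussian perturbation model for the estimated parameters and a differentiable utility; the names (model, utility_fn, noise_std) are illustrative assumptions rather than the paper's code.

```python
import torch

def uncertainty_injection_step(model, optimizer, h_est, utility_fn,
                               noise_std=0.1, num_samples=8):
    """One training step with uncertainty injection (illustrative sketch).

    h_est holds estimated problem parameters (e.g., channel coefficients).
    The model produces a solution from the estimates alone, but the training
    objective averages the utility over several noisy realizations of the
    true parameters, so the learned solution is statistically robust.
    """
    solution = model(h_est)                       # decision based on estimates only
    losses = []
    for _ in range(num_samples):
        h_true = h_est + noise_std * torch.randn_like(h_est)  # injected uncertainty
        losses.append(-utility_fn(solution, h_true))          # maximize expected utility
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```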
What Makes an Effective Scalarising Function for Multi-Objective Bayesian Optimisation?
Stock-Williams, Clym, Chugh, Tinkle, Rahat, Alma, Yu, Wei
Performing multi-objective Bayesian optimisation by scalarising the objectives avoids the computation of expensive multi-dimensional integral-based acquisition functions, instead allowing one-dimensional standard acquisition functions, such as Expected Improvement, to be applied. Here, two infill criteria based on hypervolume improvement, one recently introduced and one novel, are compared with the multi-surrogate Expected Hypervolume Improvement. The reasons for the disparities in these methods' effectiveness in maximising the hypervolume of the acquired Pareto front are investigated. In addition, the effect of the surrogate model's mean function on exploration and exploitation is examined: careful choice of data normalisation is shown to be preferable to the exploration parameter commonly used with the Expected Improvement acquisition function. Finally, the effectiveness of all the methodological improvements defined here is demonstrated on a real-world problem: the optimisation of a wind turbine blade aerofoil for both aerodynamic performance and structural stiffness. With effective scalarisation, Bayesian optimisation finds a large number of new aerofoil shapes that strongly dominate standard designs.
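As an illustration of a hypervolume-improvement-based scalarisation (a sketch only; the specific infill criteria studied in the paper are not reproduced here), the following computes, for a two-objective minimisation problem, how much a candidate point would grow the hypervolume dominated by the current Pareto front.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-D point set w.r.t. reference point ref
    (minimisation of both objectives assumed)."""
    front = front[np.argsort(front[:, 0])]      # sort by first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += max(ref[0] - f1, 0.0) * max(prev_f2 - f2, 0.0)
        prev_f2 = min(prev_f2, f2)
    return hv

def hv_improvement(candidate, front, ref):
    """Scalar fitness of a candidate: the growth in dominated hypervolume
    if the candidate were added to the current Pareto front."""
    return (hypervolume_2d(np.vstack([front, candidate]), ref)
            - hypervolume_2d(front, ref))
```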
Scalable Deep Reinforcement Learning for Routing and Spectrum Access in Physical Layer
Cui, Wei, Yu, Wei
This paper proposes a novel and scalable reinforcement learning approach for simultaneous routing and spectrum access in wireless ad-hoc networks. In most previous works on reinforcement learning for network optimization, routing and spectrum access are tackled as separate tasks; further, the wireless links in the network are assumed to be fixed, and a different agent is trained for each transmission node, which limits scalability and generalizability. In this paper, we account for the signal-to-interference-plus-noise ratio (SINR) inherent in the physical layer and propose a more scalable approach in which a single agent is associated with each flow. Specifically, a single agent makes all routing and spectrum access decisions as it moves along the frontier nodes of each flow. The agent is trained according to the physical layer characteristics of the environment using the future bottleneck SINR as a novel reward definition. This allows a highly effective routing strategy based on the geographic locations of the nodes in the wireless ad-hoc network. The proposed deep reinforcement learning strategy is capable of accounting for the mutual interference between the links. It learns to avoid interference by intelligently allocating spectrum slots and making routing decisions for the entire network in a scalable manner.
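To illustrate the reward described above, here is a minimal sketch of a bottleneck-SINR computation over a candidate route; the channel-gain matrix, power vector, and noise model are illustrative assumptions rather than the paper's exact environment.

```python
import numpy as np

def link_sinr(gains, tx_power, link, noise_power=1e-9):
    """SINR of one link, given a channel-gain matrix gains[i, j] from
    transmitter i to receiver j and per-node transmit powers (sketch)."""
    tx, rx = link
    signal = tx_power[tx] * gains[tx, rx]
    interference = sum(tx_power[i] * gains[i, rx]
                       for i in range(len(tx_power)) if i != tx)
    return signal / (interference + noise_power)

def bottleneck_sinr(route, gains, tx_power):
    """Bottleneck (minimum) SINR over the links of a route, loosely mirroring
    the bottleneck-SINR reward used to train the routing agent."""
    links = list(zip(route[:-1], route[1:]))
    return min(link_sinr(gains, tx_power, l) for l in links)
```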
Action Concept Grounding Network for Semantically-Consistent Video Generation
Yu, Wei, Chen, Wenxin, Easterbrook, Steve, Garg, Animesh
Recent works in self-supervised video prediction have mainly focused on passive forecasting and low-level action-conditional prediction, which sidestep the problem of semantic learning. We introduce the task of semantic action-conditional video prediction, which can be regarded as an inverse problem of action recognition. The challenge of this new task primarily lies in how to effectively inform the model of semantic action information. To bridge vision and language, we utilize the idea of capsules and propose a novel video prediction model, the Action Concept Grounding Network (ACGN). Our method is evaluated on two newly designed synthetic datasets, CLEVR-Building-Blocks and Sapien-Kitchen, and experiments show that given different action labels, ACGN can correctly condition on instructions and generate the corresponding future frames without the need for bounding boxes. We further demonstrate that our trained model can make out-of-distribution predictions for concurrent actions, be quickly adapted to new object categories, and exploit its learnt features for object detection. Additional visualizations can be found at https://iclr-acgn.github.io/ACGN/.
Multi-Agent Reinforcement Learning for Adaptive User Association in Dynamic mmWave Networks
Sana, Mohamed, De Domenico, Antonio, Yu, Wei, Lostanlen, Yves, Strinati, Emilio Calvanese
Network densification and millimeter-wave technologies are key enablers to fulfill the capacity and data rate requirements of the fifth generation (5G) of mobile networks. In this context, designing low-complexity policies that rely only on local observations, yet are able to adapt the user association to the global network state and to the network dynamics, is a challenge. In fact, the frameworks proposed in the literature require continuous access to global network information and recomputation of the association whenever the radio environment changes. Given the complexity associated with such an approach, these solutions are not well suited to dense 5G networks. In this paper, we address this issue by designing a scalable and flexible algorithm for user association based on multi-agent reinforcement learning. In this approach, users act as independent agents that, based on their local observations only, learn to autonomously coordinate their actions in order to optimize the network sum rate. Since there is no direct information exchange among the agents, we also limit the signaling overhead. Simulation results show that the proposed algorithm is able to adapt to (fast) changes of the radio environment, thus providing large sum-rate gains in comparison to state-of-the-art solutions.
Recovering the parameters underlying the Lorenz-96 chaotic dynamics
Mouatadid, Soukayna, Gentine, Pierre, Yu, Wei, Easterbrook, Steve
Climate projections suffer from uncertain equilibrium climate sensitivity. The reason behind this uncertainty is the resolution of global climate models, which is too coarse to resolve key processes such as clouds and convection. These processes are approximated using heuristics in a process called parameterization. The selection of these parameters can be subjective, leading to significant uncertainties in the way clouds are represented in global climate models. Here, we explore three deep network algorithms to infer these parameters in an objective and data-driven way. We compare the performance of a fully connected network, a one-dimensional convolutional network, and a two-dimensional convolutional network in recovering the underlying parameters of the Lorenz-96 model, a non-linear dynamical system whose behavior resembles that of the climate system.
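For reference, the single-scale Lorenz-96 dynamics and a simple way to generate training trajectories are sketched below; the forcing term F is the canonical parameter of this system, while the integrator, step size, and initial conditions here are illustrative choices rather than the paper's setup.

```python
import numpy as np

def lorenz96_step(x, F, dt=0.01):
    """One forward-Euler step of the single-scale Lorenz-96 system
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F  (cyclic indices)."""
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * dxdt

def simulate(F=8.0, n=40, steps=1000, seed=0):
    """Generate a trajectory that could serve as input to a network trained
    to recover the underlying parameter(s), here the forcing F."""
    rng = np.random.default_rng(seed)
    x = F + 0.01 * rng.standard_normal(n)   # small perturbation around F
    traj = np.empty((steps, n))
    for t in range(steps):
        x = lorenz96_step(x, F)
        traj[t] = x
    return traj
```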
Towards Optimal Power Control via Ensembling Deep Neural Networks
Liang, Fei, Shen, Cong, Yu, Wei, Wu, Feng
A deep neural network (DNN) based power control method is proposed, which aims at solving the non-convex optimization problem of maximizing the sum rate of a multi-user interference channel. Towards this end, we first present PCNet, which is a multi-layer fully connected neural network that is specifically designed for the power control problem. PCNet takes the channel coefficients as input and outputs the transmit power of all users. A key challenge in training a DNN for the power control problem is the lack of ground truth, i.e., the optimal power allocation is unknown. To address this issue, PCNet leverages the unsupervised learning strategy and directly maximizes the sum rate in the training phase. Observing that a single PCNet does not globally outperform the existing solutions, we further propose ePCNet, a network ensemble with multiple PCNets trained independently. Simulation results show that for the standard symmetric multi-user Gaussian interference channel, ePCNet can outperform all state-of-the-art power control methods by 1.2%-4.6% under a variety of system configurations. Furthermore, the performance improvement of ePCNet comes with a reduced computational complexity.
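The unsupervised training objective described above, maximizing the sum rate directly in the absence of ground-truth power allocations, could be sketched as the following differentiable loss (a minimal sketch; tensor shapes and names are assumptions, not the paper's code).

```python
import torch

def negative_sum_rate(power, gains, noise_power=1.0):
    """Negative sum rate of a K-user Gaussian interference channel.

    power: (batch, K) transmit powers produced by the network.
    gains: (batch, K, K) channel gains, gains[b, i, j] from Tx i to Rx j.
    """
    rx_power = power.unsqueeze(2) * gains                # (batch, K, K) received powers
    signal = torch.diagonal(rx_power, dim1=1, dim2=2)    # (batch, K) desired signal
    interference = rx_power.sum(dim=1) - signal          # (batch, K) cross interference
    sinr = signal / (interference + noise_power)
    rate = torch.log2(1.0 + sinr).sum(dim=1)             # sum rate per sample
    return -rate.mean()                                  # minimize negative sum rate
```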
The state-of-the-art in web-scale semantic information processing for cloud computing
Yu, Wei, Chen, Junpeng
Based on an integrated infrastructure for resource sharing and computing in a distributed environment, cloud computing provides dynamically scalable, virtualized resources as services over the Internet. These applications also bring large-scale, heterogeneous, and distributed information, which poses a great challenge in terms of semantic ambiguity. It is critical for application services in the cloud computing environment to provide users with intelligent services and precise information. Semantic information processing can help users deal with semantic ambiguity and information overload efficiently through appropriate semantic models and semantic information processing technology. Semantic information processing has been successfully employed in many fields such as knowledge representation, natural language understanding, and intelligent web search. The purpose of this report is to give an overview of existing technologies for semantic information processing in the cloud computing environment, and to propose a research direction for addressing distributed semantic reasoning and parallel semantic computing by exploiting semantic information newly available in this environment.