Moscibroda, Thomas
An Advanced Reinforcement Learning Framework for Online Scheduling of Deferrable Workloads in Cloud Computing
Dong, Hang, Zhu, Liwen, Shan, Zhao, Qiao, Bo, Yang, Fangkai, Qin, Si, Luo, Chuan, Lin, Qingwei, Yang, Yuwen, Virdi, Gurpreet, Rajmohan, Saravan, Zhang, Dongmei, Moscibroda, Thomas
Efficient resource utilization and a good user experience often conflict with each other on cloud computing platforms. Great effort has been invested in increasing resource utilization while trying not to degrade the user experience. To better utilize the fragmented computing resources spread across the platform, deferrable jobs are offered to users at a discounted price: users may submit jobs that run for a specified, uninterrupted duration within a flexible future time window, in exchange for a substantial discount. Since these deferrable jobs must be scheduled within the capacity left over after on-demand jobs are deployed, it remains a challenge to achieve high resource utilization while keeping users' waiting time as short as possible in an online manner. In this paper, we propose an online deferrable job scheduling method called \textit{Online Scheduling for DEferrable jobs in Cloud} (\OSDEC{}), in which a deep reinforcement learning model is adopted to learn the scheduling policy, and several auxiliary tasks are utilized to provide better state representations and improve the performance of the model. With this integrated reinforcement learning framework, the proposed method plans the deployment schedule well and achieves short waiting times for users while maintaining high resource utilization for the platform. The proposed method is validated on a public dataset and shows superior performance.
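To make the setting concrete, the offline (hindsight) version of the scheduling problem can be sketched roughly as follows; the notation is illustrative and not taken from the paper. Job $j$ arrives at time $a_j$, requests $c_j$ units of capacity for an uninterrupted duration $d_j$, must complete by a deadline $\bar{t}_j$, and is assigned a start time $t_j$; $C(\tau)$ denotes the capacity left at time $\tau$ after on-demand jobs are placed:
\[
\min_{\{t_j\}} \sum_{j} \big(t_j - a_j\big)
\quad \text{s.t.} \quad
\sum_{j:\; t_j \le \tau < t_j + d_j} c_j \le C(\tau) \;\; \forall \tau,
\qquad
a_j \le t_j \le \bar{t}_j - d_j \;\; \forall j.
\]
The online difficulty is that each $t_j$ must be committed without knowledge of future arrivals or of the future on-demand load that shapes $C(\tau)$, which is what the learned scheduling policy in \OSDEC{} has to cope with.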
Conservative State Value Estimation for Offline Reinforcement Learning
Chen, Liting, Yan, Jie, Shao, Zhengdao, Wang, Lu, Lin, Qingwei, Rajmohan, Saravan, Moscibroda, Thomas, Zhang, Dongmei
Offline reinforcement learning faces a significant challenge of value over-estimation due to the distributional shift between the dataset and the current learned policy, which leads to learning failure in practice. The common approach is to incorporate a penalty term into the reward or value estimate in the Bellman iterations. To avoid extrapolating to out-of-distribution (OOD) states and actions, existing methods focus on conservative Q-function estimation. In this paper, we propose Conservative State Value Estimation (CSVE), a new approach that learns a conservative V-function by directly imposing a penalty on OOD states. Compared to prior work, CSVE allows more effective state value estimation with conservative guarantees and, in turn, better policy optimization. Building on CSVE, we develop a practical actor-critic algorithm in which the critic performs the conservative value estimation by additionally sampling and penalizing the states \emph{around} the dataset, and the actor applies advantage-weighted updates extended with state exploration to improve the policy. We evaluate our method on the classic continuous control tasks of D4RL, showing that it outperforms conservative Q-function learning methods and is strongly competitive with recent SOTA methods.
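One way to read the CSVE idea, written in a CQL-style form (this is a hedged sketch of the kind of objective described, not necessarily the exact formulation in the paper), is to penalize the value of states drawn from a sampling distribution $d$ around the dataset $\mathcal{D}$ while anchoring the values of dataset states, on top of the usual Bellman error:
\[
\hat{V} \leftarrow \arg\min_{V} \;
\alpha \Big( \mathbb{E}_{s \sim d}\big[V(s)\big] - \mathbb{E}_{s \sim \mathcal{D}}\big[V(s)\big] \Big)
+ \tfrac{1}{2}\, \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}
\Big[ \big( r + \gamma V(s') - V(s) \big)^{2} \Big],
\]
where $\alpha > 0$ controls the degree of conservatism and $\gamma$ is the discount factor. The penalty pushes $V$ down on OOD states and up on dataset states, mirroring how conservative Q-learning treats OOD actions.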
Incentive Networks
Lv, Yuezhou (IIIS, Tsinghua University) | Moscibroda, Thomas (Microsoft Research)
In a basic economic system, each participant receives a (financial) reward according to his own contribution to the system. In this work, we study an alternative approach, Incentive Networks, in which a participant's reward depends not only on his own contribution, but also in part on the contributions made by his social contacts or friends. We show that the key parameter affecting the efficiency of such an Incentive Network-based economic system is the participants' degree of directed altruism. Directed altruism is the extent to which someone is willing to work when his work results in a payment to his friend rather than to himself. Specifically, we characterize the condition under which an Incentive Network-based economy is more efficient than the basic "pay-for-your-contribution" economy. We quantify by how much incentive networks can reduce the total reward that needs to be paid to the participants in order to achieve a certain overall contribution. Finally, we study the impact of the network topology and various exogenous parameters on the efficiency of incentive networks. Our results suggest that in many practical settings, Incentive Network-based reward systems or compensation structures could be more efficient than the ubiquitous 'pay-for-your-contribution' schemes.
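To make the role of directed altruism concrete, one simple way to model it (the notation is illustrative, not taken from the paper) is to let participant $i$'s perceived utility be
\[
u_i \;=\; r_i \;+\; \beta \sum_{j \in N(i)} r_j \;-\; c(e_i),
\]
where $e_i$ is $i$'s contribution, $c(\cdot)$ its cost, $r_i$ the payment $i$ receives, $N(i)$ the set of $i$'s friends, and $\beta \in [0,1]$ the degree of directed altruism, i.e., how much a payment to a friend motivates $i$ relative to the same payment made to himself. In an incentive network, part of the reward generated by $i$'s contribution is paid out along such edges, so whether this beats paying everyone exactly for his own contribution hinges on $\beta$, the network topology, and the cost structure.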