finetune

Distilled Wasserstein Learning for Word Embedding and Topic Modeling

Hongteng Xu, Wenlin Wang, Wei Liu, Lawrence Carin

Neural Information Processing Systems

The word distributions of topics, their optimal transports to the word distributions of documents, and the embeddings of words are learned in a unified framework. When learning the topic model, we leverage a distilled underlying distance matrix to update the topic distributions and smoothly calculate the corresponding optimal transports.
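As a rough illustration of the optimal-transport step such a framework relies on, here is a minimal Sinkhorn sketch for transporting a topic's word distribution to a document's word distribution under a given cost matrix. This is a generic entropic-regularized OT solver, not the authors' implementation; the cost matrix M, the regularization weight reg, and the toy distributions are placeholders (the paper's distillation of the distance matrix is not reproduced here).

```python
# Hypothetical sketch: entropic-regularized optimal transport (Sinkhorn)
# between a topic's word distribution p and a document's word distribution q.
import numpy as np

def sinkhorn(p, q, M, reg=0.1, n_iters=200):
    """Approximate OT plan between distributions p and q under cost M."""
    K = np.exp(-M / reg)                 # Gibbs kernel from the cost matrix
    u = np.ones_like(p)
    for _ in range(n_iters):
        v = q / (K.T @ u)                # scale columns to match q
        u = p / (K @ v)                  # scale rows to match p
    return u[:, None] * K * v[None, :]   # transport plan with marginals p, q

# Toy usage: 5-word vocabulary, random cost matrix (stand-in for the
# distilled distance matrix).
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(5))            # topic word distribution
q = rng.dirichlet(np.ones(5))            # document word distribution
M = rng.random((5, 5))
T = sinkhorn(p, q, M)
print(np.allclose(T.sum(axis=1), p))     # True: row marginals match p
```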


On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning - Supplementary Material

Neural Information Processing Systems

If α > β, we are overemphasizing the contribution of the first term of Eq. 9 (which brings each layer's λ_k,1 and c_k close to each other) over the second one (which induces small Lipschitz targets).
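A toy sketch of the α/β trade-off described above, assuming a squared-error form for both terms; the exact Eq. 9 is in the supplementary material, and the per-layer values below are illustrative placeholders only.

```python
# Hypothetical illustration of the weighting logic: the first term pulls each
# layer's Lipschitz estimate λ_k,1 toward its target c_k, the second keeps the
# targets themselves small. With α > β the first term dominates.
import numpy as np

def rehearsal_reg(lam, c, alpha, beta):
    term1 = np.sum((lam - c) ** 2)   # bring λ_k,1 and c_k close together
    term2 = np.sum(c ** 2)           # induce small Lipschitz targets
    return alpha * term1 + beta * term2

lam = np.array([2.0, 1.5, 3.0])      # per-layer Lipschitz estimates (toy)
c = np.array([1.0, 1.0, 1.0])        # per-layer targets (toy)
print(rehearsal_reg(lam, c, alpha=1.0, beta=0.1))
```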







SABR: A Stable Adaptive Bitrate Framework Using Behavior Cloning Pretraining and Reinforcement Learning Fine-Tuning

Luo, Pengcheng, Zhao, Yunyang, Zhang, Bowen, Yang, Genke, Soong, Boon-Hee, Yuen, Chau

arXiv.org Artificial Intelligence

With the advent of 5G, the internet has entered a new video-centric era. From short-video platforms like TikTok to long-video platforms like Bilibili, online video services are reshaping user consumption habits. Adaptive Bitrate (ABR) control is widely recognized as a critical factor influencing Quality of Experience (QoE). Recent learning-based ABR methods have attracted increasing attention. However, most of them rely on limited network trace sets during training and overlook the wide-distribution characteristics of real-world network conditions, resulting in poor generalization in out-of-distribution (OOD) scenarios. To address this limitation, we propose SABR, a training framework that combines behavior cloning (BC) pretraining with reinforcement learning (RL) fine-tuning. We also introduce benchmarks, ABRBench-3G and ABRBench-4G+, which provide wide-coverage training traces and dedicated OOD test sets for assessing robustness to unseen network conditions. Experimental results demonstrate that SABR achieves the best average rank compared with Pensieve, Comyco, and NetLLM across the proposed benchmarks. These results indicate that SABR enables more stable learning across wide distributions and improves generalization to unseen network conditions.
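A schematic of the two-stage recipe the abstract describes, behavior cloning pretraining followed by reinforcement-learning fine-tuning, assuming a discrete bitrate action space. The network, data, and reward signal below are stand-ins, not SABR's actual architecture, traces, or QoE objective.

```python
# Schematic two-stage loop: behavior cloning (BC) pretraining on expert
# bitrate decisions, then REINFORCE-style policy-gradient fine-tuning.
# All sizes, data, and the reward are placeholder assumptions.
import torch
import torch.nn as nn

N_FEATURES, N_BITRATES = 8, 6
policy = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                       nn.Linear(64, N_BITRATES))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Stage 1: BC pretraining -- cross-entropy against expert actions.
states = torch.randn(256, N_FEATURES)            # toy network-state features
expert_actions = torch.randint(0, N_BITRATES, (256,))
for _ in range(100):
    loss = nn.functional.cross_entropy(policy(states), expert_actions)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: RL fine-tuning -- policy gradient on a QoE-like reward.
for _ in range(100):
    s = torch.randn(32, N_FEATURES)
    dist = torch.distributions.Categorical(logits=policy(s))
    a = dist.sample()
    reward = -torch.abs(a.float() - 3.0)         # stand-in QoE signal
    loss = -(dist.log_prob(a) * reward).mean()   # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()
```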