
Collaborating Authors: Lu, Liyang


Collaborative Channel Access and Transmission for NR Sidelink and Wi-Fi Coexistence over Unlicensed Spectrum

arXiv.org Artificial Intelligence

With the rapid development of various Internet of Things (IoT) applications, including industrial IoT (IIoT) and visual IoT (VIoT), the demand for direct device-to-device communication to support high data rates continues to grow. To address this demand, 5G-Advanced has introduced sidelink communication over the unlicensed spectrum (SL-U) to increase data rates. However, the primary challenge of SL-U in the unlicensed spectrum is ensuring fair coexistence with incumbent systems such as Wi-Fi. In this paper, we address this challenge by designing channel access mechanisms and power control strategies that mitigate interference and ensure fair coexistence. First, we propose a novel collaborative channel access (CCHA) mechanism that integrates channel access with resource allocation through collaborative interactions between base stations (BS) and SL-U users. This mechanism ensures fair coexistence with incumbent systems while improving resource utilization. Second, to further enhance the performance of the coexistence system, we develop a cooperative subgoal-based hierarchical deep reinforcement learning (C-GHDRL) framework. The framework enables SL-U users to make globally optimal decisions by leveraging cooperative operations between the BS and SL-U users, effectively overcoming the limitations of traditional optimization methods in solving joint optimization problems with nonlinear constraints. Finally, we mathematically model the joint channel access and power control problem and balance the trade-off between fairness and transmission rate in the coexistence system by defining a suitable reward function in the C-GHDRL algorithm. Simulation results demonstrate that the proposed scheme significantly enhances the performance of the coexistence system while ensuring fair coexistence between SL-U and Wi-Fi users.
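As a concrete illustration of the fairness-rate trade-off that the abstract says the C-GHDRL reward function must balance, the sketch below shows one possible per-step reward in Python. The weighting factor alpha, the rate normalization, and the use of Jain's fairness index are assumptions made for illustration; the paper's actual reward definition may differ.

```python
# Minimal sketch of a reward balancing SL-U throughput against fairness with
# Wi-Fi, in the spirit of the C-GHDRL reward described in the abstract.
# `alpha` and Jain's fairness index are illustrative assumptions.
import numpy as np


def jains_fairness(throughputs: np.ndarray) -> float:
    """Jain's fairness index in [1/n, 1]; 1 means perfectly equal shares."""
    total = throughputs.sum()
    if total == 0.0:
        return 0.0
    return total ** 2 / (len(throughputs) * np.square(throughputs).sum())


def coexistence_reward(slu_rates: np.ndarray,
                       wifi_rates: np.ndarray,
                       alpha: float = 0.5) -> float:
    """Reward = alpha * normalized SL-U sum rate + (1 - alpha) * fairness.

    `slu_rates` and `wifi_rates` are per-user throughputs (bit/s) observed
    after the agent's channel-access and power-control actions.
    """
    sum_rate = slu_rates.sum()
    norm_rate = sum_rate / (sum_rate + wifi_rates.sum() + 1e-9)
    fairness = jains_fairness(np.concatenate([slu_rates, wifi_rates]))
    return alpha * norm_rate + (1.0 - alpha) * fairness


# Example: reward for one decision step with two SL-U and two Wi-Fi users
r = coexistence_reward(np.array([12e6, 9e6]), np.array([10e6, 11e6]))
```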


Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization

arXiv.org Artificial Intelligence

This paper presents Z-Code++, a new pre-trained language model optimized for abstractive text summarization. The model extends the state-of-the-art encoder-decoder architecture using three techniques. First, we use a two-phase pre-training process to improve the model's performance on low-resource summarization tasks. The model is first pre-trained on text corpora for language understanding, and then continually pre-trained on summarization corpora for grounded text generation. Second, we replace the self-attention layers in the encoder with disentangled attention layers, where each word is represented using two vectors that encode its content and position, respectively. Third, we use fusion-in-encoder, a simple yet effective method for encoding long sequences in a hierarchical manner. Z-Code++ creates a new state of the art on 9 of 13 text summarization tasks across 5 languages. Our model is parameter-efficient in that it outperforms the 600x larger PaLM-540B on XSum and the fine-tuned 200x larger GPT3-175B on SAMSum. In zero-shot and few-shot settings, our model substantially outperforms the competing models.
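To make the fusion-in-encoder idea concrete, here is a minimal PyTorch sketch in which a long input is split into fixed-size chunks, each chunk is encoded locally, and the chunk outputs are then fused by a few global encoder layers. The chunk size, layer counts, and the generic nn.TransformerEncoderLayer are illustrative assumptions, not the actual Z-Code++ architecture (which also replaces standard self-attention with disentangled attention).

```python
# Hierarchical encoding of a long sequence: local per-chunk encoding followed
# by global layers that fuse the chunks. Hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionInEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=4, chunk_size=128,
                 n_local_layers=4, n_global_layers=2):
        super().__init__()
        local_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        fuse_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.local = nn.TransformerEncoder(local_layer, n_local_layers)
        self.fuse = nn.TransformerEncoder(fuse_layer, n_global_layers)
        self.chunk_size = chunk_size

    def forward(self, x):                        # x: (batch, seq_len, d_model)
        b, n, d = x.shape
        pad = (-n) % self.chunk_size             # pad to a whole number of chunks
        x = F.pad(x, (0, 0, 0, pad))
        chunks = x.reshape(-1, self.chunk_size, d)
        local = self.local(chunks)               # encode each chunk independently
        fused = local.reshape(b, n + pad, d)     # reassemble the full sequence
        return self.fuse(fused)[:, :n]           # global layers fuse the chunks


# Example: encode a batch of two 1000-token sequences of 256-dim embeddings
out = FusionInEncoder()(torch.randn(2, 1000, 256))
```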


Scalable and Efficient MoE Training for Multitask Multilingual Models

arXiv.org Artificial Intelligence

Mixture of Experts (MoE) models are an emerging class of sparsely activated deep learning models whose compute cost grows sublinearly with their parameter count. In contrast with dense models, the sparse architecture of MoE offers opportunities for drastically growing model size with significant accuracy gains while consuming a much lower compute budget. However, supporting large-scale MoE training also brings its own set of system and modeling challenges. To overcome the challenges and embrace the opportunities of MoE, we first develop a system capable of scaling MoE models efficiently to trillions of parameters. It combines multi-dimensional parallelism and heterogeneous memory technologies harmoniously with MoE to enable 8x larger models on the same hardware compared with existing work. Besides boosting system efficiency, we also present new training methods to improve MoE sample efficiency and leverage an expert pruning strategy to improve inference-time efficiency. By combining the efficient system and training methods, we are able to significantly scale up large multitask multilingual models for language generation, resulting in a substantial improvement in model accuracy. A model trained with 10 billion parameters on 50 languages can achieve state-of-the-art performance in Machine Translation (MT) and multilingual natural language generation tasks. The system support for efficient MoE training has been implemented and open-sourced as part of the DeepSpeed library.
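As a toy illustration of why sparsely activated MoE layers keep compute sublinear in parameter count, the sketch below implements a top-1-routed mixture of experts in PyTorch: each token activates only one expert feed-forward block, so adding experts grows parameters without growing per-token FLOPs. The layer sizes and the simple softmax router are assumptions for illustration and do not reflect the DeepSpeed MoE implementation mentioned in the abstract.

```python
# Minimal sparsely activated MoE layer with top-1 routing. Illustrative only;
# not the DeepSpeed implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Top1MoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x):                        # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)
        weight, idx = gate.max(dim=-1)           # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e                      # tokens routed to expert e
            if mask.any():
                out[mask] = weight[mask].unsqueeze(-1) * expert(x[mask])
        return out


# Example: route 16 tokens through the sparse layer
y = Top1MoE()(torch.randn(16, 512))
```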