Turbo-ICL: In-Context Learning-Based Turbo Equalization
Zihang Song, Matteo Zecchin, Bipin Rajendran, Osvaldo Simeone
arXiv.org Artificial Intelligence
This paper introduces a novel in-context learning (ICL) framework, inspired by large language models (LLMs), for soft-input soft-output channel equalization in coded multiple-input multiple-output (MIMO) systems. The proposed approach learns to infer posterior symbol distributions directly from a prompt of pilot signals and decoder feedback. A key innovation is the use of prompt augmentation to incorporate extrinsic information from the decoder output as additional context, enabling the ICL model to refine its symbol estimates iteratively across turbo decoding iterations. Two model variants, based on Transformer and state-space architectures, are developed and evaluated. Extensive simulations demonstrate that, when traditional linear assumptions break down, e.g., in the presence of low-resolution quantization, ICL equalizers consistently outperform conventional model-based baselines, even when the latter are provided with perfect channel state information. Results also highlight the advantage of Transformer-based models under limited training diversity, as well as the efficiency of state-space models in resource-constrained scenarios.

A. Context and Motivation

Turbo equalization iteratively exchanges soft information between the equalizer and decoder to approach near-optimal decoding performance in coded communication systems [1]. Since its introduction in the 1990s [2], numerous soft-input soft-output equalizers have been developed to implement this concept.
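The turbo principle described above, in which the equalizer and decoder exchange only extrinsic soft information across iterations, can be illustrated with a minimal sketch. Everything below is a toy assumption rather than the paper's method: BPSK symbols over a hypothetical 2-tap ISI channel, a classical soft-interference-cancellation equalizer, and a stand-in "decoder" that merely rescales log-likelihood ratios (a real system would use a full channel decoder, and the paper replaces the model-based equalizer with an ICL model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all parameters hypothetical): BPSK over a 2-tap ISI channel.
n, snr_db, tap = 200, 6.0, 0.5
bits = rng.integers(0, 2, n)
x = 1.0 - 2.0 * bits                      # BPSK: bit 0 -> +1, bit 1 -> -1
sigma2 = 10 ** (-snr_db / 10)
x_prev = np.concatenate(([0.0], x[:-1]))  # one-symbol channel memory
y = x + tap * x_prev + rng.normal(0.0, np.sqrt(sigma2), n)

def equalizer(y, prior, sigma2):
    """Soft interference cancellation: subtract the soft symbol estimate
    implied by the decoder's prior LLRs, then output extrinsic LLRs that
    do not reuse the prior on the symbol being detected."""
    soft = np.tanh(prior / 2.0)                       # E[x] under the prior
    soft_prev = np.concatenate(([0.0], soft[:-1]))
    residual_var = sigma2 + tap**2 * (1.0 - soft_prev**2)
    return 2.0 * (y - tap * soft_prev) / residual_var

def decoder(ext_in):
    # Stand-in for a real soft-input soft-output decoder; it only
    # sharpens the beliefs so the loop has something to iterate on.
    return 1.5 * ext_in

# Turbo loop: only extrinsic information crosses the interface.
prior = np.zeros(n)
for _ in range(3):
    prior = decoder(equalizer(y, prior, sigma2))

# Final hard decisions from the total (extrinsic + prior) LLR.
hard_bits = (equalizer(y, prior, sigma2) + prior < 0).astype(int)
ber = np.mean(hard_bits != bits)
print(f"BER after 3 turbo iterations: {ber:.3f}")
```

As the priors grow more reliable across iterations, the residual interference term shrinks, which is the mechanism the paper's prompt augmentation exploits: the decoder's extrinsic output is fed back as extra context for the next equalization pass.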
May-12-2025