In-Context Learning for Non-Stationary MIMO Equalization

Jiang, Jiachen, Qin, Zhen, Zhu, Zhihui

arXiv.org Artificial Intelligence

Channel equalization is fundamental for mitigating distortions such as frequency-selective fading and inter-symbol interference. Unlike standard supervised learning approaches that require costly retraining or fine-tuning for each new task, in-context learning (ICL) adapts to new channels at inference time with only a few examples. However, existing ICL-based equalizers are primarily developed for and evaluated on static channels within the context window. Indeed, to our knowledge, prior principled analyses and theoretical studies of ICL focus exclusively on the stationary setting, where the function remains fixed within the context. In this paper, we investigate the ability of ICL to address non-stationary problems through the lens of time-varying channel equalization. We employ a principled framework for designing efficient attention mechanisms with improved adaptivity in non-stationary tasks, leveraging algorithms from adaptive signal processing to guide better designs. For example, new attention variants can be derived from the Least Mean Square (LMS) adaptive algorithm, a Least Root Mean Square (LRMS) formulation for enhanced robustness, or multi-step gradient updates for improved long-term tracking. Experimental results demonstrate that ICL holds strong promise for non-stationary MIMO equalization, and that attention mechanisms inspired by classical adaptive algorithms can substantially enhance adaptability and performance in dynamic environments. Our findings may provide critical insights for developing next-generation wireless foundation models with stronger adaptability and robustness.
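The LMS recursion referenced in this abstract is the classical adaptive-filtering baseline. As a point of reference (not the paper's attention variant), here is a minimal NumPy sketch of an LMS-adapted linear equalizer tracking a slowly varying two-tap channel; the filter length, step size, and channel model are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_equalizer(rx, pilots, num_taps=5, mu=0.05):
    """Classical complex LMS adaptive linear equalizer.

    rx:     received samples (complex)
    pilots: known transmitted symbols aligned with rx
    Returns the equalized symbol estimates y[n] = w^H x[n].
    """
    w = np.zeros(num_taps, dtype=complex)      # equalizer taps
    out = np.zeros(len(rx), dtype=complex)
    for n in range(num_taps - 1, len(rx)):
        x = rx[n - num_taps + 1:n + 1][::-1]   # tap-delay-line input
        y = w.conj() @ x                       # filter output
        e = pilots[n] - y                      # error vs. known pilot
        w += mu * e.conj() * x                 # stochastic-gradient step
        out[n] = y
    return out

# Toy example: BPSK through a slowly time-varying two-tap channel plus noise.
N = 2000
s = rng.choice([-1.0, 1.0], size=N).astype(complex)
isi = 0.4 * np.sin(2 * np.pi * np.arange(N) / N)   # drifting ISI tap
rx = s + np.concatenate([[0.0 + 0.0j], isi[1:] * s[:-1]])
rx = rx + 0.05 * rng.standard_normal(N)
est = lms_equalizer(rx, s)
ber = np.mean(np.sign(est[500:].real) != s[500:].real)  # after convergence
```

The update `w += mu * e.conj() * x` is the standard complex-LMS gradient step for the cost |d - w^H x|^2; the sinusoidally drifting ISI tap is a simple stand-in for the non-stationary channels the paper targets.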


Semantic Channel Equalization Strategies for Deep Joint Source-Channel Coding

Pannacci, Lorenzo, Fiorellino, Simone, Pandolfo, Mario Edoardo, Strinati, Emilio Calvanese, Di Lorenzo, Paolo

arXiv.org Artificial Intelligence

Deep joint source-channel coding (DeepJSCC) has emerged as a powerful paradigm for end-to-end semantic communications, jointly learning to compress and protect task-relevant features over noisy channels. However, existing DeepJSCC schemes assume a shared latent space at transmitter (TX) and receiver (RX) - an assumption that fails in multi-vendor deployments where encoders and decoders cannot be co-trained. This mismatch introduces "semantic noise", degrading reconstruction quality and downstream task performance. In this paper, we systematize and evaluate methods for semantic channel equalization for DeepJSCC, introducing an additional processing stage that aligns heterogeneous latent spaces under both physical and semantic impairments. We investigate three classes of aligners: (i) linear maps, which admit closed-form solutions; (ii) lightweight neural networks, offering greater expressiveness; and (iii) a Parseval-frame equalizer, which operates in zero-shot mode without the need for training. Through extensive experiments on image reconstruction over AWGN and fading channels, we quantify trade-offs among complexity, data efficiency, and fidelity, providing guidelines for deploying DeepJSCC in heterogeneous AI-native wireless networks.


Novel Phase-Noise-Tolerant Variational-Autoencoder-Based Equalization Suitable for Space-Division-Multiplexed Transmission

Lauinger, Vincent, Schmitz, Lennart, Matalla, Patrick, Rode, Andrej, Randel, Sebastian, Schmalen, Laurent

arXiv.org Artificial Intelligence

We demonstrate the effectiveness of a novel phase-noise-tolerant, variational-autoencoder-based equalization scheme for space-division-multiplexed (SDM) transmission in an experiment over 150km of randomly-coupled multi-core fibers.


RIS-aided Latent Space Alignment for Semantic Channel Equalization

Hüttebräucker, Tomás, Pandolfo, Mario Edoardo, Fiorellino, Simone, Strinati, Emilio Calvanese, Di Lorenzo, Paolo

arXiv.org Artificial Intelligence

Semantic communication systems introduce a new paradigm in wireless communications, focusing on transmitting the intended meaning rather than ensuring strict bit-level accuracy. These systems often rely on Deep Neural Networks (DNNs) to learn and encode meaning directly from data, enabling more efficient communication. However, in multi-user settings where interacting agents are trained independently, without shared context or joint optimization, divergent latent representations across AI-native devices can lead to semantic mismatches, impeding mutual understanding even in the absence of traditional transmission errors. In this work, we address semantic mismatch in Multiple-Input Multiple-Output (MIMO) channels by proposing a joint physical and semantic channel equalization framework that leverages the presence of Reconfigurable Intelligent Surfaces (RIS). The semantic equalization is implemented as a sequence of transformations: (i) a pre-equalization stage at the transmitter; (ii) propagation through the RIS-aided channel; and (iii) a post-equalization stage at the receiver. We formulate the problem as a constrained Minimum Mean Squared Error (MMSE) optimization and propose two solutions: (i) a linear semantic equalization chain, and (ii) a non-linear DNN-based semantic equalizer. Both methods are designed to operate under semantic compression in the latent space and adhere to transmit power constraints. Through extensive evaluations, we show that the proposed joint equalization strategies consistently outperform conventional, disjoint approaches to physical and semantic channel equalization across a broad range of scenarios and wireless channel conditions. Index Terms: Semantic communications, latent space alignment, reconfigurable intelligent surfaces, 6G.
For the last seven decades, communication systems have been designed with the main objective of reliably transmitting symbols through noisy communication channels, typically disregarding the interpretation and impact of these symbols upon reception. Following this principle, communication networks have achieved significant advancements in bit transmission rate and reliability, fundamental metrics for data-centric applications such as video and audio streaming, where communication itself is the primary objective.
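As background for the MMSE formulation above, the unconstrained linear MMSE (Wiener) equalizer for a conventional MIMO channel y = Hx + n admits a closed form, W = (H^H H + sigma^2 I)^{-1} H^H. A minimal sketch with an illustrative 4x4 channel follows; the paper's RIS-aided, power-constrained joint problem is considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(1)

def mmse_equalizer(H, sigma2):
    """Linear MMSE equalizer for y = H x + n, assuming unit-power
    symbols and noise variance sigma2 per receive antenna."""
    nt = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(nt), H.conj().T)

# Toy 4x4 MIMO example with QPSK symbols.
nt, nr, sigma2 = 4, 4, 0.01
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
x = (rng.choice([-1, 1], nt) + 1j * rng.choice([-1, 1], nt)) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + n
x_hat = mmse_equalizer(H, sigma2) @ y   # soft symbol estimates
```

As sigma2 goes to zero the MMSE solution reduces to the zero-forcing pseudo-inverse, which is one way to sanity-check the implementation.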


Deep Reinforcement Learning-Based DRAM Equalizer Parameter Optimization Using Latent Representations

Usama, Muhammad, Chang, Dong Eui

arXiv.org Artificial Intelligence

Equalizer parameter optimization for signal integrity in high-speed Dynamic Random Access Memory systems is crucial but often computationally demanding or model-reliant. This paper introduces a data-driven framework employing learned latent signal representations for efficient signal integrity evaluation, coupled with a model-free Advantage Actor-Critic reinforcement learning agent for parameter optimization. The latent representation captures vital signal integrity features, offering a fast alternative to direct eye diagram analysis during optimization, while the reinforcement learning agent derives optimal equalizer settings without explicit system models. Applied to industry-standard Dynamic Random Access Memory waveforms, the method achieved significant eye-opening window area improvements: 42.7% for cascaded Continuous-Time Linear Equalizer and Decision Feedback Equalizer structures, and 36.8% for Decision Feedback Equalizer-only configurations. These results demonstrate superior performance, computational efficiency, and robust generalization across diverse Dynamic Random Access Memory units compared to existing techniques. Core contributions include an efficient latent signal integrity metric for optimization, a robust model-free reinforcement learning strategy, and validated superior performance for complex equalizer architectures.


In-Context Learning for Gradient-Free Receiver Adaptation: Principles, Applications, and Theory

Zecchin, Matteo, Raviv, Tomer, Kalathil, Dileep, Narayanan, Krishna, Shlezinger, Nir, Simeone, Osvaldo

arXiv.org Artificial Intelligence

In recent years, deep learning has facilitated the creation of wireless receivers capable of functioning effectively in conditions that challenge traditional model-based designs. Leveraging programmable hardware architectures, deep learning-based receivers offer the potential to dynamically adapt to varying channel environments. However, current adaptation strategies, including joint training, hypernetwork-based methods, and meta-learning, either demonstrate limited flexibility or necessitate explicit optimization through gradient descent. This paper presents gradient-free adaptation techniques rooted in the emerging paradigm of in-context learning (ICL). We review architectural frameworks for ICL based on Transformer models and structured state-space models (SSMs), alongside theoretical insights into how sequence models effectively learn adaptation from contextual information. Further, we explore the application of ICL to cell-free massive MIMO networks, providing both theoretical analyses and empirical evidence. Our findings indicate that ICL represents a principled and efficient approach to real-time receiver adaptation using pilot signals and auxiliary contextual information, without requiring online retraining.


Turbo-ICL: In-Context Learning-Based Turbo Equalization

Song, Zihang, Zecchin, Matteo, Rajendran, Bipin, Simeone, Osvaldo

arXiv.org Artificial Intelligence

This paper introduces a novel in-context learning (ICL) framework, inspired by large language models (LLMs), for soft-input soft-output channel equalization in coded multiple-input multiple-output (MIMO) systems. The proposed approach learns to infer posterior symbol distributions directly from a prompt of pilot signals and decoder feedback. A key innovation is the use of prompt augmentation to incorporate extrinsic information from the decoder output as additional context, enabling the ICL model to refine its symbol estimates iteratively across turbo decoding iterations. Two model variants, based on Transformer and state-space architectures, are developed and evaluated. Extensive simulations demonstrate that, when traditional linear assumptions break down, e.g., in the presence of low-resolution quantization, ICL equalizers consistently outperform conventional model-based baselines, even when the latter are provided with perfect channel state information. Results also highlight the advantage of Transformer-based models under limited training diversity, as well as the efficiency of state-space models in resource-constrained scenarios. Turbo equalization iteratively exchanges soft information between the equalizer and decoder to approach near-optimal decoding performance in coded communication systems [1]. Since its introduction in the 1990s [2], numerous soft-input soft-output equalizers have been developed to implement this concept.


Non-linear Equalization in 112 Gb/s PONs Using Kolmogorov-Arnold Networks

Fischer, Rodrigo, Matalla, Patrick, Randel, Sebastian, Schmalen, Laurent

arXiv.org Artificial Intelligence

Passive optical networks (PONs) currently serve the majority of fiber broadband subscribers worldwide, and an ongoing demand for bandwidth has led to recent standardization efforts that enabled 50 Gb/s line rate transmission [1], while the research community is investigating the technologies that will enable PONs beyond 100 Gb/s [2]. One possibility for achieving 100 Gb/s is the use of higher-order modulation formats in intensity-modulation and direct-detection (IM/DD) links. However, this comes at the cost of an increased signal-to-noise ratio (SNR) requirement and lower tolerance to non-linearities in the channel. In a PON, the semiconductor optical amplifiers (SOAs) used to improve the receiver sensitivity suffer from non-linear gain saturation, and the electro-absorption modulator (EAM) responsible for modulating the intensity of the optical signal has a non-linear transfer function.


Recent Advances on Machine Learning-aided DSP for Short-reach and Long-haul Optical Communications

Schmalen, Laurent, Lauinger, Vincent, Ney, Jonas, Wehn, Norbert, Matalla, Patrick, Randel, Sebastian, von Bank, Alexander, Edelmann, Eike-Manuel

arXiv.org Artificial Intelligence

The recent rise of machine learning (ML) is mostly due to the success of neural networks (NNs) and in particular the technique of deep learning [1]. Deep learning and the accompanying software tools have also found their way into optical communications and are now indispensable tools in the field; ML is now commonly used in all parts of fiber-optical communication networks [2]. ML is already widely used for parameter estimation in optical networks, with the goal of configuring optical network links. Due to their capacity as universal function approximators, ML algorithms and in particular NNs are also often used in the physical layer to replace suboptimal or overly complex digital signal processing (DSP) algorithms in the receiver or transmitter. The use of ML to replace parts of the transmitter or receiver, e.g., as DSP algorithms or to support forward error correction (FEC) decoding, still poses many research challenges, despite the benefits we already see. In particular, standard out-of-the-box ML solutions typically have higher computational complexity than conventional, optimized algorithms. Due to the enormous data rates at which optical communication systems operate, complexity is a major concern. The parallel structure of NNs can lead to straightforward parallelization (as in the ubiquitous graphics processing unit (GPU) implementations), which makes them attractive for implementation in optical transceivers. A future challenge will be the development of ultra-low-complexity hardware platforms with low power dissipation that can be used in highly integrated, high-speed optical transceivers.


Geometric Clustering for Hardware-Efficient Implementation of Chromatic Dispersion Compensation

Gomes, Geraldo, Freire, Pedro, Prilepsky, Jaroslaw E., Turitsyn, Sergei K.

arXiv.org Artificial Intelligence

Power efficiency remains a significant challenge in modern optical fiber communication systems, driving efforts to reduce the computational complexity of digital signal processing, particularly in chromatic dispersion compensation (CDC) algorithms. While various strategies for complexity reduction have been proposed, many lack the necessary hardware implementation to validate their benefits. This paper provides a theoretical analysis of the tap overlapping effect in CDC filters for coherent receivers, introduces a novel Time-Domain Clustered Equalizer (TDCE) technique based on this concept, and presents a Field-Programmable Gate Array (FPGA) implementation for validation. We developed an innovative parallelization method for TDCE, implementing it in hardware for fiber lengths up to 640 km. A fair comparison with the state-of-the-art frequency domain equalizer (FDE) under identical conditions is also conducted. Our findings highlight that implementation strategies, including parallelization and memory management, are as crucial as computational complexity in determining hardware complexity and energy efficiency. The proposed TDCE hardware implementation achieves up to 70.7% energy savings and 71.4% multiplier usage savings compared to FDE, despite its higher computational complexity.
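For context on the FDE baseline discussed above: in its simplest one-shot form, frequency-domain CDC multiplies the received spectrum by the inverse of the fiber's quadratic all-pass phase response. A minimal sketch with illustrative parameters follows; a practical FDE processes blocks with overlap-save, and the sign of the phase depends on the Fourier convention used:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def cdc_freq_domain(rx, fs, length_km, D_ps_nm_km=17.0, lam_nm=1550.0):
    """One-shot frequency-domain chromatic dispersion compensation:
    undo the fiber's all-pass response exp(-j*pi*lam^2*D*L*f^2/c)."""
    D = D_ps_nm_km * 1e-6          # ps/(nm*km) -> s/m^2
    L = length_km * 1e3            # km -> m
    lam = lam_nm * 1e-9            # nm -> m
    f = np.fft.fftfreq(len(rx), d=1.0 / fs)
    phase = np.pi * lam**2 * D * L / C * f**2
    return np.fft.ifft(np.fft.fft(rx) * np.exp(1j * phase))

# Round trip: disperse a random waveform over 640 km, then compensate.
rng = np.random.default_rng(2)
fs = 64e9                          # sampling rate, Hz
tx = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
f = np.fft.fftfreq(len(tx), d=1.0 / fs)
fiber = np.exp(-1j * np.pi * (1550e-9)**2 * 17e-6 * 640e3 / C * f**2)
rx = np.fft.ifft(np.fft.fft(tx) * fiber)
out = cdc_freq_domain(rx, fs, length_km=640)
```

Because CDC is a pure phase rotation, the compensator is an exact inverse of the simulated fiber here; the hardware trade-offs the paper studies (FFT size, overlap, memory) only arise once this is mapped to block-wise processing.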