TurboAE
Response to Reviewer 1
We thank the reviewers for their insightful comments. Please find the detailed responses below. TurboAE has complexity linear in the block length, in both runtime and computation. In the revision, we run TurboAE and traditional turbo codes on both GPU and CPU. We agree with the reviewer that the original statement was confusing and misleading.
Component Training of Turbo Autoencoders
Clausius, Jannis, Geiselhart, Marvin, Brink, Stephan ten
Isolated training with Gaussian priors (TGP) of the component autoencoders of turbo-autoencoder architectures enables faster, more consistent training and better generalization to arbitrary decoding iterations than training based on deep unfolding. We propose fitting the components to a desired behavior via extrinsic information transfer (EXIT) charts, which enables scaling to larger message lengths ($k \approx 1000$) while retaining competitive performance. To the best of our knowledge, this is the first autoencoder that performs close to classical codes in this regime. Although the binary cross-entropy (BCE) loss function optimizes the bit error rate (BER) of the components, the design via EXIT charts makes it possible to focus on the block error rate (BLER). In serially concatenated systems, the component-wise TGP approach is well known for inner components with a fixed outer binary interface, e.g., a learned inner code or equalizer combined with an outer binary error-correcting code. In this paper, we extend the component training to structures with an inner and an outer autoencoder, where we propose a new 1-bit quantization strategy for the encoder outputs based on the underlying communication problem. Finally, we discuss the model complexity of the learned components during design time (training) and inference, and show that the number of weights in the encoder can be reduced by 99.96%.
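For intuition, the sketch below shows how Gaussian a priori information can be synthesized for isolated component training. It assumes the consistent-Gaussian LLR model standard in EXIT-chart analysis ($\mu_A = \sigma_A^2 / 2$); the function name `gaussian_prior_llrs` and the parameter choices are our own illustration, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_prior_llrs(bits, sigma_a):
    # Consistent Gaussian a priori LLRs as used in EXIT analysis:
    #   L_A = mu_a * x + n,  n ~ N(0, sigma_a^2),  mu_a = sigma_a^2 / 2,
    # where x = 1 - 2*bits is the BPSK mapping of the true bits.
    x = 1.0 - 2.0 * bits
    mu_a = sigma_a ** 2 / 2.0
    return mu_a * x + sigma_a * rng.standard_normal(bits.shape)

# During isolated (TGP) training, a component decoder is fed channel
# observations plus these synthetic priors in place of the extrinsic
# messages of the other component; sweeping sigma_a exposes the
# component to every operating point of the EXIT chart.
bits = rng.integers(0, 2, size=1000)
llr_a = gaussian_prior_llrs(bits, sigma_a=1.5)
```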
ProductAE: Towards Training Larger Channel Codes based on Neural Product Codes
Jamali, Mohammad Vahid, Saber, Hamid, Hatami, Homayoon, Bae, Jung Hyun
There have been significant research activities in recent years to automate the design of channel encoders and decoders via deep learning. Due to the dimensionality challenge in channel coding, it is prohibitively complex to design and train relatively large neural channel codes via deep learning techniques. Consequently, most of the results in the literature are limited to relatively short codes having less than 100 information bits. In this paper, we construct ProductAEs, a computationally efficient family of deep-learning-driven (encoder, decoder) pairs, which aim at enabling the training of relatively large channel codes (both encoders and decoders) with a manageable training complexity. We build upon the ideas from classical product codes and propose constructing large neural codes from smaller code components. More specifically, instead of directly training the encoder and decoder for a large neural code of dimension $k$ and blocklength $n$, we provide a framework that requires only training neural encoders and decoders for code parameters $(n_1,k_1)$ and $(n_2,k_2)$ such that $n_1 n_2 = n$ and $k_1 k_2 = k$. Our training results show significant gains, over all ranges of signal-to-noise ratio (SNR), for a code of parameters $(225,100)$ and a moderate-length code of parameters $(441,196)$, over polar codes under successive cancellation (SC) decoding. Moreover, our results demonstrate meaningful gains over the Turbo Autoencoder (TurboAE) and state-of-the-art classical codes. This is the first work to design product autoencoders and a pioneering work on training large channel codes.
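To make the construction concrete, here is a minimal NumPy sketch of the product-encoding structure the abstract describes, using $k_1 = k_2 = 10$ and $n_1 = n_2 = 15$ to match the $(225,100)$ example. The random linear maps stand in for the learned neural component encoders; the names `encode1`/`encode2` and the linear placeholders are our own assumptions, and only the reshape-then-encode-rows-then-columns pattern reflects the ProductAE idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholder component encoders. In ProductAE these are
# trained neural networks; here each is a random linear map, used only
# to make the tensor shapes of the construction explicit.
k1, n1 = 10, 15   # first component code:  (n1, k1)
k2, n2 = 10, 15   # second component code: (n2, k2)
G1 = rng.standard_normal((k1, n1))
G2 = rng.standard_normal((k2, n2))

def encode1(blocks):          # (batch, k1) -> (batch, n1)
    return blocks @ G1

def encode2(blocks):          # (batch, k2) -> (batch, n2)
    return blocks @ G2

# Product encoding of a k = k1*k2 message into an n = n1*n2 codeword:
# reshape the message into a k1 x k2 matrix, encode the rows with one
# component, then encode the columns of the result with the other.
message = rng.integers(0, 2, size=k1 * k2).astype(float)
M = message.reshape(k1, k2)            # k1 x k2 message matrix
rows_encoded = encode2(M)              # k1 x n2 (each row -> length n2)
codeword = encode1(rows_encoded.T).T   # n1 x n2 (each column -> length n1)

assert codeword.shape == (n1, n2)      # blocklength n = n1 * n2 = 225
```

The payoff of this factorization is that each trained network only ever sees short blocks of length $k_1$ or $k_2$, which is what keeps the training complexity manageable as $k = k_1 k_2$ grows.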