Effects of Stimulus Type and of Error-Correcting Code Design on BCI Speller Performance

Neural Information Processing Systems

From an information-theoretic perspective, a noisy transmission system such as a visual Brain-Computer Interface (BCI) speller could benefit from the use of error-correcting codes. However, optimizing the code solely according to the maximal minimum-Hamming-distance criterion tends to increase the overall frequency of target stimuli, and hence to significantly reduce the average target-to-target interval (TTI), leading to difficulties in classifying the individual event-related potentials (ERPs) due to overlap and refractory effects. Clearly, any change to the stimulus setup must also respect the possible psychophysiological consequences. Here we report new EEG data from experiments in which we explore stimulus types and codebooks in a within-subject design, finding an interaction between the two factors. Our data demonstrate that the traditional row-column code has particular spatial properties that lead to better performance than one would expect from its TTIs and Hamming distances alone, but nonetheless error-correcting codes can improve performance provided the right stimulus type is used.
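
To make the trade-off concrete, the sketch below computes the two quantities the abstract weighs against each other, minimum Hamming distance and target-to-target interval, for a standard row-column codebook. The 6x6 grid, the random flash order, and all function names are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np
from itertools import combinations

def row_column_codebook(rows=6, cols=6):
    """One codeword per character over a sequence of rows+cols flashes:
    bit i is 1 iff flash i illuminates that character."""
    C = np.zeros((rows * cols, rows + cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            C[r * cols + c, r] = 1          # row flash
            C[r * cols + c, rows + c] = 1   # column flash
    return C

def min_hamming(C):
    return min(int(np.sum(a != b)) for a, b in combinations(C, 2))

def mean_tti(codeword, flash_order):
    """Mean gap (in flashes) between successive target flashes,
    given the order in which the flashes are presented."""
    hits = [t for t, f in enumerate(flash_order) if codeword[f]]
    return np.mean(np.diff(hits)) if len(hits) > 1 else np.inf

C = row_column_codebook()
rng = np.random.default_rng(0)
order = rng.permutation(C.shape[1])
print("min Hamming distance:", min_hamming(C))   # 2 for row-column
print("mean TTI of first character:", mean_tti(C[0], order))
```

The minimum Hamming distance of the row-column code is only 2 (two characters sharing a row differ only in their two column bits), which is exactly the weakness a max-min-distance code would fix, at the cost of codewords with more 1s and hence more frequent target flashes and shorter TTIs.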


A Novel Deep Neural Network Based Approach for Sparse Code Multiple Access

arXiv.org Machine Learning

Sparse code multiple access (SCMA) is a non-orthogonal multiple access (NOMA) scheme that aims to support the high spectral efficiency and ubiquitous access requirements of 5G wireless communication networks. Conventional SCMA approaches face considerable challenges in designing low-complexity, high-accuracy decoding algorithms and in constructing optimal codebooks. Fortunately, the recently spotlighted deep learning technologies hold significant potential for solving many communication engineering problems. Inspired by this, we explore approaches to improve SCMA performance with the help of deep learning methods. We propose and train a deep neural network (DNN) called DL-SCMA to learn to decode SCMA modulated signals corrupted by additive white Gaussian noise (AWGN). Putting encoding and decoding together, an autoencoder called AE-SCMA is established and trained to generate optimal SCMA codewords and reconstruct the original bits. Furthermore, by manipulating the mapping vectors, the autoencoder is able to generalize SCMA, yielding a proposed dense code multiple access (DCMA) scheme. Simulations show that the DNN SCMA decoder significantly outperforms the conventional message passing algorithm (MPA) in terms of bit error rate (BER), symbol error rate (SER), and computational complexity, and that AE-SCMA also achieves better performance by constructing better SCMA codebooks. The performance of deep-learning-aided DCMA is superior to that of SCMA.
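
As a rough illustration of the decoder-learning idea (not the DL-SCMA architecture itself), the following minimal PyTorch sketch trains an MLP to recover bits from AWGN-corrupted codewords. The random codebook, layer sizes, and SNR are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Toy stand-in for a learned SCMA decoder: an MLP maps AWGN-corrupted
# codewords back to the transmitted bits. The codebook here is random,
# not an actual SCMA codebook.
torch.manual_seed(0)
K, N = 4, 8                        # bits per symbol, real dims per codeword
codebook = torch.randn(2 ** K, N)  # hypothetical codebook

def batch(n, snr_db=6.0):
    idx = torch.randint(0, 2 ** K, (n,))
    bits = ((idx.unsqueeze(1) >> torch.arange(K)) & 1).float()
    x = codebook[idx]
    sigma = (10 ** (-snr_db / 20)) * x.std()
    return x + sigma * torch.randn_like(x), bits

decoder = nn.Sequential(nn.Linear(N, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, K))
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    y, bits = batch(256)
    loss = loss_fn(decoder(y), bits)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    y, bits = batch(10000)
    ber = ((decoder(y) > 0).float() != bits).float().mean()
print(f"bit error rate at 6 dB (toy setup): {ber.item():.4f}")
```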


Deep Learning-Based Decoding for Constrained Sequence Codes

arXiv.org Machine Learning

Constrained sequence codes have been widely used in modern communication and data storage systems. Sequences encoded with constrained sequence codes satisfy constraints imposed by the physical channel, enabling efficient and reliable transmission of coded symbols. Traditional encoding and decoding of constrained sequence codes rely on table look-up, which is vulnerable to errors that occur during transmission. In this paper, we introduce constrained sequence decoding based on deep learning. With multilayer perceptron (MLP) networks and convolutional neural networks (CNNs), we achieve low bit error rates close to those of maximum a posteriori probability (MAP) decoding, while also improving system throughput. Moreover, the implementation of capacity-achieving fixed-length codes, whose complexity is prohibitively high with table look-up decoding, becomes practical with deep learning-based decoding.
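
For context, the sketch below shows the classical pipeline the learned decoders are benchmarked against: table look-up encoding of a constrained code (here a DC-balanced one) followed by MAP decoding, which under AWGN reduces to nearest-codeword search. The specific 4-bit-to-6-bit balanced code is a hypothetical stand-in, not a code from the paper.

```python
import numpy as np

# Classical baseline for a constrained sequence code: table look-up
# encoding plus MAP (nearest-codeword) decoding over AWGN.
rng = np.random.default_rng(0)

# All weight-3 length-6 words are DC-balanced (equal 0s and 1s);
# keep 16 of them as codewords for 4 input bits.
balanced = [w for w in range(64) if bin(w).count("1") == 3]
codebook = np.array([[(w >> i) & 1 for i in range(6)]
                     for w in balanced[:16]], dtype=float)

def encode(msgs):
    return codebook[msgs]                        # table look-up

def map_decode(y):
    # Under AWGN, MAP decoding = nearest codeword in Euclidean distance.
    d = ((y[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

msgs = rng.integers(0, 16, size=10000)
y = encode(msgs) + 0.4 * rng.standard_normal((10000, 6))
print("symbol error rate:", np.mean(map_decode(y) != msgs))
```

A learned MLP or CNN decoder replaces map_decode with a trained network mapping the noisy length-6 observation to the 4 message bits, which is what lets the approach scale to fixed-length codes whose look-up tables would be prohibitively large.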


Error-Correcting Neural Sequence Prediction

arXiv.org Machine Learning

In this paper we propose a novel neural language modelling (NLM) method based on error-correcting output codes (ECOC), abbreviated as ECOC-NLM. This latent-variable-based approach provides a principled way to choose a varying number of latent output codes and avoids exact softmax normalization. Instead of minimizing measures between the predicted probability distribution and the true distribution, we use error-correcting codes to represent both predictions and outputs. Secondly, we propose multiple ways to improve accuracy and convergence rates by maximizing the separability between codes that correspond to classes, in proportion to word embedding similarities. Lastly, we introduce a novel method called Latent Mixture Sampling, a technique used to mitigate exposure bias that can be integrated into the training of latent-based neural language models. It involves mixing the latent codes (i.e., variables) of past predictions and past targets in one of two ways: (1) according to a predefined sampling schedule, or (2) via a differentiable sampling procedure whereby the mixing probability is learned throughout training by replacing the greedy argmax operation with a smooth approximation. In evaluating Codeword Mixture Sampling (CWMS) for ECOC-NLM, we also baseline it against CWMS in a closely related Hierarchical Softmax-based NLM.
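
The core ECOC mechanism the paper builds on, assigning each class a binary codeword, training independent bit classifiers, and decoding to the nearest codeword, can be tried directly with scikit-learn's OutputCodeClassifier. This is a generic hedged illustration on digits data, not the authors' language-modelling code.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

# ECOC classification: each of the 10 digit classes gets a random binary
# codeword (code_size * n_classes bits), one binary classifier is trained
# per bit, and prediction returns the class with the nearest codeword.
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=1.5, random_state=0)
ecoc.fit(Xtr, ytr)
print("ECOC accuracy:", ecoc.score(Xte, yte))
```

The paper's contribution beyond this generic setup is to choose the codewords so that more similar words (by embedding similarity) receive more separable codes, and to mix predicted and target codes during training to reduce exposure bias.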


Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes

Neural Information Processing Systems

In a multi-layered neural network, any one of the hidden layers can be viewed as computing a distributed representation of the input. Several "encoder" experiments have shown that when the representation space is small it can be fully used. But computing with such a representation requires completely dependable nodes. In the case where the hidden nodes are noisy and unreliable, we find that error-correcting schemes emerge simply by using noisy units during training; random errors injected during backpropagation result in spreading representations apart. Average and minimum distances increase with misfire probability, as predicted by coding-theoretic considerations. Furthermore, the effect of this noise is to protect the machine against permanent node failure, thereby potentially extending the useful lifetime of the machine.
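
A minimal reconstruction of this style of experiment, with all sizes and the misfire rate chosen arbitrarily: an autoencoder on one-hot inputs whose hidden units randomly flip during training, after which the pairwise Hamming distances of the learned binary codes can be inspected.

```python
import torch
import torch.nn as nn
from itertools import combinations

# Sketch of an "encoder" experiment: an autoencoder maps 8 one-hot
# inputs through a hidden layer whose units randomly "misfire" (flip
# toward their complement) during training. Illustrative sizes only.
torch.manual_seed(0)
N, H, p_misfire = 8, 12, 0.1
X = torch.eye(N)

enc, dec = nn.Linear(N, H), nn.Linear(H, N)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                       lr=1e-2)

for step in range(3000):
    h = torch.sigmoid(enc(X))
    flip = (torch.rand_like(h) < p_misfire).float()
    h_noisy = h * (1 - flip) + (1 - h) * flip   # unreliable hidden nodes
    loss = nn.functional.cross_entropy(dec(h_noisy), torch.arange(N))
    opt.zero_grad(); loss.backward(); opt.step()

# Threshold the hidden activations into binary codes and measure how
# far apart the representations have spread.
codes = (torch.sigmoid(enc(X)) > 0.5).int()
dists = [int((a != b).sum()) for a, b in combinations(codes, 2)]
print("min / mean pairwise Hamming distance:",
      min(dists), sum(dists) / len(dists))
```

Rerunning with p_misfire = 0.0 gives a baseline for comparison; the paper's coding-theoretic prediction is that both the minimum and the average distance grow as the misfire probability increases.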