
Collaborating Authors

Zhao, Qibin



Deep Multimodal Multilinear Fusion with High-order Polynomial Pooling

Neural Information Processing Systems

Tensor-based multimodal fusion techniques have exhibited great predictive performance. However, one limitation is that existing approaches only consider bilinear or trilinear pooling, which restricts the order of interactions and fails to unleash the full expressive power of multilinear fusion. More importantly, simply fusing features all at once ignores the complex local intercorrelations, leading to degraded predictions. In this work, we first propose a polynomial tensor pooling (PTP) block for integrating multimodal features by considering high-order moments, followed by a tensorized fully connected layer. Treating PTP as a building block, we further establish a hierarchical polynomial fusion network (HPFN) to recursively transmit local correlations into global ones.
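As a rough illustration of the idea (not the authors' implementation), the sketch below shows an order-P polynomial pooling over concatenated modality features with a CP-style low-rank weight tensor; the class name PolynomialTensorPooling, the rank, and the appended constant feature are illustrative choices.

```python
import torch
import torch.nn as nn

class PolynomialTensorPooling(nn.Module):
    """Illustrative order-P polynomial pooling with a CP-style low-rank weight tensor."""

    def __init__(self, in_dims, out_dim, order=3, rank=8):
        super().__init__()
        d = sum(in_dims) + 1  # +1: append a constant so lower-order terms survive
        # One factor matrix per interaction order; the full order-P weight tensor
        # is never formed explicitly.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(d, rank) * 0.1) for _ in range(order)]
        )
        self.head = nn.Linear(rank, out_dim)

    def forward(self, feats):
        # feats: list of (batch, d_m) tensors, one per modality
        ones = torch.ones(feats[0].size(0), 1, device=feats[0].device)
        z = torch.cat(feats + [ones], dim=1)
        out = torch.ones(z.size(0), self.head.in_features, device=z.device)
        for U in self.factors:
            out = out * (z @ U)  # Hadamard products realize the CP contraction
        return self.head(out)

# usage on random modality features
ptp = PolynomialTensorPooling(in_dims=[32, 64, 16], out_dim=10, order=3, rank=8)
feats = [torch.randn(4, 32), torch.randn(4, 64), torch.randn(4, 16)]
print(ptp(feats).shape)  # torch.Size([4, 10])
```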


Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization

arXiv.org Machine Learning

There has been increased interest in multimodal language processing, including multimodal dialog, question answering, sentiment analysis, and speech recognition. However, naturally occurring multimodal data is often imperfect as a result of imperfect modalities, missing entries, or noise corruption. To address these concerns, we present a regularization method based on tensor rank minimization. Our method is based on the observation that high-dimensional multimodal time series data often exhibit correlations across time and modalities which lead to low-rank tensor representations. However, the presence of noise or incomplete values breaks these correlations and results in tensor representations of higher rank. We design a model to learn such tensor representations and effectively regularize their rank. Experiments on multimodal language data show that our model achieves good results across various levels of imperfection.
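A minimal sketch of one common way to penalize tensor rank, assuming a convex surrogate (the sum of nuclear norms over all unfoldings) stands in for the paper's exact regularizer; the function names and the weight 1e-3 are hypothetical.

```python
import torch

def unfold(t, mode):
    """Mode-n matricization: move the given mode to the front and flatten the rest."""
    return t.movedim(mode, 0).reshape(t.shape[mode], -1)

def tensor_rank_penalty(reps):
    """Sum of nuclear norms over all unfoldings: a convex surrogate for tensor rank."""
    penalty = reps.new_zeros(())
    for mode in range(reps.dim()):
        penalty = penalty + torch.linalg.matrix_norm(unfold(reps, mode), ord="nuc")
    return penalty

# usage: penalize the rank of a (time x dim1 x dim2) block of learned representations
reps = torch.randn(20, 8, 8, requires_grad=True)
task_loss = reps.pow(2).mean()                      # stand-in for the downstream loss
loss = task_loss + 1e-3 * tensor_rank_penalty(reps)
loss.backward()
```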


AI Neurotechnology for Aging Societies -- Task-load and Dementia EEG Digital Biomarker Development Using Information Geometry Machine Learning Methods

arXiv.org Artificial Intelligence

Dementia, and especially Alzheimer's disease (AD), are the most common causes of cognitive decline in elderly people. The spread of these mental health problems in aging societies is causing a significant medical and economic burden in many countries around the world. According to a recent World Health Organization (WHO) report, it is estimated that about 47 million people worldwide currently live with a dementia spectrum of neurocognitive disorders. This number is expected to triple by 2050, which calls for the application of AI-based technologies to support early screening for preventive interventions, subsequent mental wellbeing monitoring, and maintenance with so-called digital-pharma or "beyond a pill" therapeutic approaches. This paper discusses our attempt to use brainwave (EEG) techniques to develop digital biomarkers for dementia progression detection and monitoring, and presents preliminary results. We present an information geometry-based classification approach for automatic discrimination of EEG-derived event-related responses (ERPs) to low- versus high-task-load auditory or tactile stimuli, whose amplitude and latency variabilities are similar to those observed in dementia. The discussed approach is a step toward developing AI, and especially machine learning (ML), approaches for subsequent application to mild cognitive impairment (MCI) and AD diagnostics.
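The sketch below illustrates a generic information-geometry classifier of the kind described: EEG trials are summarized by SPD covariance matrices and assigned to the class with the nearest mean under the affine-invariant Riemannian distance. The log-Euclidean mean, the matrix sizes, and the random data are simplifying assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.linalg import expm, inv, logm, sqrtm

def spd_distance(A, B):
    """Affine-invariant Riemannian distance between SPD covariance matrices."""
    A_inv_sqrt = inv(np.real(sqrtm(A)))
    return np.linalg.norm(np.real(logm(A_inv_sqrt @ B @ A_inv_sqrt)), "fro")

def log_euclidean_mean(covs):
    """Log-Euclidean mean, a cheap surrogate for the Riemannian (Karcher) mean."""
    return np.real(expm(np.mean([logm(C) for C in covs], axis=0)))

def classify(trial_cov, class_means):
    """Minimum distance to mean: assign the class whose mean covariance is closest."""
    return min(class_means, key=lambda k: spd_distance(class_means[k], trial_cov))

# usage with random SPD matrices standing in for ERP covariance features
rng = np.random.default_rng(0)
def random_spd(n=8):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

means = {"low_load": log_euclidean_mean([random_spd() for _ in range(10)]),
         "high_load": log_euclidean_mean([random_spd() for _ in range(10)])}
print(classify(random_spd(), means))
```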


Low-Rank Embedding of Kernels in Convolutional Neural Networks under Random Shuffling

arXiv.org Machine Learning

Although convolutional neural networks (CNNs) have recently become popular for various image processing and computer vision tasks, it remains challenging to reduce the storage cost of their parameters on resource-limited platforms. In previous studies, tensor decomposition (TD) has achieved promising compression performance by embedding the kernel of a convolutional layer into a low-rank subspace. However, TD has been applied naively to the kernel or its specified variants. Unlike the conventional approaches, this paper shows that the kernel can be embedded into more general, or even random, low-rank subspaces. We demonstrate this by compressing the convolutional layers via randomly-shuffled tensor decomposition (RsTD) for a standard classification task on CIFAR-10. In addition, we analyze how the spatial similarity of the training data influences the low-rank structure of the kernels. The experimental results show that the CNN can be significantly compressed even if the kernels are randomly shuffled. Furthermore, the RsTD-based method yields more stable classification accuracy than conventional TD-based methods over a large range of compression ratios.
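A simplified stand-in for the shuffling-plus-decomposition idea (using a truncated SVD of a matrix reshaping rather than a full tensor decomposition); the function name, matrix shaping, and rank are illustrative.

```python
import numpy as np

def rs_lowrank_compress(kernel, rank, seed=0):
    """Randomly shuffle the kernel entries, then fit a low-rank factorization.

    A simplified stand-in for randomly-shuffled tensor decomposition: the shuffled
    kernel is reshaped into a roughly square matrix and approximated by truncated SVD.
    """
    rng = np.random.default_rng(seed)
    flat = kernel.reshape(-1)
    perm = rng.permutation(flat.size)
    inv_perm = np.argsort(perm)

    rows = int(np.sqrt(flat.size))
    while flat.size % rows:            # pick the largest divisor <= sqrt(size)
        rows -= 1
    M = flat[perm].reshape(rows, -1)

    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    approx = M_hat.reshape(-1)[inv_perm].reshape(kernel.shape)
    stored = rank * (M.shape[0] + M.shape[1])
    return approx, flat.size / stored  # reconstruction and compression ratio

# usage on a hypothetical conv kernel of shape (out_ch, in_ch, kh, kw)
kernel = np.random.randn(64, 32, 3, 3)
approx, ratio = rs_lowrank_compress(kernel, rank=16)
print(ratio, np.linalg.norm(kernel - approx) / np.linalg.norm(kernel))
```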


Exact Recovery of Low-rank Tensor Decomposition under Reshuffling

arXiv.org Machine Learning

Low-rank tensor decomposition is a promising approach for analyzing and understanding real-world data. Many such analyses require correct recovery of the true latent factors, but the conditions for exact recovery are not known for many existing tensor decomposition methods. In this paper, we derive such conditions for a general class of tensor decomposition methods in which each latent tensor component can be reshuffled into a low-rank matrix of arbitrary shape. The reshuffling operation generalizes the traditional unfolding operation and provides the flexibility to recover true latent factors of complex data structures. We prove that exact recovery can be guaranteed by a convex program when a certain incoherence measure is upper bounded. Results on image steganography show that our method achieves state-of-the-art performance. The theoretical analysis in this paper is expected to be useful for deriving similar results for other types of tensor decomposition methods.
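The two primitives of such a convex program might look as follows: a reshuffling operator that rearranges tensor entries into a matrix of arbitrary shape, and singular value thresholding as the nuclear-norm proximal step. The full recovery program alternates steps like these under the data constraints; the sizes and permutation here are hypothetical.

```python
import numpy as np

def reshuffle(tensor, perm, shape):
    """Rearrange tensor entries into a matrix of arbitrary shape via a fixed permutation
    of the entry indices; this generalizes the usual unfolding operation."""
    return tensor.reshape(-1)[perm].reshape(shape)

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# one nuclear-norm proximal step on a reshuffled component
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 8, 10))
perm = rng.permutation(X.size)
M_low = svt(reshuffle(X, perm, (20, 24)), tau=1.0)
print(np.linalg.matrix_rank(M_low))
```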


Tensor Ring Decomposition with Rank Minimization on Latent Space: An Efficient Approach for Tensor Completion

arXiv.org Machine Learning

In tensor completion tasks, traditional low-rank tensor decomposition models suffer from a laborious model selection problem due to their high model sensitivity. Especially for tensor ring (TR) decomposition, the number of possible models grows exponentially with the tensor order, which makes it rather challenging to find the optimal TR decomposition. In this paper, by exploiting the low-rank structure of the TR latent space, we propose a novel tensor completion method that is robust to model selection. In contrast to imposing a low-rank constraint on the data space, we introduce nuclear norm regularization on the latent TR factors, so that the optimization step using singular value decomposition (SVD) can be performed at a much smaller scale. By leveraging an alternating direction method of multipliers (ADMM) scheme, the latent TR factors with optimal rank and the recovered tensor can be obtained simultaneously. Our algorithm effectively alleviates the burden of TR-rank selection and therefore greatly reduces the computational cost. Extensive experiments on synthetic and real-world data demonstrate the superior performance and efficiency of the proposed approach against state-of-the-art algorithms.
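A sketch of the core shrinkage subproblem, assuming nuclear-norm thresholding is applied to an unfolding of each TR core; the particular unfolding, threshold, and core sizes are illustrative, and the surrounding ADMM updates are omitted.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the nuclear-norm proximal step used inside ADMM."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink_tr_cores(cores, tau):
    """Low-rank shrinkage applied to an unfolding of each TR core.

    cores[k] has shape (r_k, n_k, r_{k+1}); regularizing the cores instead of the
    full data tensor keeps every SVD at the (much smaller) core scale.
    """
    shrunk = []
    for G in cores:
        r1, n, r2 = G.shape
        M = svt(G.reshape(r1, n * r2), tau)   # one possible unfolding choice
        shrunk.append(M.reshape(r1, n, r2))
    return shrunk

# usage: cores of a hypothetical rank-4 TR model of a 10 x 12 x 14 tensor
rng = np.random.default_rng(0)
cores = [rng.standard_normal((4, n, 4)) for n in (10, 12, 14)]
cores = shrink_tr_cores(cores, tau=0.5)
```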


Brain-Computer Interface with Corrupted EEG Data: A Tensor Completion Approach

arXiv.org Machine Learning

One of the current issues in brain-computer interfaces (BCIs) is how to deal with noisy electroencephalography (EEG) measurements organized as multidimensional datasets. On the other hand, significant advances have recently been made in multidimensional signal completion algorithms that exploit tensor decomposition models to capture the intricate relationships among entries in a multidimensional signal. We propose to apply tensor completion to EEG data to improve classification performance in a motor imagery BCI system with corrupted measurements. Noisy measurements are treated as unknowns that are inferred from a tensor decomposition model. We evaluate the performance of four recently proposed tensor completion algorithms, plus a simple interpolation strategy, first with random missing entries and then with missing samples constrained to have a specific structure (random missing channels), which is a more realistic assumption in BCI applications. We measured the ability of these algorithms to reconstruct the tensor from observed data, and then tested the classification accuracy of imagined movement in a BCI experiment with missing samples. We show that for random missing entries, all tensor completion algorithms can recover the missing samples, increasing classification performance compared to a simple interpolation approach. For the random missing channels case, we show that tensor completion algorithms help reconstruct missing channels, significantly improving motor imagery classification accuracy, although not to the level achieved with clean data. Tensor completion algorithms are therefore useful in real BCI applications: the proposed strategy could allow motor imagery BCI systems to be used even when the EEG data are heavily affected by missing channels and/or samples, avoiding the need for new acquisitions in the calibration stage.
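As a toy stand-in for the compared completion algorithms, the sketch below imputes structured missing EEG samples (randomly dropped channels per trial) by iterating a low-rank projection of the channel-mode unfolding while keeping observed entries fixed; the tensor sizes, rank, and synthetic data are hypothetical.

```python
import numpy as np

def lowrank_impute(T, mask, rank=8, n_iter=50):
    """Iteratively fill missing entries with a rank-`rank` projection of the
    channel-mode unfolding, resetting observed entries after every pass."""
    X = np.where(mask, T, 0.0)
    for _ in range(n_iter):
        M = X.reshape(X.shape[0], -1)                  # channels x (time * trials)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        M = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, T, M.reshape(X.shape))      # keep observed samples fixed
    return X

# usage: drop two random channels per trial of a synthetic channels x time x trials tensor
rng = np.random.default_rng(0)
eeg = rng.standard_normal((22, 250, 40))
mask = np.ones_like(eeg, dtype=bool)
for trial in range(eeg.shape[2]):
    mask[rng.choice(22, size=2, replace=False), :, trial] = False
recovered = lowrank_impute(eeg, mask)
print(np.abs(recovered - eeg)[~mask].mean())           # error on the imputed entries
```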


A generative adversarial framework for positive-unlabeled classification

arXiv.org Machine Learning

In this work, we consider the task of classifying binary positive-unlabeled (PU) data. Existing discriminative-learning-based PU models attempt to seek an optimal re-weighting strategy for the U data so that a decent decision boundary can be found. In contrast, we provide a totally new paradigm for attacking the binary PU task from the perspective of generative learning, by leveraging the powerful generative adversarial networks (GANs). Our generative positive-unlabeled (GPU) learning model is devised to express the P and N data distributions. It comprises three discriminators and two generators with different roles, producing both positive and negative samples that resemble those drawn from the real training dataset. Even with rather limited labeled P data, our GPU framework is capable of capturing the underlying P and N data distributions with infinite streams of realistic samples. In this way, an optimal classifier can be trained on the generated samples using very deep neural networks (DNNs). Moreover, a useful variant of GPU is also introduced for semi-supervised classification.
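A hedged skeleton of such a two-generator / three-discriminator layout in PyTorch; the role assigned to each discriminator, the MLP sizes, and the class name GenerativePU are assumptions for illustration, and the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

class GenerativePU(nn.Module):
    """Skeleton of a generative PU setup with two generators and three discriminators.

    Illustrative roles: g_pos / g_neg synthesize positive and negative samples;
    d_unlabeled scores generated data against the unlabeled pool, d_positive anchors
    g_pos to the labeled positives, and d_separator keeps the two generators apart.
    """

    def __init__(self, noise_dim=16, data_dim=32):
        super().__init__()
        self.noise_dim = noise_dim
        self.g_pos, self.g_neg = mlp(noise_dim, data_dim), mlp(noise_dim, data_dim)
        self.d_unlabeled = mlp(data_dim, 1)
        self.d_positive = mlp(data_dim, 1)
        self.d_separator = mlp(data_dim, 1)

    def sample(self, n):
        z = torch.randn(n, self.noise_dim)
        return self.g_pos(z), self.g_neg(z)

model = GenerativePU()
x_pos, x_neg = model.sample(8)   # synthetic P and N samples for training a final classifier
print(x_pos.shape, x_neg.shape)
```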


Tensorizing Generative Adversarial Nets

arXiv.org Machine Learning

Generative Adversarial Networks (GANs) and their variants demonstrate state-of-the-art performance in the class of generative models. To capture higher-dimensional distributions, the common learning procedure requires high computational complexity and a large number of parameters. In this paper, we present a new generative adversarial framework that represents each layer as a tensor structure connected by multilinear operations, aiming to reduce the number of model parameters by a large factor while preserving generalization performance. To learn the model, we develop an efficient algorithm by alternating optimization of the mode connections. Experimental results demonstrate that our model can achieve compression rates for model parameters of up to 40 times compared to existing GANs.
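One common way to tensorize a layer, shown as a sketch: a tensor-train style linear layer whose weight is stored as small cores and contracted mode by mode, so parameters scale with the TT ranks instead of in_dim * out_dim. The paper's exact multilinear structure and learning scheme may differ; the mode factorizations and ranks below are illustrative.

```python
import torch
import torch.nn as nn

class TTLinear(nn.Module):
    """Tensor-train style linear layer: the dense weight is replaced by small cores.

    in_dim and out_dim factor as products of `in_modes` and `out_modes`; the input
    is contracted against one core per mode, so the parameter count scales with the
    TT ranks rather than with in_dim * out_dim.
    """

    def __init__(self, in_modes, out_modes, ranks):
        super().__init__()
        self.in_modes = in_modes
        self.cores = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(ranks[k], m, n, ranks[k + 1]))
             for k, (m, n) in enumerate(zip(in_modes, out_modes))]
        )

    def forward(self, x):
        b = x.size(0)
        res = x.reshape(b, 1, self.in_modes[0], -1)        # (batch, rank, mode, rest)
        for k, core in enumerate(self.cores):
            # contract the current rank bond and input mode with the core
            res = torch.einsum('brmt,rmns->bstn', res, core)
            if k + 1 < len(self.cores):
                res = res.reshape(b, core.shape[3], self.in_modes[k + 1], -1)
        return res.reshape(b, -1)                          # (batch, prod(out_modes))

# usage: a 256 -> 128 layer with ~1.6k parameters instead of 32k
layer = TTLinear(in_modes=(4, 8, 8), out_modes=(4, 4, 8), ranks=(1, 6, 6, 1))
x = torch.randn(2, 4 * 8 * 8)
print(layer(x).shape, sum(p.numel() for p in layer.parameters()))
```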