
Collaborating Authors: Chan, Tak-Shing T.


Backpropagation with N-D Vector-Valued Neurons Using Arbitrary Bilinear Products

arXiv.org Machine Learning

Vector-valued neural learning has recently emerged as a promising direction in deep learning. Traditionally, training data for neural networks (NNs) are formulated as a vector of scalars; however, performance may be suboptimal because associations among adjacent scalars are not modeled. In this paper, we propose a new vector neural architecture called the Arbitrary BIlinear Product Neural Network (ABIPNN), which processes information as vectors in each neuron and defines the feedforward projections using arbitrary bilinear products. Such bilinear products include circular convolution, the seven-dimensional vector product, skew circular convolution, reversed-time circular convolution, and other new products not seen in previous work. As a proof of concept, we apply our proposed network to multispectral image denoising and singing voice separation. Experimental results show that ABIPNN obtains substantial improvements over conventional NNs, suggesting that such associations are learned during training.
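To make the idea concrete, here is a minimal NumPy sketch of a forward pass for one layer of vector-valued neurons, assuming circular convolution as the bilinear product (computed via the FFT). The function names (`circular_convolution`, `abip_layer_forward`), the tanh nonlinearity, and the layer shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def circular_convolution(w, x):
    """Bilinear product: circular convolution of two n-D vectors via the
    FFT (convolution theorem), computed in O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

def abip_layer_forward(W, X, b, product=circular_convolution):
    """Forward pass of one layer of vector-valued neurons.

    W : (out_dim, in_dim, n) weight vectors
    X : (in_dim, n) input vectors
    b : (out_dim, n) bias vectors
    Each scalar multiplication of a conventional layer is replaced by the
    chosen bilinear product between n-dimensional vectors."""
    out_dim, in_dim, n = W.shape
    Y = np.array([sum(product(W[j, i], X[i]) for i in range(in_dim)) + b[j]
                  for j in range(out_dim)])
    return np.tanh(Y)  # assumed componentwise nonlinearity

# Example: a layer mapping four 8-D vector inputs to three 8-D outputs.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 4, 8))
X = rng.standard_normal((4, 8))
b = np.zeros((3, 8))
print(abip_layer_forward(W, X, b).shape)  # (3, 8)
```

Swapping `product` for another bilinear map (e.g. skew circular convolution) changes the association structure the layer can learn without altering the rest of the architecture.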


Informed Group-Sparse Representation for Singing Voice Separation

arXiv.org Machine Learning

Singing voice separation, a fundamental problem in music information retrieval, attempts to separate the vocal and instrumental parts of a music recording. Recent work on singing voice separation has shown that low-rank representation and informed separation approaches both improve separation quality. However, low-rank optimizations are computationally inefficient due to their use of singular value decompositions. In this paper, we therefore propose a new linear-time algorithm called informed group-sparse representation, and use it to separate the vocals from music using pitch annotations as side information. Experimental results on the iKala dataset confirm the efficacy of our approach, suggesting that the music accompaniment follows a group-sparse structure given a pre-trained instrumental dictionary. We also show how our work can be easily extended to accommodate multiple dictionaries using the DSD100 dataset.
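For orientation, the sketch below shows generic group-sparse coding of a mixture spectrogram against a pre-trained dictionary, using proximal gradient descent with group soft-thresholding. This is a standard group-lasso solver, not the paper's linear-time algorithm, and it omits the pitch-informed side information; `group_sparse_code`, the regularization weight, and the iteration count are assumptions for the example.

```python
import numpy as np

def group_soft_threshold(A, groups, tau):
    """Proximity operator of tau * sum_{g,t} ||A[g, t]||_2 (mixed l2,1 norm):
    shrinks each group of atom coefficients toward zero, frame by frame."""
    A = A.copy()
    for g in groups:
        norms = np.maximum(np.linalg.norm(A[g], axis=0, keepdims=True), 1e-12)
        A[g] *= np.maximum(0.0, 1.0 - tau / norms)
    return A

def group_sparse_code(X, D, groups, lam=0.1, n_iter=200):
    """Proximal gradient (ISTA) for
        min_A 0.5 * ||X - D A||_F^2 + lam * sum_{g,t} ||A[g, t]||_2.

    X : (m, t) magnitude spectrogram of the mixture
    D : (m, k) pre-trained instrumental dictionary
    groups : list of index arrays partitioning the k atoms"""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)           # gradient of the data-fit term
        A = group_soft_threshold(A - grad / L, groups, lam / L)
    return A

# Accompaniment estimate: D @ A; vocal estimate: X - D @ A.
```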


Polar $n$-Complex and $n$-Bicomplex Singular Value Decomposition and Principal Component Pursuit

arXiv.org Machine Learning

Informed by recent work on tensor singular value decomposition and circulant algebra matrices, this paper presents a new theoretical bridge that unifies the hypercomplex and tensor-based approaches to singular value decomposition and robust principal component analysis. We begin by extending principal component pursuit to Olariu's polar $n$-complex numbers as well as their bicomplex counterparts. In so doing, we derive the polar $n$-complex and $n$-bicomplex proximity operators for both the $\ell_1$- and trace-norm regularizers, which can be used by proximal optimization methods such as the alternating direction method of multipliers. Experimental results on two sets of audio data show that our algebraically informed formulation outperforms tensor robust principal component analysis. We conclude with the message that an informed definition of the trace norm can bridge the gap between the hypercomplex and tensor-based approaches. Our approach can be seen as a general methodology for generating other principal component pursuit algorithms with proper algebraic structures.
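As background, the sketch below implements standard real-matrix principal component pursuit via ADMM, built from the two proximity operators the abstract generalizes: elementwise soft-thresholding for the $\ell_1$ norm and singular value thresholding for the trace norm. The polar $n$-complex and $n$-bicomplex formulations replace these operators with their algebraically informed counterparts; the default parameter choices and function names here are assumptions for the example, following common practice in the robust PCA literature.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximity operator of tau * ||.||_1: elementwise shrinkage."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Proximity operator of tau * ||.||_* (trace norm):
    singular value thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def pcp_admm(M, lam=None, mu=None, n_iter=100):
    """Principal component pursuit via ADMM:
        min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    L, S, Y = (np.zeros_like(M) for _ in range(3))
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)             # trace-norm prox step
        S = soft_threshold(M - L + Y / mu, lam / mu)  # l1 prox step
        Y = Y + mu * (M - L - S)                      # dual ascent step
    return L, S
```

Replacing `svt` and `soft_threshold` with proximity operators derived for a different algebra yields a new pursuit algorithm with the same ADMM skeleton, which is the sense in which the paper's approach is a general methodology.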