Twin Neural Network Regression is a Semi-Supervised Regression Algorithm

Wetzel, Sebastian J., Melko, Roger G., Tamblyn, Isaac

arXiv.org Artificial Intelligence

Twin neural network regression (TNNR) is a semi-supervised regression algorithm: it can be trained on unlabelled data points as long as other, labelled anchor data points are present. TNNR is trained to predict differences between the target values of two different data points rather than the targets themselves. By ensembling the predicted differences between the targets of an unseen data point and all training data points, it is possible to obtain a very accurate prediction for the original regression problem. Since any loop of predicted differences should sum to zero, loops can be supplied to the training data even if the data points within them are unlabelled. Semi-supervised training significantly improves TNNR performance, which is already state of the art.
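The ensembling step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `predict_diff` stands in for the trained twin network (here it is faked with the exact target difference plus small noise), and `true_y`, `X_anchor`, and `tnnr_predict` are hypothetical names chosen for the example.

```python
import numpy as np

# Toy ground truth: y = x^2 on scalars (illustrative assumption).
def true_y(x):
    return x ** 2

rng = np.random.default_rng(0)

# Stand-in for the trained twin network F(x1, x2) ~ y(x1) - y(x2).
# A real TNNR would learn this from pairs; we fake it with the exact
# difference plus noise, purely to show the ensembling logic.
def predict_diff(x1, x2):
    return true_y(x1) - true_y(x2) + rng.normal(0.0, 0.01)

# Labelled anchor data points.
X_anchor = np.linspace(-1.0, 1.0, 50)
y_anchor = true_y(X_anchor)

def tnnr_predict(x_new):
    # Each anchor j yields one estimate of y(x_new): F(x_new, x_j) + y_j.
    estimates = [predict_diff(x_new, xj) + yj
                 for xj, yj in zip(X_anchor, y_anchor)]
    # Averaging over all anchors reduces the variance of the estimate.
    return float(np.mean(estimates))

print(tnnr_predict(0.5))  # close to the true value 0.25
```

Note how the loop-consistency constraint from the abstract would apply to `predict_diff`: for any three points, F(x1, x2) + F(x2, x3) + F(x3, x1) should sum to zero, which holds exactly for true differences and can be imposed on unlabelled loops during training.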


Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion

Pan, Chenjian, Ling, Chen, He, Hongjin, Qi, Liqun, Xu, Yanwei

arXiv.org Machine Learning

Tensor completion refers to the task of estimating missing data from an incomplete measurement or observation, a core problem that arises frequently in big data analysis, computer vision, and network engineering. Due to the multidimensional nature of high-order tensors, matrix approaches, e.g., matrix factorization and direct matricization of tensors, are often not ideal for tensor completion and recovery. Exploiting the potential periodicity and inherent correlation properties appearing in real-world tensor data, in this paper we incorporate low-rank and sparse regularization techniques to enhance Tucker decomposition for tensor completion. A series of computational experiments on real-world datasets, including internet traffic data, color images, and face recognition, shows that our model achieves higher recovery accuracy than many existing state-of-the-art matricization and tensorization approaches.
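To make the Tucker-decomposition structure concrete, here is a minimal numpy sketch (not the paper's algorithm): a third-order tensor X is modelled as a small core tensor G contracted with a factor matrix along each mode, X = G ×₁ U1 ×₂ U2 ×₃ U3. The names `G`, `U1`–`U3`, and `tucker_reconstruct` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A Tucker-rank-(2, 2, 2) model of a 6x6x6 tensor.
G = rng.normal(size=(2, 2, 2))   # core tensor
U1 = rng.normal(size=(6, 2))     # mode-1 factor matrix
U2 = rng.normal(size=(6, 2))     # mode-2 factor matrix
U3 = rng.normal(size=(6, 2))     # mode-3 factor matrix

def tucker_reconstruct(G, U1, U2, U3):
    # Contract the core with each factor matrix along its mode:
    # X[i,j,k] = sum_{a,b,c} G[a,b,c] * U1[i,a] * U2[j,b] * U3[k,c]
    return np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)

X = tucker_reconstruct(G, U1, U2, U3)

# The mode-1 unfolding of X has rank at most 2 (the first Tucker rank),
# which is the low-rank structure that completion methods exploit.
X1 = X.reshape(6, -1)
print(np.linalg.matrix_rank(X1))  # 2
```

In a completion setting, one would fit G and the factor matrices to the observed entries only, with low-rank and sparse penalties steering the fit, and then read the missing entries off the reconstruction.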