NVIDIA Research: Tensors Are the Future of Deep Learning

#artificialintelligence

This post discusses tensor methods, how they are used at NVIDIA, and how they are central to the next generation of AI algorithms. Tensors, which generalize matrices to more than two dimensions, are everywhere in modern machine learning. From deep neural network features to videos or fMRI data, the structure in these higher-order tensors is often crucial. Deep neural networks typically map between higher-order tensors. In fact, it is the ability of deep convolutional neural networks to preserve and leverage local structure that made the current levels of performance possible, along with large datasets and efficient hardware. Tensor methods let you preserve and leverage that structure further, for individual layers or whole networks.
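
As a concrete illustration of what a tensor method looks like in code, here is a minimal sketch using the open-source TensorLy library (our choice for the example, not something the post prescribes): a third-order tensor is compressed into a rank-4 CP decomposition, the kind of factorization that can be applied to individual layers or whole networks.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# A random 3rd-order tensor standing in for, e.g., a stack of feature maps.
tensor = tl.tensor(np.random.rand(10, 20, 30))

# Rank-4 CP decomposition: three factor matrices plus per-component weights.
weights, factors = parafac(tensor, rank=4)

# Reassemble the low-rank approximation and inspect the factor shapes.
approx = tl.cp_to_tensor((weights, factors))
print([f.shape for f in factors])  # [(10, 4), (20, 4), (30, 4)]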


Fast and Guaranteed Tensor Decomposition via Sketching

Neural Information Processing Systems

Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in statistical learning of latent variable models and in data mining. In this paper, we propose fast and randomized tensor CP decomposition algorithms based on sketching. We build on the idea of count sketches, but introduce many novel ideas which are unique to tensors. We develop novel methods for randomized computation of tensor contractions via FFTs, without explicitly forming the tensors. Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares. We also design novel colliding hashes for symmetric tensors to further save time in computing the sketches. We then combine these sketching ideas with existing whitening and tensor power iterative techniques to obtain the fastest algorithm on both sparse and dense tensors. The quality of approximation under our method does not depend on properties such as sparsity, uniformity of elements, etc. We apply the method for topic modeling and obtain competitive results.
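
For readers unfamiliar with the sketching primitive the abstract refers to, the following is a minimal NumPy illustration of a count sketch of a vector; the function and parameter names are ours, and the paper's actual contributions (FFT-based sketches of tensor contractions and colliding hashes for symmetric tensors) go well beyond this toy version.

import numpy as np

def count_sketch(x, sketch_size, seed=0):
    # Hash every coordinate of x into one of sketch_size buckets with a random sign.
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, sketch_size, size=x.shape[0])  # hash h(i)
    signs = rng.choice([-1.0, 1.0], size=x.shape[0])         # sign s(i)
    sketch = np.zeros(sketch_size)
    np.add.at(sketch, buckets, signs * x)                    # sketch[h(i)] += s(i) * x[i]
    return sketch, buckets, signs

x = np.random.rand(1000)
sketch, h, s = count_sketch(x, sketch_size=64)
# s[i] * sketch[h[i]] is an unbiased (if noisy) estimate of x[i].
print(s[0] * sketch[h[0]], x[0])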


PyTorch Tensor Basics

@machinelearnbot

Now that we know WTF a tensor is, and have seen how Numpy's ndarray can be used to represent them, let's switch gears and see how they are represented in PyTorch. PyTorch has made an impressive dent in the machine learning scene since Facebook open-sourced it in early 2017. It may not have the widespread adoption that TensorFlow has -- which was initially released well over a year prior, enjoys the backing of Google, and had the luxury of establishing itself as the gold standard as a new wave of neural networking tools was being ushered in -- but the attention that PyTorch receives in the research community, especially, is quite real. Much of this attention comes from both its relationship to Torch proper and its dynamic computation graph. As excited as I have recently been about turning my own attention to PyTorch, this is not really a PyTorch tutorial; it's more of an introduction to PyTorch's Tensor class, which is reasonably analogous to Numpy's ndarray.
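
A short sketch of that analogy, assuming torch and numpy are installed: tensors can be built directly in PyTorch, and an ndarray can be round-tripped to a torch.Tensor and back, sharing memory in both directions.

import numpy as np
import torch

a = torch.tensor([[1., 2.], [3., 4.]])  # 2-D tensor, the analogue of a 2-D ndarray
b = torch.zeros(2, 3, 4)                 # 3rd-order tensor of zeros
print(a.shape, b.dim())                  # torch.Size([2, 2]) 3

nd = np.arange(6).reshape(2, 3)
t = torch.from_numpy(nd)                 # wraps the ndarray without copying
back = t.numpy()                         # view back into the same memory
print(t.dtype, back.shape)               # torch.int64 (2, 3)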


Learning from Binary Multiway Data: Probabilistic Tensor Decomposition and its Statistical Optimality

arXiv.org Machine Learning

An important reason for the growing interest in multiway data analysis is the effective representation of multiway data using a tensor structure. One example is the recommender system (Bi et al., 2018), which can be naturally described as a three-way tensor of user × item × context, where each entry indicates a user-item interaction. Another example is the DBLP database (Zhe et al., 2016), which is organized into a three-way tensor of author × word × venue, where each entry indicates the co-occurrence of the triplet. Whereas many real-world multiway datasets have continuous-valued entries, binary tensors, in which all entries are 0/1 indicators, have recently become more common. Examples include click/no-click actions in recommender systems (Sun et al., 2017), multi-relational social networks (Nickel et al., 2011), and brain structural connectivity networks (Wang et al., 2017a).
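
As a concrete illustration of the binary three-way tensors described above, the following sketch builds a user × item × context indicator tensor from a handful of made-up interaction triplets; a probabilistic decomposition would then place, for example, a Bernoulli likelihood with a logit link over a low-rank CP parameterization of this tensor.

import numpy as np

# Hypothetical observed (user, item, context) interactions; presence means "click".
triplets = [(0, 2, 1), (1, 0, 0), (3, 4, 2), (1, 2, 1)]
n_users, n_items, n_contexts = 5, 6, 3

Y = np.zeros((n_users, n_items, n_contexts), dtype=np.int8)  # binary 3rd-order tensor
for u, i, c in triplets:
    Y[u, i, c] = 1  # every entry is a 0/1 indicator of co-occurrence

print(Y.shape, int(Y.sum()))  # (5, 6, 3) 4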