Recommender Systems and Deep Learning in Python

#artificialintelligence

What do I mean by "recommender systems", and why are they useful? Let's look at the top 3 websites on the Internet, according to Alexa: Google, YouTube, and Facebook. Recommender systems form the very foundation of these technologies. They are why Google is the most successful technology company today. I'm sure I'm not the only one who's accidentally spent hours on YouTube when I had more important things to do! Just how do they convince you to do that? And Facebook's newsfeed is so powerful that world governments worry it has too much influence on people!


Non-Negative Matrix Factorization

#artificialintelligence

The purpose of this post is to give a simple explanation of a powerful feature extraction technique, non-negative matrix factorization.
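To make the idea concrete, here is a minimal sketch of NMF using scikit-learn (the library choice and the toy data are illustrative assumptions, not taken from the post). The factorization X ≈ WH constrains all entries of W and H to be non-negative, which is what makes the extracted features additive and interpretable.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy document-term matrix: rows are documents, columns are term counts.
X = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

# Factor X ~= W @ H with all entries non-negative.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)  # per-document weights over 2 latent features
H = model.components_       # per-feature loadings over the 4 terms

print("reconstruction error:", np.linalg.norm(X - W @ H))
```

Because no cancellation between positive and negative entries is allowed, each row of H tends to act like a "part" (e.g., a topic), and each row of W says how much of each part a document contains.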


Logically Synthesized, Hardware-Accelerated, Restricted Boltzmann Machines for Combinatorial Optimization and Integer Factorization

arXiv.org Machine Learning

The Restricted Boltzmann Machine (RBM) is a stochastic neural network capable of solving a variety of difficult tasks such as NP-hard combinatorial optimization problems and integer factorization. The RBM architecture is also very compact, requiring very few weights and biases. This, along with its simple, parallelizable sampling algorithm for finding the ground state of such problems, makes the RBM amenable to hardware acceleration. However, training the RBM on these problems can pose a significant challenge, as the training algorithm tends to fail for large problem sizes and efficient mappings can be hard to find. Here, we propose a method of combining RBMs that avoids the need to train large problems in their full form. We also propose methods for making the RBM more hardware-amenable, allowing the algorithm to be efficiently mapped to an FPGA-based accelerator. Using this accelerator, we demonstrate hardware-accelerated factorization of 16-bit numbers with high accuracy, with a speed improvement of 10000x and a power improvement of 32x.
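The "simple, parallelizable sampling algorithm" referred to here is block Gibbs sampling: given the visible units, every hidden unit can be sampled independently in parallel, and vice versa. Below is a minimal NumPy sketch of that sweep used as a naive ground-state search; the random toy weights stand in for the logically synthesized, problem-encoding weights the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM parameters (random here; the paper synthesizes them from the
# logic of the optimization or factorization problem being solved).
n_vis, n_hid = 16, 8
W = rng.normal(scale=0.5, size=(n_vis, n_hid))
b = rng.normal(scale=0.1, size=n_vis)  # visible biases
c = rng.normal(scale=0.1, size=n_hid)  # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    # All hidden units sample in parallel given the visibles.
    return (rng.random(n_hid) < sigmoid(v @ W + c)).astype(float)

def sample_visible(h):
    # All visible units sample in parallel given the hiddens.
    return (rng.random(n_vis) < sigmoid(W @ h + b)).astype(float)

def energy(v, h):
    return -(v @ W @ h + b @ v + c @ h)

# Repeated block-Gibbs sweeps; keep the lowest-energy (ground) state seen.
v = (rng.random(n_vis) < 0.5).astype(float)
best_e = np.inf
for _ in range(1000):
    h = sample_hidden(v)
    v = sample_visible(h)
    e = energy(v, h)
    if e < best_e:
        best_e, best_v = e, v.copy()
print("lowest energy found:", best_e)
```

The two conditional-sampling steps are embarrassingly parallel across units, which is what makes the sampler a natural fit for an FPGA implementation.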


Bayesian multi-tensor factorization

arXiv.org Machine Learning

We introduce Bayesian multi-tensor factorization, the first Bayesian formulation for the joint factorization of multiple matrices and tensors. The research problem generalizes joint matrix-tensor factorization to arbitrary sets of tensors of any depth, including matrices; it can be interpreted as unsupervised multi-view learning from multiple data tensors, and it can be generalized to relax the usual trilinear tensor factorization assumptions. The result is a factorization of the set of tensors into factors shared by any subset of the tensors and factors private to individual tensors. We demonstrate performance against existing baselines in multiple tensor factorization tasks in structural toxicogenomics and functional neuroimaging.
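To make the shared-versus-private factor structure concrete, here is a toy point-estimate sketch for the simplest case: two matrices sharing their row mode, fit by alternating least squares. This is an illustrative assumption on my part; the paper's model is Bayesian, covers arbitrary sets of tensors, and infers which factors are shared rather than fixing the structure in advance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two views of the same rows: X1 ~= U @ V1.T and X2 ~= U @ V2.T,
# where U is the shared factor and V1, V2 are private to each view.
n, d1, d2, k = 60, 20, 30, 5
U_true = rng.normal(size=(n, k))
X1 = U_true @ rng.normal(size=(d1, k)).T + 0.01 * rng.normal(size=(n, d1))
X2 = U_true @ rng.normal(size=(d2, k)).T + 0.01 * rng.normal(size=(n, d2))

U = rng.normal(size=(n, k))
for _ in range(50):
    # Private factors: least-squares fit of each view given the shared U.
    V1 = np.linalg.lstsq(U, X1, rcond=None)[0].T
    V2 = np.linalg.lstsq(U, X2, rcond=None)[0].T
    # Shared factor: fit against both views stacked along the column mode.
    V = np.vstack([V1, V2])
    U = np.linalg.lstsq(V, np.hstack([X1, X2]).T, rcond=None)[0].T

err = np.linalg.norm(X1 - U @ V1.T) + np.linalg.norm(X2 - U @ V2.T)
print("joint reconstruction error:", err)
```

In the general model, each factor may be shared by any subset of the tensors, and a Bayesian prior takes the place of this hard-coded split.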


Robust Large Scale Non-negative Matrix Factorization using Proximal Point Algorithm

arXiv.org Machine Learning

This paper presents a robust algorithm for non-negative matrix factorization (NMF) aimed at large-scale data for which the separability assumption is satisfied. In particular, we modify the Linear Programming (LP) algorithm of [9] by introducing a reduced set of constraints for exact NMF. In contrast to previous approaches, the proposed algorithm does not require knowledge of the factorization rank (the number of extreme rays [3] or topics [7]). Furthermore, motivated by a similar problem arising in the context of metabolic network analysis [13], we consider an entirely different regime in which the number of extreme rays or topics can be much larger than the dimension of the data vectors. The performance of the algorithm is demonstrated on different synthetic data sets.
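The separability assumption means every column of the data matrix is a non-negative combination of a small set of "anchor" columns (the extreme rays, or topics). The sketch below illustrates that geometry with the classical successive projection algorithm rather than the paper's LP formulation, and, unlike the paper's method, it needs the rank r as input; all names and data are illustrative.

```python
import numpy as np

def spa(X, r):
    """Successive Projection Algorithm for separable NMF: greedily pick
    the column with the largest residual norm, then project it out.
    (Not the paper's LP-based method, which avoids knowing r.)"""
    R = X.astype(float).copy()
    anchors = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        anchors.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)  # deflate: remove the chosen direction
    return anchors

# Toy separable data: the first r columns of X are exactly the anchors,
# and every other column is a convex combination of them.
rng = np.random.default_rng(0)
m, n, r = 30, 100, 4
W = rng.random((m, r))
H = np.hstack([np.eye(r), rng.dirichlet(np.ones(r), size=n - r).T])
X = W @ H
print("recovered anchors:", sorted(spa(X, r)))  # expect [0, 1, 2, 3]
```

Once the anchor columns are identified, the non-negative weights can be recovered column-by-column with a non-negative least-squares solve.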