Generative Multiple-Instance Learning Models For Quantitative Electromyography

arXiv.org Machine Learning

We present a comprehensive study of the use of generative modeling approaches for Multiple-Instance Learning (MIL) problems. In MIL, a learner receives training instances grouped into bags, with labels provided for the bags only (labels which may not hold for every instance within a bag). Our work was motivated by the task of facilitating the diagnosis of neuromuscular disorders using sets of motor unit potential trains (MUPTs) detected within a muscle, a task which can be cast as a MIL problem. Our approach leads to a state-of-the-art solution to the problem of muscle classification. By introducing and analyzing generative models for MIL in a general framework and examining a variety of model structures and components, our work also serves as a methodological guide to modeling MIL tasks. We evaluate our proposed methods both on MUPT datasets and on the MUSK1 dataset, one of the most widely used benchmarks for MIL.
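
To make the MIL setting concrete, here is a minimal generative sketch, not the paper's model: class-conditional Gaussian instance densities are fit naively from bag-level labels, and a bag is scored under the standard MIL assumption that a bag is positive if at least one of its instances looks positive. The bag construction, the Gaussian instance model, and the max-over-instances aggregation are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical bags: lists of instance vectors; only bag-level labels are known.
rng = np.random.default_rng(0)
neg_bags = [rng.normal(0.0, 1.0, size=(rng.integers(3, 8), 2)) for _ in range(20)]
pos_bags = [np.vstack([rng.normal(0.0, 1.0, size=(rng.integers(2, 6), 2)),
                       rng.normal(3.0, 1.0, size=(1, 2))])  # one "positive" instance per bag
            for _ in range(20)]

# Naive generative baseline (illustrative assumption, not the paper's model):
# fit one Gaussian to all instances of negative bags and one to all instances of positive bags.
neg_inst = np.vstack(neg_bags)
pos_inst = np.vstack(pos_bags)
g_neg = multivariate_normal(neg_inst.mean(0), np.cov(neg_inst.T))
g_pos = multivariate_normal(pos_inst.mean(0), np.cov(pos_inst.T))

def bag_score(bag):
    # Standard MIL assumption: the bag label is driven by its most
    # positive-looking instance (max over per-instance log-likelihood ratios).
    log_ratio = g_pos.logpdf(bag) - g_neg.logpdf(bag)
    return np.max(log_ratio)

print("positive bag score:", bag_score(pos_bags[0]))
print("negative bag score:", bag_score(neg_bags[0]))
```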


Machine Teaching for Bayesian Learners in the Exponential Family

Neural Information Processing Systems

What if there is a teacher who knows the learning goal and wants to design good training data for a machine learner? We propose an optimal teaching framework aimed at learners who employ Bayesian models. Our framework is expressed as an optimization problem over teaching examples that balance the future loss of the learner and the effort of the teacher. This optimization problem is in general hard. In the case where the learner employs conjugate exponential family models, we present an approximate algorithm for finding the optimal teaching set.
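
A toy sketch of the loss-versus-effort trade-off, under assumptions not taken from the paper: a Bayesian learner with a conjugate Beta-Bernoulli model, a teacher whose goal is to move the learner's posterior mean onto a target parameter, a squared-error loss, and an effort cost proportional to the number of teaching examples. The brute-force search is purely illustrative; the paper proposes an approximate algorithm for the general conjugate exponential family case.

```python
import itertools

# Illustrative teaching problem (assumed setup, not the paper's):
# the learner holds a Beta(alpha, beta) prior over a coin bias and updates conjugately.
alpha, beta = 1.0, 1.0      # learner's prior
theta_star = 0.7            # teaching goal: posterior mean close to this value
effort_cost = 0.005         # assumed cost per teaching example

def objective(n_heads, n_tails):
    # Learner's posterior mean after observing the teaching set.
    post_mean = (alpha + n_heads) / (alpha + beta + n_heads + n_tails)
    loss = (post_mean - theta_star) ** 2         # future loss of the learner
    effort = effort_cost * (n_heads + n_tails)   # effort of the teacher
    return loss + effort

# Brute-force search over small teaching sets (stand-in for an approximate algorithm).
best = min(itertools.product(range(0, 30), repeat=2), key=lambda c: objective(*c))
print("best teaching set: %d heads, %d tails (objective %.4f)"
      % (best[0], best[1], objective(*best)))
```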


GOT: An Optimal Transport framework for Graph comparison

arXiv.org Machine Learning

We present a novel framework based on optimal transport for the challenging problem of comparing graphs. Specifically, we exploit the probabilistic distribution of smooth graph signals defined with respect to the graph topology. This allows us to derive an explicit expression for the Wasserstein distance between graph signal distributions in terms of the graph Laplacian matrices. This leads to a structurally meaningful measure for comparing graphs, which is able to take into account the global structure of graphs, while most other measures merely observe local changes independently. Our measure is then used to formulate a new graph alignment problem, whose objective is to estimate the permutation that minimizes the distance between two graphs. We further propose an efficient stochastic algorithm based on Bayesian exploration to accommodate the non-convexity of the graph alignment problem. We finally demonstrate the performance of our novel framework on several tasks, including graph alignment, graph classification, and graph signal prediction, and we show that our method leads to significant improvements over state-of-the-art algorithms.
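
A short sketch of the core quantity, under the assumption (made here for illustration) that smooth signals on a graph are modelled as zero-mean Gaussians whose covariance is the pseudo-inverse of the graph Laplacian; the Wasserstein-2 distance between two such Gaussians then has the standard closed form used below. Node correspondence is taken as given here, i.e., the alignment step of the paper is not reproduced.

```python
import numpy as np
from scipy.linalg import sqrtm

def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def gaussian_w2(cov1, cov2):
    """Squared Wasserstein-2 distance between zero-mean Gaussians N(0, cov1), N(0, cov2)."""
    s1 = sqrtm(cov1)
    cross = sqrtm(s1 @ cov2 @ s1)
    return np.trace(cov1 + cov2 - 2 * np.real(cross))

def graph_distance(adj1, adj2):
    # Illustrative assumption: smooth graph signals ~ N(0, pinv(L)), so comparing
    # graphs reduces to a Wasserstein distance between the two signal distributions.
    cov1 = np.linalg.pinv(laplacian(adj1))
    cov2 = np.linalg.pinv(laplacian(adj2))
    return gaussian_w2(cov1, cov2)

# Tiny example: a path graph versus a triangle on three nodes.
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print("W2^2 distance:", graph_distance(path, triangle))
```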


Deep Generative Learning via Variational Gradient Flow

arXiv.org Machine Learning

Learning a generative model, i.e., the underlying data-generating distribution, from large amounts of data is one of the fundamental tasks in machine learning and statistics [46]. Recent advances in deep generative models have provided novel techniques for unsupervised and semi-supervised learning, with applications ranging from image synthesis [44], semantic image editing [60], and image-to-image translation [61] to low-level image processing [29]. Implicit deep generative models are a powerful and flexible framework for approximating a target distribution by learning deep samplers [38], with generative adversarial networks (GANs) [16] and likelihood-based models, such as variational auto-encoders (VAEs) [23] and flow-based methods [11], as their main representatives. These models focus on learning a deterministic or stochastic nonlinear mapping that transforms low-dimensional latent samples from a simple reference distribution into samples that closely match the target distribution. GANs set up a minimax two-player game between a generator and a discriminator. During training, the generator transforms samples from the simple reference distribution into samples intended to deceive the discriminator, while the discriminator performs a differentiable two-sample test to distinguish generated samples from observed ones. The objective of the vanilla GAN amounts to the Jensen-Shannon (JS) divergence between the learned and target distributions. The vanilla GAN generates sharp image samples but suffers from instability issues [3]. A myriad of extensions to the vanilla GAN have been investigated, both theoretically and empirically, in order to achieve stable training and high-quality sample generation.
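
As a minimal illustration of the two-player objective described above (a generic PyTorch sketch, not code from the paper): the discriminator is trained to separate real from generated samples, and the generator is trained to fool it. With the standard losses below, the optimal discriminator's value function corresponds, up to constants, to the Jensen-Shannon divergence between the data and model distributions.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real_logits, d_fake_logits):
    # Vanilla GAN discriminator: maximize log D(x) + log(1 - D(G(z))),
    # written as minimizing binary cross-entropy on real-vs-fake labels.
    real = F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
    fake = F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake

def generator_loss(d_fake_logits):
    # Non-saturating generator loss: maximize log D(G(z)) instead of
    # minimizing log(1 - D(G(z))), which gives stronger gradients early in training.
    return F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
```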


Ensembles of Generative Adversarial Networks for Disconnected Data

arXiv.org Machine Learning

Most current computer vision datasets are composed of disconnected sets, such as images from different classes. We prove that distributions of this type of data cannot be represented with a continuous generative network without error. They can be represented in two ways: with an ensemble of networks or with a single network with a truncated latent space. We show that ensembles are more desirable than truncated distributions in practice. We construct a regularized optimization problem that establishes the relationship between a single continuous GAN, an ensemble of GANs, conditional GANs, and Gaussian Mixture GANs. This regularization can be computed efficiently, and we show empirically that our framework has a performance sweet spot which can be found with hyperparameter tuning. This ensemble framework achieves better performance than a single continuous GAN or cGAN while using fewer total parameters.
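
A minimal sketch of the sampling side of such an ensemble, with hypothetical generators and mixture weights (the training objective and regularization from the paper are not reproduced): each sample first picks one ensemble member, then draws a latent code for it, so the overall model is a mixture of the continuous distributions the individual generators represent, and disconnected components can be covered by different members.

```python
import torch

def sample_from_ensemble(generators, weights, latent_dim, n_samples):
    """Draw samples from a mixture of GAN generators.

    generators: list of torch modules mapping latent codes to data samples
    weights:    mixture probabilities over ensemble members (illustrative assumption)
    """
    weights = torch.as_tensor(weights, dtype=torch.float32)
    # Choose which generator produces each sample.
    idx = torch.multinomial(weights, n_samples, replacement=True)
    samples = []
    for i in range(n_samples):
        z = torch.randn(1, latent_dim)          # latent code for the chosen member
        samples.append(generators[int(idx[i])](z))
    return torch.cat(samples, dim=0)

# Usage with two hypothetical stand-ins for trained GAN generators.
gens = [torch.nn.Linear(8, 2), torch.nn.Linear(8, 2)]
x = sample_from_ensemble(gens, weights=[0.5, 0.5], latent_dim=8, n_samples=16)
print(x.shape)  # torch.Size([16, 2])
```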