Robust Bloom Filters for Large MultiLabel Classification Tasks

Moustapha M. Cisse, Nicolas Usunier, Thierry Artières, Patrick Gallinari

Neural Information Processing Systems

This paper presents an approach to multilabel classification (MLC) with a large number of labels. Our approach is a reduction to binary classification in which label sets are represented by low dimensional binary vectors. This representation follows the principle of Bloom filters, a space-efficient data structure originally designed for approximate membership testing. We show that a naive application of Bloom filters in MLC is not robust to individual binary classifiers' errors. We then present an approach that exploits a specific feature of real-world datasets when the number of labels is large: many labels (almost) never appear together. Our approach is provably robust, has sublinear training and inference complexity with respect to the number of labels, and compares favorably to state-of-the-art algorithms on two large scale multilabel datasets.
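The core idea of representing a label set as a Bloom filter can be illustrated with a minimal sketch. This is not the paper's robust construction, only the naive encoding it starts from: each label sets k bits of an m-bit vector via hash functions, and membership is tested by checking that all k bits are set (the parameters m=16, k=2 and the use of MD5 are illustrative choices, not from the paper).

```python
import hashlib


def _bit_positions(label, m, k):
    """Yield the k bit positions for a label (illustrative MD5-based hashes)."""
    for i in range(k):
        digest = hashlib.md5(f"{label}:{i}".encode()).hexdigest()
        yield int(digest, 16) % m


def bloom_encode(labels, m=16, k=2):
    """Encode a label set as an m-bit Bloom filter: OR of each label's k bits."""
    bits = [0] * m
    for label in labels:
        for pos in _bit_positions(label, m, k):
            bits[pos] = 1
    return bits


def bloom_contains(bits, label, m=16, k=2):
    """Approximate membership test: all k bits set => label is (probably) present."""
    return all(bits[pos] for pos in _bit_positions(label, m, k))


label_set = {"cat", "dog"}
bits = bloom_encode(label_set)          # low-dimensional binary representation
assert all(bloom_contains(bits, l) for l in label_set)
```

In the MLC reduction, one binary classifier predicts each of the m bits from the input; the fragility the paper identifies is that a single flipped bit can corrupt the decoded label set, which motivates their robust construction.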


$\partial\mathbb{B}$ nets: learning discrete functions by gradient descent

Wright, Ian

arXiv.org Artificial Intelligence

$\partial\mathbb{B}$ nets are differentiable neural networks that learn discrete boolean-valued functions by gradient descent. $\partial\mathbb{B}$ nets have two semantically equivalent aspects: a differentiable soft-net, with real weights, and a non-differentiable hard-net, with boolean weights. We train the soft-net by backpropagation and then 'harden' the learned weights to yield boolean weights that bind with the hard-net. The result is a learned discrete function. 'Hardening' involves no loss of accuracy, unlike existing approaches to neural network binarization. Preliminary experiments demonstrate that $\partial\mathbb{B}$ nets achieve comparable performance on standard machine learning problems yet are compact (due to 1-bit weights) and interpretable (due to the logical nature of the learnt functions). Neural networks are differentiable functions with weights represented by machine floats. Networks are trained by gradient descent in weight-space, where the direction of descent minimises loss. The gradients are efficiently calculated by the backpropagation algorithm (Rumelhart et al., 1986). This overall approach has led to tremendous advances in machine learning.
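The soft-net/hard-net correspondence can be sketched in a few lines. This is a minimal illustration of the general idea, not the paper's actual parameterization: a product serves as a differentiable surrogate for AND on values in [0, 1], and 'hardening' thresholds each learned soft weight to a boolean.

```python
def soft_and(x, y):
    """Differentiable surrogate for AND: product of activations in [0, 1]."""
    return x * y


def hard_and(x, y):
    """Boolean AND on hardened {False, True} values."""
    return bool(x) and bool(y)


def harden(w, threshold=0.5):
    """Map a learned soft weight in [0, 1] to a boolean hard weight."""
    return w > threshold


# After training, soft weights driven near 0 or 1 harden cleanly,
# so the hard-net computes the same discrete function the soft-net learned.
soft_weights = [0.93, 0.07, 0.88]          # hypothetical learned values
hard_weights = [harden(w) for w in soft_weights]
```

When the soft weights saturate toward 0 or 1 during training, thresholding loses nothing, which is the intuition behind the paper's claim that hardening involves no loss of accuracy.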

