Consistent Classification, Firm and Soft

Neural Information Processing Systems

A classifier is called consistent with respect to a given set of class-labeled points if it correctly classifies the set. We consider classifiers defined by unions of local separators and propose algorithms for consistent classifier reduction. The expected complexities of the proposed algorithms are derived along with the expected classifier sizes. In particular, the proposed approach yields a consistent reduction of the nearest neighbor classifier, which performs "firm" classification, assigning each new object to a class regardless of the data structure. The proposed reduction method suggests a notion of "soft" classification, allowing for indecision with respect to objects which are insufficiently or ambiguously supported by the data. The performance of the proposed classifiers in predicting stock behavior is compared to that achieved by the nearest neighbor method.

1 Introduction

Certain classification problems, such as recognizing the digits of a handwritten zip code, require the assignment of each object to a class. Others, involving relatively small amounts of data and high risk, call for indecision until more data become available. Examples in such areas as medical diagnosis, stock trading and radar detection are well known. The training data for the classifier in both cases will correspond to firmly labeled members of the competing classes.
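To make the firm/soft distinction concrete, here is a minimal sketch (not the paper's algorithm) of a nearest-neighbor-style rule that abstains instead of forcing a decision; the names, the neighborhood size k, and the distance threshold max_dist are illustrative assumptions only.

```python
# Illustrative sketch of "soft" classification: a k-nearest-neighbor rule that
# returns None (indecision) when the labeled data give insufficient or
# ambiguous support for a prediction. Thresholds are arbitrary assumptions.
import numpy as np

def soft_nn_predict(X_train, y_train, x, k=3, max_dist=1.0):
    """Return a class label, or None to signal indecision."""
    dists = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(dists)[:k]
    # Insufficient support: even the closest labeled point is too far away.
    if dists[idx[0]] > max_dist:
        return None
    labels = y_train[idx]
    values, counts = np.unique(labels, return_counts=True)
    # Ambiguous support: the k-neighborhood is not unanimous.
    if counts.max() < k:
        return None
    return values[counts.argmax()]
```

A firm classifier would instead always return the majority label of the neighborhood, regardless of distance or disagreement.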


On Generalizing the C-Bound to the Multiclass and Multi-label Settings

arXiv.org Machine Learning

The C-bound, introduced in Lacasse et al., gives a tight upper bound on the risk of a binary majority vote classifier. In this work, we present a first step towards extending this work to more complex outputs, by providing generalizations of the C-bound to the multiclass and multi-label settings.
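For context, the binary C-bound is usually stated as follows; the notation below is assumed here rather than taken from the abstract. Writing $M_Q(x,y) = \mathbf{E}_{h \sim Q}[\, y\, h(x) \,]$ for the margin of the $Q$-weighted majority vote on an example $(x,y)$ with $y \in \{-1,+1\}$, and $\mu_1, \mu_2$ for the first and second moments of $M_Q$ under the data distribution, then provided $\mu_1 > 0$,

$$ R(B_Q) \;\le\; 1 - \frac{\mu_1^{2}}{\mu_2}, $$

where $R(B_Q)$ is the risk of the majority vote classifier $B_Q$. The generalizations in this work extend this kind of moment-based bound beyond the binary case.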


Bayesian Chain Classifiers for Multidimensional Classification

AAAI Conferences

In multidimensional classification, the goal is to assign an instance to a set of different classes. This task is normally addressed either by defining a compound class variable with all the possible combinations of classes (label power-set methods, LPMs) or by building independent classifiers for each class (binary-relevance methods, BRMs). However, LPMs do not scale well and BRMs ignore the dependency relations between classes. We introduce a method for chaining binary Bayesian classifiers that combines the strengths of classifier chains and Bayesian networks for multidimensional classification. The method consists of two phases. In the first phase, a Bayesian network (BN) that represents the dependency relations between the class variables is learned from data. In the second phase, several chain classifiers are built, such that the order of the class variables in each chain is consistent with the class BN. Finally, we combine the results of the different generated orders. Our method considers the dependencies between class variables and takes advantage of the conditional independence relations to build simplified models. We perform experiments with a chain of naive Bayes classifiers on different benchmark multidimensional datasets and show that our approach outperforms other state-of-the-art methods.
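The chaining step can be illustrated with a short sketch. The following is not the authors' implementation; it assumes a topological order of the class variables has already been obtained from a learned class BN, and uses scikit-learn's GaussianNB as the base naive Bayes classifier.

```python
# Minimal sketch of a chain of naive Bayes classifiers whose order follows a
# given topological ordering of the class variables (assumed to come from a
# previously learned class Bayesian network).
import numpy as np
from sklearn.naive_bayes import GaussianNB

class BayesianChainClassifier:
    def __init__(self, class_order):
        self.class_order = class_order  # topological order over class indices
        self.models = {}

    def fit(self, X, Y):
        """X: (n, d) feature matrix; Y: (n, m) binary class-label matrix."""
        augmented = X
        for c in self.class_order:
            self.models[c] = GaussianNB().fit(augmented, Y[:, c])
            # Later classifiers in the chain also see this class variable.
            augmented = np.hstack([augmented, Y[:, [c]]])
        return self

    def predict(self, X):
        preds = np.zeros((X.shape[0], len(self.class_order)), dtype=int)
        augmented = X
        for c in self.class_order:
            preds[:, c] = self.models[c].predict(augmented)
            augmented = np.hstack([augmented, preds[:, [c]]])
        return preds
```

In the paper's scheme, several such chains (one per order consistent with the class BN) would be built and their predictions combined; the sketch shows only a single chain.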



Computational Limitations in Robust Classification and Win-Win Results

arXiv.org Machine Learning

We continue the study of computational limitations in learning robust classifiers, following the recent work of Bubeck, Lee, Price and Razenshteyn. First, we demonstrate classification tasks where computationally efficient robust classifiers do not exist, even when computationally unbounded robust classifiers do. We rely on the hardness of decoding problems with preprocessing on codes and lattices. Second, we show classification tasks where efficient robust classifiers exist, but they are computationally hard to learn. Bubeck et al. showed examples of such tasks in the small-perturbation regime, where the robust classifier can recover from a constant number of perturbed bits. Indeed, as we observe, the question of whether a large-perturbation robust classifier for their task exists is related to important open questions in computational number theory. We show two such classification tasks in the large-perturbation regime: the first relies on the existence of one-way functions, a minimal assumption in cryptography; and the second on the hardness of the learning parity with noise problem. For the second task, not only does a non-robust classifier exist, but also an efficient algorithm that generates fresh new labeled samples given access to polynomially many training examples (termed generation by Kearns et al. (1994)). Third, we show that any such task implies the existence of cryptographic primitives such as one-way functions or even forms of public-key encryption. This leads us to a win-win scenario: either we can quickly learn an efficient robust classifier (assuming one exists), or we can construct new instances of popular and useful cryptographic primitives.