
Collaborating Authors

Marchand, Mario


A PAC-Bayes approach to the Set Covering Machine

Neural Information Processing Systems

We design a new learning algorithm for the Set Covering Machine from a PAC-Bayes perspective and propose a PAC-Bayes risk bound which is minimized for classifiers achieving a nontrivial margin-sparsity tradeoff.


PAC-Bayes Learning of Conjunctions and Classification of Gene-Expression Data

Neural Information Processing Systems

We propose a "soft greedy" learning algorithm for building small conjunctions of simple threshold functions, called rays, defined on single real-valued attributes. We also propose a PAC-Bayes risk bound which is minimized for classifiers achieving a nontrivial tradeoff between sparsity (the number of rays used) and the magnitude of the separating margin of each ray. Finally, we test the soft greedy algorithm on four DNA micro-array data sets.
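The classifier described in this abstract can be illustrated with a minimal sketch: a "ray" is a threshold function on a single real-valued attribute, and the conjunction outputs 1 only when every ray fires. The attribute indices, thresholds, and directions below are illustrative placeholders, not values from the paper.

```python
# Hedged sketch of a conjunction of rays. A ray is a threshold
# function on one attribute; the conjunction fires only when
# every ray fires. All parameter values here are illustrative.

def ray(attribute_index, threshold, direction):
    """Return a threshold function on a single attribute.
    direction=+1 fires when x[i] >= threshold, -1 when x[i] <= threshold."""
    def fire(x):
        return direction * (x[attribute_index] - threshold) >= 0
    return fire

def conjunction(rays):
    """Classify x as 1 only when all rays fire (a small conjunction)."""
    def classify(x):
        return int(all(r(x) for r in rays))
    return classify

# Example: two rays on a 3-attribute example vector.
clf = conjunction([ray(0, 0.5, +1), ray(2, 1.0, -1)])
print(clf([0.7, 9.9, 0.3]))  # both rays fire -> 1
print(clf([0.2, 9.9, 0.3]))  # first ray fails -> 0
```

Sparsity in the paper's sense corresponds to keeping the list of rays short, while the margin corresponds to how far each attribute value sits from its threshold.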


The Decision List Machine

Neural Information Processing Systems

We introduce a new learning algorithm for decision lists to allow features that are constructed from the data and to allow a tradeoff between accuracy and complexity. We bound its generalization error in terms of the number of errors and the size of the classifier it finds on the training data. We also compare its performance on some natural data sets with the set covering machine and the support vector machine.
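A decision list of the kind this abstract describes can be sketched in a few lines: an ordered list of (feature, label) rules where the first feature that fires decides the label, with a default label for the remaining examples. The features below are simple illustrative thresholds, standing in for the data-constructed features of the paper.

```python
# Hedged sketch of decision-list evaluation. Each rule pairs a
# boolean feature (predicate) with an output label; the first rule
# whose feature fires decides the label. Features here are
# illustrative stand-ins for the data-constructed features.

def decision_list(rules, default):
    """rules: ordered list of (predicate, label); returns a classifier."""
    def classify(x):
        for predicate, label in rules:
            if predicate(x):
                return label
        return default
    return classify

# Example with simple threshold features on single attributes.
dl = decision_list(
    [(lambda x: x[0] > 2.0, 1),
     (lambda x: x[1] < 0.0, 0)],
    default=1,
)
print(dl([3.0, 5.0]))   # first rule fires -> 1
print(dl([1.0, -1.0]))  # second rule fires -> 0
print(dl([1.0, 1.0]))   # no rule fires -> default 1
```

The size of the classifier in the paper's generalization bound corresponds to the number of rules kept in the list.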


Learning Stochastic Perceptrons Under k-Blocking Distributions

Neural Information Processing Systems

Such distributions represent an important step beyond the case where each input variable is statistically independent, since the 2k-blocking family contains all the Markov distributions of order k. By stochastic perceptron we mean a perceptron which, upon presentation of input vector x, outputs 1 with probability f(Σ_i w_i x_i − θ).
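The stochastic perceptron defined in this abstract can be sketched directly: on input x it outputs 1 with probability f(Σ_i w_i x_i − θ). The choice of f as a logistic squashing function below is an assumption for illustration; the paper only requires some squashing function f.

```python
# Hedged sketch of a stochastic perceptron: outputs 1 with
# probability f(sum_i w_i * x_i - theta). Using the logistic
# function for f is an assumption made for this illustration.
import math
import random

def stochastic_perceptron_output(w, x, theta, rng=random):
    """Output 1 with probability f(w . x - theta), f the logistic."""
    activation = sum(wi * xi for wi, xi in zip(w, x)) - theta
    p = 1.0 / (1.0 + math.exp(-activation))  # assumed squashing f
    return 1 if rng.random() < p else 0

# Example: the empirical output frequency approximates f(w . x - theta).
w, x, theta = [1.0, -0.5], [2.0, 1.0], 0.5   # activation = 1.0
rng = random.Random(0)
freq = sum(stochastic_perceptron_output(w, x, theta, rng)
           for _ in range(10_000)) / 10_000
# f(1.0) is about 0.731, so freq should land near that value.
```

Learning such a perceptron amounts to estimating the weights w and threshold θ from the observed stochastic labels, which is where the k-blocking assumption on the input distribution does its work.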