Leblanc, Benjamin
Sample Compression Hypernetworks: From Generalization Bounds to Meta-Learning
Leblanc, Benjamin, Bazinet, Mathieu, D'Amours, Nathaniel, Drouin, Alexandre, Germain, Pascal
Reconstruction functions are pivotal in sample compression theory, a framework for deriving tight generalization bounds. From a small subset of the training set (the compression set) and an optional stream of information (the message), they recover a predictor previously learned from the whole training set. While reconstruction functions are usually fixed, we propose to learn them. To ease optimization and increase the expressiveness of the message, we derive a new sample compression generalization bound for real-valued messages. Building on this theoretical analysis, we present a new hypernetwork architecture that outputs predictors with tight generalization guarantees when trained with an original meta-learning framework. We conclude by reporting promising preliminary experimental results.
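To make the idea concrete, here is a minimal sketch (not the authors' implementation) of a sample compression hypernetwork in Python/PyTorch: a network that maps a small compression set of labeled examples, together with a real-valued message vector, to the parameters of a downstream linear predictor. All names and dimensions (CompressionHypernet, msg_dim, the 64-unit encoder) are illustrative assumptions.

import torch
import torch.nn as nn

class CompressionHypernet(nn.Module):
    """Illustrative sketch: reconstructs a linear predictor from a
    compression set and a real-valued message (hypothetical names)."""
    def __init__(self, input_dim: int, compress_size: int, msg_dim: int):
        super().__init__()
        # Encode the flattened compression set (features + labels) and message.
        enc_in = compress_size * (input_dim + 1) + msg_dim
        self.encoder = nn.Sequential(nn.Linear(enc_in, 64), nn.ReLU())
        # Heads outputting the weights and bias of the reconstructed predictor.
        self.weight_head = nn.Linear(64, input_dim)
        self.bias_head = nn.Linear(64, 1)

    def forward(self, comp_x, comp_y, message, x):
        # comp_x: (compress_size, input_dim), comp_y: (compress_size,)
        # message: (msg_dim,), x: (batch, input_dim)
        z = torch.cat([comp_x.flatten(), comp_y, message])
        h = self.encoder(z)
        w, b = self.weight_head(h), self.bias_head(h)
        return x @ w + b  # predictions of the reconstructed predictor

# Usage: reconstruct a predictor from 3 examples and a 4-dim real-valued message.
hyper = CompressionHypernet(input_dim=5, compress_size=3, msg_dim=4)
comp_x, comp_y = torch.randn(3, 5), torch.randn(3)
preds = hyper(comp_x, comp_y, torch.randn(4), torch.randn(10, 5))

In the meta-learning view suggested by the abstract, the hypernetwork plays the role of the learned reconstruction function, and training it on a distribution of tasks is what the generalization bound for real-valued messages is meant to cover.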
Interpretability in Machine Learning: on the Interplay with Explainability, Predictive Performances and Models
Leblanc, Benjamin, Germain, Pascal
In some areas, such as the medical field, ML-assisted predictions or decisions can drastically impact human lives. For example, breast cancer [131] can be devastating if not diagnosed in time (or at all). The use of black-box predictors in such crucial settings has proven harmful more than once: a classical example is the use of the COMPAS system by the US judicial system to predict criminal recidivism [133]. Other cases where fairness has been jeopardized by black boxes are numerous: job and loan applications biased toward men [40]; mortgage approvals biased toward white applicants [122]; higher credit card limits granted to men [172]; etc. Over time, it became clear that interpretability is crucial for understanding how a predictor behaves and thus for preventing unfortunate outcomes; as Goodman and Flaxman [70] point out: "If we do not know how ML [predictors] work, we cannot check or regulate them to ensure that they do not encode discrimination against minorities [...], we will not be able to learn from instances in which it is mistaken."
Seeking Interpretability and Explainability in Binary Activated Neural Networks
Leblanc, Benjamin, Germain, Pascal
We study the use of binary activated neural networks as interpretable and explainable predictors in the context of regression tasks on tabular data. More specifically, we provide guarantees on their expressiveness and present an approach, based on the efficient computation of SHAP values, for quantifying the relative importance of features, hidden neurons, and even individual weights. As the model's simplicity is instrumental in achieving interpretability, we propose a greedy algorithm for building compact binary activated networks. This approach does not require fixing the network architecture in advance: the network is built one layer at a time, one neuron at a time, yielding predictors that are not needlessly complex for a given task.
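The following is a minimal illustrative sketch, not the paper's algorithm, of a greedy neuron-by-neuron construction of a one-hidden-layer binary activated network for regression: candidate neurons are sampled at random (trained candidates would be used in practice), and a neuron is kept only if it lowers the training loss; construction stops when no candidate helps, keeping the network compact. All names and hyperparameters (greedy_ban, trials, the improvement tolerance) are assumptions.

import numpy as np

def binary_features(X, W, b):
    # Binary activations: sign of affine pre-activations, in {-1, +1}.
    return np.sign(X @ W.T + b)

def greedy_ban(X, y, max_neurons=10, trials=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W, b = np.empty((0, d)), np.empty(0)
    best_loss = np.mean((y - y.mean()) ** 2)  # loss of the constant predictor
    for _ in range(max_neurons):
        improved = False
        for _ in range(trials):  # random candidate neuron (trained in practice)
            w_new, b_new = rng.normal(size=d), rng.normal()
            W_try, b_try = np.vstack([W, w_new]), np.append(b, b_new)
            H = np.hstack([binary_features(X, W_try, b_try), np.ones((n, 1))])
            # Least-squares output layer on top of the binary representations.
            coef, *_ = np.linalg.lstsq(H, y, rcond=None)
            loss = np.mean((H @ coef - y) ** 2)
            if loss < best_loss - 1e-6:
                W, b, best_loss, improved = W_try, b_try, loss, True
                break
        if not improved:
            break  # no candidate neuron helps: stop growing the network
    return W, b, best_loss

rng = np.random.default_rng(1)
X, y = rng.normal(size=(100, 4)), rng.normal(size=100)
W, b, loss = greedy_ban(X, y)

The stopping rule is what ties the sketch to the interpretability goal: growth halts as soon as added capacity stops paying for itself on the task.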
PAC-Bayesian Learning of Aggregated Binary Activated Neural Networks with Probabilities over Representations
Fortier-Dubois, Louis, Letarte, Gaël, Leblanc, Benjamin, Laviolette, François, Germain, Pascal
Considering a probability distribution over parameters is known to be an efficient strategy for learning neural networks with non-differentiable activation functions. We study the expectation of a probabilistic neural network as a predictor in its own right, focusing on the aggregation of binary activated neural networks with normal distributions over real-valued weights. Our work leverages a recent PAC-Bayesian analysis that yields tight generalization bounds and learning procedures for the expected output of such an aggregation, which is given by an analytical expression. While the combinatorial nature of this expression has been circumvented by approximations in previous works, we show that its exact computation remains tractable for deep but narrow neural networks, thanks to a dynamic programming approach. This leads to a peculiar bound-minimization learning algorithm for binary activated neural networks, in which the forward pass propagates probabilities over representations instead of activation values. A stochastic counterpart that scales to wide architectures is also proposed.
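As a rough illustration of propagating probabilities over representations, here is a single-layer sketch under simplifying assumptions (isotropic Gaussian weights, a linear output neuron); it is not the paper's dynamic programming, which is what makes this propagation tractable across multiple layers. It uses the known fact that for w ~ N(mu, I) and a sign activation, P(sign(w.x) = +1) = Phi(mu.x / ||x||), and enumerates the 2^k binary representations of a narrow layer; all function names are hypothetical.

import itertools
import numpy as np
from scipy.stats import norm

def neuron_fire_probs(x, Mu):
    # P(sign(w_i . x) = +1) for w_i ~ N(mu_i, I) equals Phi(mu_i . x / ||x||).
    return norm.cdf(Mu @ x / np.linalg.norm(x))

def layer_distribution(x, Mu):
    # Exact distribution over the 2^k binary representations of one layer
    # (independent neurons), tractable only for narrow layers (small k).
    p = neuron_fire_probs(x, Mu)
    dist = {}
    for h in itertools.product([-1.0, 1.0], repeat=len(p)):
        dist[h] = np.prod(np.where(np.array(h) > 0, p, 1.0 - p))
    return dist

def expected_output(x, Mu_hidden, mu_out):
    # Expected network output: sum over representations h of P(h) * (h . mu_out),
    # with a deterministic linear output neuron for simplicity.
    return sum(q * float(np.array(h) @ mu_out)
               for h, q in layer_distribution(x, Mu_hidden).items())

x = np.array([1.0, -0.5, 2.0])
Mu_hidden = np.random.default_rng(0).normal(size=(3, 3))  # 3 hidden neurons
print(expected_output(x, Mu_hidden, mu_out=np.ones(3)))

The exponential enumeration above is exactly the combinatorial cost that, per the abstract, the dynamic programming approach keeps under control for deep but narrow architectures.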