i-mle


Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions

Neural Information Processing Systems

Combining discrete probability distributions and combinatorial optimization problems with neural network components has numerous applications but poses several challenges. We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components. I-MLE is widely applicable as it only requires the ability to compute the most probable states and does not rely on smooth relaxations. The framework encompasses several approaches such as perturbation-based implicit differentiation and recent methods to differentiate through black-box combinatorial solvers. We introduce a novel class of noise distributions for approximating marginals via perturb-and-MAP. Moreover, we show that I-MLE simplifies to maximum likelihood estimation when used in some recently studied learning settings that involve combinatorial solvers. Experiments on several datasets suggest that I-MLE is competitive with and often outperforms existing approaches which rely on problem-specific relaxations.
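
To make the "most probable states" requirement concrete: under perturb-and-MAP, one draws approximate samples from the discrete distribution by adding Gumbel noise to the parameters and calling a MAP solver. The sketch below is a minimal illustration with a hypothetical top-k (k-subset) MAP oracle, not the authors' implementation:

```python
import numpy as np

def map_topk(theta, k):
    # Hypothetical MAP oracle: for a k-subset distribution with parameters
    # theta, the most probable state is the k-hot vector selecting the
    # k largest entries.
    z = np.zeros_like(theta)
    z[np.argsort(-theta)[:k]] = 1.0
    return z

def perturb_and_map(theta, k, rng):
    # Approximate sampling: perturb the parameters with Gumbel noise,
    # then solve the (now randomized) MAP problem.
    eps = rng.gumbel(size=theta.shape)
    return map_topk(theta + eps, k)

rng = np.random.default_rng(0)
theta = rng.normal(size=10)
print(perturb_and_map(theta, k=3, rng=rng))  # a sampled 3-hot state
```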


A Standard Maximum Likelihood Estimation and Links to I-MLE

Neural Information Processing Systems

In the standard MLE setting [see, e.g., Murphy, 2012, Ch. 9] we are interested in learning the […] These two definitions are, however, essentially equivalent. Eq. (15) is a smooth objective that can be optimized with a (stochastic) gradient descent procedure. This section contains the proofs of the results relative to the perturb-and-MAP section (Section 3.2) and […]. The proposition now follows from arguments made in Papandreou and Yuille [2011]. […] Its moment generating function has the form E[exp(tX)] = Γ(1 − τt). As mentioned in Johnson and Balakrishnan [p. …] […] Parts of the proof are inspired by a post on StackExchange (Xi'an [2016]). Theorem 1 […]
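
The moment generating function quoted above, E[exp(tX)] = Γ(1 − τt), is that of a zero-location Gumbel variable with scale τ (valid for t < 1/τ). A quick numerical sanity check, purely illustrative and not code from the paper:

```python
import numpy as np
from scipy.special import gamma

tau, t = 1.0, 0.4   # requires t < 1/tau
rng = np.random.default_rng(0)
x = rng.gumbel(loc=0.0, scale=tau, size=1_000_000)
print(np.exp(t * x).mean())    # empirical E[exp(t*X)]
print(gamma(1.0 - tau * t))    # closed form: Gamma(0.6) ~= 1.4892
```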




Learning Discrete Directed Acyclic Graphs via Backpropagation

Wren, Andrew J., Minervini, Pasquale, Franceschi, Luca, Zantedeschi, Valentina

arXiv.org Artificial Intelligence

Recently, continuous relaxations have been proposed to learn Directed Acyclic Graphs (DAGs) from data by backpropagation, instead of using combinatorial optimization. However, a number of techniques for fully discrete backpropagation could instead be applied. In this paper, we explore that direction and propose DAG-DB, a framework for learning DAGs by Discrete Backpropagation. Based on the architecture of Implicit Maximum Likelihood Estimation [I-MLE, arXiv:2106.01798], DAG-DB adopts a probabilistic approach to the problem, sampling binary adjacency matrices from an implicit probability distribution. DAG-DB learns a parameter for the distribution from the loss incurred by each sample, performing competitively using either of two fully discrete backpropagation techniques, namely I-MLE and Straight-Through Estimation.
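
As a rough sketch of the Straight-Through Estimation technique named above (not DAG-DB itself; the acyclicity machinery and the exact implicit distribution are omitted), sampling a binary adjacency matrix while letting gradients pass through the underlying probabilities might look like this:

```python
import torch

def sample_adjacency_ste(logits):
    # Forward: a hard binary adjacency matrix sampled from
    # Bernoulli(sigmoid(logits)). Backward: gradients flow as if we had
    # returned the probabilities themselves (the straight-through trick).
    probs = torch.sigmoid(logits)
    hard = torch.bernoulli(probs)          # non-differentiable sample
    return hard + probs - probs.detach()   # hard forward, soft backward

d = 4                                       # toy number of nodes
logits = torch.zeros(d, d, requires_grad=True)
adj = sample_adjacency_ste(logits)
loss = adj.sum()                            # stand-in for a downstream loss
loss.backward()
print(logits.grad)                          # nonzero despite the discrete sample
```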


Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions

Niepert, Mathias, Minervini, Pasquale, Franceschi, Luca

arXiv.org Artificial Intelligence

Integrating discrete probability distributions and combinatorial optimization problems into neural networks has numerous applications but poses several challenges. We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components. I-MLE is widely applicable: it only requires the ability to compute the most probable states and does not rely on smooth relaxations. The framework encompasses several approaches, such as perturbation-based implicit differentiation and recent methods to differentiate through black-box combinatorial solvers. We introduce a novel class of noise distributions for approximating marginals via perturb-and-MAP. Moreover, we show that I-MLE simplifies to maximum likelihood estimation when used in some recently studied learning settings that involve combinatorial solvers. Experiments on several datasets suggest that I-MLE is competitive with and often outperforms existing approaches that rely on problem-specific relaxations.
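
Combining perturb-and-MAP with a perturbation-based target distribution gives the flavor of the I-MLE gradient estimate. The following is a simplified single-sample sketch under toy assumptions (a hypothetical top-k MAP oracle and an illustrative step size lam), not the paper's reference implementation:

```python
import numpy as np

def map_topk(theta, k):
    # Hypothetical MAP oracle: k-hot vector over the k largest entries.
    z = np.zeros_like(theta)
    z[np.argsort(-theta)[:k]] = 1.0
    return z

def imle_grad_estimate(theta, grad_z, k, lam, rng):
    # Single-sample I-MLE-style gradient estimate: compare a perturb-and-MAP
    # sample under theta with one under the target parameters
    # theta' = theta - lam * grad_z, reusing the same Gumbel noise.
    eps = rng.gumbel(size=theta.shape)
    z_hat = map_topk(theta + eps, k)                 # sample from p(z; theta)
    z_tgt = map_topk(theta - lam * grad_z + eps, k)  # sample from the target
    return (z_hat - z_tgt) / lam

rng = np.random.default_rng(0)
theta = rng.normal(size=8)
grad_z = rng.normal(size=8)   # stand-in for dL/dz from the downstream network
print(imle_grad_estimate(theta, grad_z, k=3, lam=10.0, rng=rng))
```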