Reviews: Modeling Uncertainty by Learning a Hierarchy of Deep Neural Connections

Neural Information Processing Systems

Here are my comments for the paper: - The B2N, RAI, and GGT abbreviations are never defined in the paper; they have just been cited from previous works (minor). A short background section on these methods could also include their full names. As far as I understand, the proposed method is B2N with B-RAI instead of RAI, which was originally proposed in [25]. This allows the model to sample multiple generative and discriminative structures and, as a result, create an ensemble of networks with possibly different structures and parameters. Perhaps a better way to structure the paper would be to have a background section on B-RAI and B2N, and a separate section on BRAINet in which the distinction from other works and the contribution are clearly stated.



This paper proposes BRAINet, which combines Bayesian structure learning and Bayesian neural networks. In detail, the method assumes a confounder for the input features X and the discriminative network parameters \phi, where this confounder is the generative graph structure over X, and the discriminative network shares the same structure as the generative one. Given observations X and Y, the approach first samples the generative graph structure from its posterior given X, then trains the parameters of the corresponding discriminative network to fit the posterior distribution of \phi given X and Y. Experiments are performed on calibration and OOD tasks, with MC-dropout and deep ensembles as the main baselines. Reviewers include experts in Bayesian structure learning and Bayesian neural networks. They read the author feedback carefully and engaged actively in the post-rebuttal discussion.
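The two-step procedure the meta-review describes (sample a structure, then fit a discriminative network per structure, yielding an ensemble) can be sketched as follows. This is a toy illustration, not the authors' code: `sample_structure` and `train_discriminative` are hypothetical placeholders standing in for B-RAI structure sampling and network training.

```python
import random

def sample_structure(X, rng):
    """Placeholder for sampling a generative graph structure G ~ p(G | X).
    Toy version: a random subset of feature indices plays the role of G."""
    n_features = len(X[0])
    return tuple(sorted(rng.sample(range(n_features), k=max(1, n_features // 2))))

def train_discriminative(structure, X, Y):
    """Placeholder for fitting the discriminative parameters phi given G.
    Toy version: a single scalar parameter, the mean of the labels."""
    return sum(Y) / len(Y)

def fit_ensemble(X, Y, n_members=5, seed=0):
    """Step 1: sample a structure; step 2: train its network; repeat,
    producing an ensemble with possibly different structures/parameters."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(n_members):
        G = sample_structure(X, rng)
        phi = train_discriminative(G, X, Y)
        ensemble.append((G, phi))
    return ensemble

def predict(ensemble, x):
    """Average member predictions; disagreement across members is the
    uncertainty signal used for calibration and OOD detection."""
    preds = [phi for (_, phi) in ensemble]  # toy: each member outputs phi
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var
```

In the real method each member would be a neural network whose connectivity follows the sampled graph; here all members collapse to the same scalar, so the variance is zero by construction.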


Modeling Uncertainty by Learning a Hierarchy of Deep Neural Connections

Neural Information Processing Systems

Modeling uncertainty in deep neural networks, despite recent important advances, is still an open problem. Bayesian neural networks are a powerful solution, where the prior over network weights is a design choice, often a normal distribution or another distribution encouraging sparsity. However, this prior is agnostic to the generative process of the input data, which might lead to unwarranted generalization for out-of-distribution test data. We suggest the presence of a confounder for the relation between the input data and the discriminative function given the target label. We propose an approach for modeling this confounder by sharing neural connectivity patterns between the generative and discriminative networks.
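The idea of sharing connectivity patterns between the generative and discriminative networks can be made concrete with a small sketch. This is an assumed illustration, not the paper's implementation: the parent sets of a generative DAG are reused as a binary connectivity mask for the discriminative network.

```python
# Hypothetical generative DAG over features: child -> list of parent features.
structure = {
    "x2": ["x0"],
    "x3": ["x1", "x2"],
}

def discriminative_mask(structure, features):
    """Reuse the generative parent sets as the allowed connections
    (a sparsity mask) of the discriminative network's first layer.
    Features with no parents get an empty connection set."""
    mask = {f: set() for f in features}
    for child, parents in structure.items():
        for p in parents:
            mask[child].add(p)
    return {f: sorted(s) for f, s in mask.items()}

print(discriminative_mask(structure, ["x0", "x1", "x2", "x3"]))
```

A discriminative network built under this mask would only connect units along edges the generative structure licenses, so structural uncertainty over the DAG induces uncertainty over the discriminative architecture.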