We appreciate the feedback of all reviewers, who describe our approach as "advancing the field of deep Gaussian processes" (R2).

Neural Information Processing Systems

We thank all reviewers for their careful reading and their detailed and constructive comments. We first address the shared reviewer comments and then the individual ones. As suggested by R2, we also compared MF to FC DGPs, leading to similar results (see the new table). Train-test split (R2): We are the first to study the extrapolation behaviour of DGPs. The split is described in S2, and we will move it to the main paper to facilitate comparison to related work.
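The rebuttal mentions a train-test split designed to test extrapolation. A minimal sketch of one common way to build such a split (our own illustration, not the authors' code) is to hold out the largest values of one input feature, so the test inputs lie outside the training range:

```python
import numpy as np

def extrapolation_split(X, y, feature=0, holdout_frac=0.2):
    """Hold out the largest values of one input feature as the test set,
    so the model must extrapolate beyond the training range."""
    order = np.argsort(X[:, feature])
    n_test = int(len(X) * holdout_frac)
    train_idx, test_idx = order[:-n_test], order[-n_test:]
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]

# Toy data: after the split, every test input exceeds the training range.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0])
X_tr, y_tr, X_te, y_te = extrapolation_split(X, y)
```

The function name and the tail-holdout heuristic are illustrative; an interpolation split would instead shuffle the indices before partitioning.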


A  Sample-dependent Baselines in REBAR and RELAX

We start with the REINFORCE estimator with the sample-dependent baseline $b_k$:

$$\hat{g} = \frac{1}{K} \sum_{k=1}^{K} \left( f(z_k) - b_k \right) \nabla_\theta \log p(z_k \mid \theta)$$

Neural Information Processing Systems

H is controlled by the parameter. To form the modified RELAX in Section 6.3, we replace the corresponding term; the results are shown in Figure 5. For this VAE architecture, the per-iteration time of RODEO is 25.2 ms, which is very close to the 23.1 ms of RLOO. We do not observe a significant difference between the two versions of RODEO. Throughout, we call this quantity the "test log-likelihood bound."
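A sample-dependent baseline can be computed leave-one-out style (as in RLOO-type estimators): $b_k$ is the mean of $f$ over the other $K-1$ samples, which keeps the estimator unbiased. A minimal sketch, with illustrative names (`f_vals`, `score_grads`), assuming the function values and score-function gradients have already been computed:

```python
import numpy as np

def reinforce_loo(f_vals, score_grads):
    """REINFORCE gradient estimate with a leave-one-out baseline:
    b_k is the mean of f over the other K-1 samples.

    f_vals: shape (K,), function values f(z_k)
    score_grads: shape (K, D), gradients of log p(z_k | theta)."""
    K = len(f_vals)
    total = f_vals.sum()
    baselines = (total - f_vals) / (K - 1)   # b_k excludes sample k
    weights = f_vals - baselines             # centred weights
    return (weights[:, None] * score_grads).mean(axis=0)

# Sanity check: if f is constant, the centred weights vanish,
# so the gradient estimate is exactly zero.
g = reinforce_loo(np.ones(4), np.random.default_rng(1).normal(size=(4, 3)))
```

Because each $b_k$ is independent of sample $k$, subtracting it does not bias the estimator; it only reduces variance.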



General Table Completion using a Bayesian Nonparametric Model

Isabel Valera, Zoubin Ghahramani

Neural Information Processing Systems

Although heterogeneous databases appear in a broad variety of applications, tools for estimating missing data in such databases are lacking. In this paper, we provide an efficient and robust table completion tool based on a Bayesian nonparametric latent feature model. In particular, we propose a general observation model for the Indian buffet process (IBP) adapted to mixed continuous (real-valued and positive real-valued) and discrete (categorical, ordinal and count) observations. We then propose an inference algorithm that scales linearly with the number of observations. Finally, our experiments on five real databases show that the proposed approach provides more robust and accurate estimates than the standard IBP and Bayesian probabilistic matrix factorization with Gaussian observations.
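One way such mixed-type observation models are often built is to map a shared Gaussian pseudo-observation through a type-specific link function. The mappings below are a hedged illustration of that general idea, not the paper's exact likelihoods:

```python
import numpy as np

def link_mixed(pseudo, kind, thresholds=None):
    """Map a Gaussian pseudo-observation to a mixed-type value.
    Illustrative link functions only, one per attribute type."""
    if kind == "real":          # real-valued: identity
        return pseudo
    if kind == "positive":      # positive real: exponential transform
        return np.exp(pseudo)
    if kind == "count":         # count: floor of the exponential
        return int(np.floor(np.exp(pseudo)))
    if kind == "ordinal":       # ordinal: bin by ordered thresholds
        return int(np.searchsorted(thresholds, pseudo))
    raise ValueError(f"unknown attribute type: {kind}")

# Each column of a heterogeneous table gets the link for its type.
val_pos = link_mixed(-1.0, "positive")
val_ord = link_mixed(0.3, "ordinal", thresholds=[-1.0, 0.0, 1.0])
```

The appeal of this construction is that inference can operate on the homogeneous Gaussian layer while each column keeps its native data type.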


Reviews: Modelling heterogeneous distributions with an Uncountable Mixture of Asymmetric Laplacians

Neural Information Processing Systems

The additional results are very helpful for evaluating the method, although I would have liked to see a plot similar to Figure 3 in Tagasovska and Lopez-Paz [1]. I find the calibration of UMAL predictions on room-price forecasting for BCN quite convincing. These results, along with the calibration on UCI, have resolved my concerns about the calibration of the UMAL method. The test log-likelihoods on UCI are less interesting, but it is good that UMAL performs as expected. For instance, the fact that UMAL outperforms the similar Independent ALD method is nice to see.
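The asymmetric Laplace distribution (ALD) underlying these models has a simple closed-form log-density built from the pinball (check) loss. A minimal sketch of that density, with notation of our own choosing (location `mu`, scale `sigma`, asymmetry `tau`):

```python
import numpy as np

def ald_logpdf(y, mu, sigma, tau):
    """Log-density of the asymmetric Laplace distribution with
    location mu, scale sigma, and asymmetry tau in (0, 1)."""
    u = (y - mu) / sigma
    rho = u * (tau - (u < 0))                  # pinball (check) loss
    return np.log(tau * (1 - tau) / sigma) - rho

# tau = 0.5 recovers a symmetric Laplace density; tau near 1 makes
# values above mu much less likely than values below it.
lp_sym = ald_logpdf(0.0, 0.0, 1.0, 0.5)
lp_lo = ald_logpdf(-1.0, 0.0, 1.0, 0.9)
lp_hi = ald_logpdf(1.0, 0.0, 1.0, 0.9)
```

Maximizing this log-density in `mu` for a fixed `tau` is equivalent to quantile regression at level `tau`, which is why mixtures over `tau` can capture heterogeneous (e.g., skewed or multimodal) conditional distributions.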