SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression

Neural Information Processing Systems

Recently, Sur and Candès [2019] showed that these issues can be corrected by applying a new approximation of the MLE's sampling distribution in this high-dimensional regime. Unfortunately, these corrections are difficult to implement in practice, because they require an estimate of the signal strength, which is a function of the underlying parameters β of the logistic regression.
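The bias that these corrections address is easy to reproduce. The sketch below (not SLOE itself, just an illustration of the phenomenon, with all parameter choices mine) simulates a high-dimensional logistic model with p/n = 0.2 and fits the unregularized MLE by Newton's method; the least-squares slope of the fitted coefficients on the true ones comes out noticeably above 1, which is exactly the inflation Sur and Candès characterize:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 400                      # kappa = p/n = 0.2
gamma = 1.0                           # signal strength: sd of x @ beta
beta = np.full(p, gamma / np.sqrt(p))

X = rng.standard_normal((n, p))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta)))).astype(float)

# Unregularized logistic MLE via Newton's method (IRLS).
b = np.zeros(p)
for _ in range(50):
    mu = 1.0 / (1.0 + np.exp(-(X @ b)))
    grad = X.T @ (y - mu)
    H = (X * (mu * (1.0 - mu))[:, None]).T @ X
    step = np.linalg.solve(H, grad)
    b += step
    if np.max(np.abs(step)) < 1e-8:
        break

# In the proportional regime the MLE overestimates effect sizes:
# the regression-through-origin slope of b on beta exceeds 1.
slope = (b @ beta) / (beta @ beta)
print(round(slope, 2))
```

Classical theory would predict a slope near 1 here; the excess is the systematic inflation whose correction requires the signal strength γ that SLOE estimates more cheaply.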





Reviewer #1, Q1: the claim that the algorithm really manages to align the latent distributions of real and simulated data

Neural Information Processing Systems

Q1: ...the claim that the algorithm really manages to align the latent distributions of real and simulated data... We will revise the inappropriate statements in the final version.
Q2: In the model adaptation phase, are state-action pairs simply sampled randomly from their respective buffers? Do you have results for a single, monolithic model?
Q4: Did you investigate the reasons for the slow learning in the first 500 steps on InvertedPendulum compared to PETS?
Q1: The experiments shown in Figure 2 do not outperform MBPO beyond the confidence bounds.


Noise-Aware Differentially Private Regression via Meta-Learning

Neural Information Processing Systems

Many high-stakes applications require machine learning models that protect user privacy and provide well-calibrated, accurate predictions. While Differential Privacy (DP) is the gold standard for protecting user privacy, standard DP mechanisms typically come at a significant cost to performance. One approach to mitigating this issue is pre-training models on simulated data before DP learning on the private data. In this work we go a step further, using simulated data to train a meta-learning model that combines the Convolutional Conditional Neural Process (ConvCNP) with an improved version of the functional DP mechanism of Hall et al. (2013), yielding the DPConvCNP. The DPConvCNP learns from simulated data how to map private data to a DP predictive model in a single forward pass, and then provides accurate, well-calibrated predictions. We compare the DPConvCNP against a DP Gaussian Process (GP) baseline with carefully tuned hyperparameters. The DPConvCNP outperforms the GP baseline, especially on non-Gaussian data, yet is much faster at test time and requires less tuning.
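The paper's mechanism (after Hall et al., 2013) privatizes functions, which is beyond a short sketch; but the core DP idea of releasing a quantity with noise calibrated to its sensitivity is easy to show with the standard Gaussian mechanism. The sketch below is generic textbook DP, not the DPConvCNP's mechanism, and all names and parameter values are mine:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` with (epsilon, delta)-DP by adding Gaussian noise
    scaled to the query's L2 sensitivity (classic calibration, valid
    for epsilon <= 1)."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1000)   # private data, each point in [0, 1]

# The mean of n points in [0, 1] changes by at most 1/n when one
# point changes, so its L2 sensitivity is 1/n.
private_mean = gaussian_mechanism(x.mean(), sensitivity=1.0 / len(x),
                                  epsilon=1.0, delta=1e-5, rng=rng)
print(abs(private_mean - x.mean()) < 0.05)
```

With n = 1000 the calibrated noise is small, which is why DP estimates degrade gracefully on large datasets; the paper's contribution is making such privatized outputs well-calibrated predictive models rather than scalar summaries.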


Model-based Policy Optimization with Unsupervised Model Adaptation

Neural Information Processing Systems

Model-based reinforcement learning methods learn a dynamics model from real data sampled from the environment and use it to generate simulated data for training an agent. However, a distribution mismatch between simulated and real data can degrade performance. Despite much effort devoted to reducing this mismatch, existing methods fail to address it explicitly. In this paper, we investigate how to bridge the gap between real and simulated data caused by inaccurate model estimation, for better policy optimization. We first derive a lower bound on the expected return, which naturally motivates a bound-maximization algorithm that aligns the simulated and real data distributions. To this end, we propose AMPO, a novel model-based reinforcement learning framework that introduces unsupervised model adaptation to minimize the integral probability metric (IPM) between feature distributions of real and simulated data. Instantiating the framework with the Wasserstein-1 distance yields a practical model-based approach. Empirically, our approach achieves state-of-the-art sample efficiency on a range of continuous control benchmarks.
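AMPO estimates the Wasserstein-1 IPM over learned features with a neural critic; as a toy stand-in for that objective, the sketch below uses the closed-form 1-D Wasserstein-1 distance between equal-size empirical samples (mean absolute difference of sorted values) to show how a distribution shift between "real" and "simulated" features registers in the metric, and how removing the shift drives it down. The setup and variable names are mine, purely for illustration:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two equal-size 1-D empirical
    distributions: mean absolute difference of sorted samples."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

rng = np.random.default_rng(0)
real_feats = rng.normal(0.0, 1.0, 10_000)   # features from real transitions
sim_feats = rng.normal(0.5, 1.0, 10_000)    # model rollouts, shifted by 0.5

d_before = wasserstein_1d(real_feats, sim_feats)
# A model-adaptation step that corrects the shift shrinks the IPM.
d_after = wasserstein_1d(real_feats, sim_feats - 0.5)
print(round(d_before, 2), d_before > d_after)
```

In AMPO the analogous quantity is minimized as a training penalty on the dynamics model's feature extractor, so that policies optimized on simulated rollouts transfer to the real environment.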