
Collaborating Authors

 Di Benedetto, Giuseppe


Double Clipping: Less-Biased Variance Reduction in Off-Policy Evaluation

arXiv.org Artificial Intelligence

Counterfactual off-policy estimators allow us to estimate the performance of a new target recommendation policy from interaction data logged under a different logging policy (for instance, the current production recommender), thereby reducing the need to run slow and costly A/B tests. Many of these estimators are based on the inverse propensity scoring (IPS) principle [2, 6, 7, 12]. Given a stochastic logging policy and some mild assumptions, IPS-based estimators are unbiased, but they often suffer from high variance. This holds even at industrial-scale data sizes, in particular when the logging policy is close to deterministic.
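
To make the IPS principle concrete, the following is a minimal sketch of a generic (optionally clipped) inverse propensity scoring estimator, assuming logged rewards and action propensities under both policies are available. It is not the paper's double-clipping estimator; the function name and toy data are illustrative only.

```python
import numpy as np

def ips_estimate(rewards, target_probs, logging_probs, clip=None):
    """Generic IPS estimate of a target policy's value from logged data.

    rewards[i]       : reward observed for the i-th logged action
    target_probs[i]  : probability the target policy assigns to that action
    logging_probs[i] : probability the logging policy assigned to that action
    clip             : optional cap on the importance weights (reduces variance
                       at the cost of introducing bias)
    """
    weights = target_probs / logging_probs      # importance weights
    if clip is not None:
        weights = np.minimum(weights, clip)     # clipped IPS
    return float(np.mean(weights * rewards))

# Toy data: a near-deterministic logging policy produces very large weights,
# which is exactly the high-variance regime described in the abstract.
rng = np.random.default_rng(0)
logging_probs = rng.uniform(0.01, 0.05, size=10_000)
target_probs = rng.uniform(0.2, 0.8, size=10_000)
rewards = rng.binomial(1, 0.1, size=10_000).astype(float)
print(ips_estimate(rewards, target_probs, logging_probs))
print(ips_estimate(rewards, target_probs, logging_probs, clip=10.0))
```

Clipping caps the contribution of actions that were rare under the logging policy, trading some bias for a large variance reduction; as the title indicates, the paper is concerned with making this kind of variance reduction less biased.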


Non-exchangeable feature allocation models with sublinear growth of the feature sizes

arXiv.org Machine Learning

Feature allocation models are popular in applications such as unsupervised learning and network modeling. In particular, the Indian buffet process is a flexible and simple one-parameter feature allocation model in which the number of features grows unboundedly with the number of objects. The Indian buffet process, like most feature allocation models, satisfies a symmetry property called exchangeability: the distribution is invariant under permutation of the objects. While this property is desirable in some cases, it has strong implications. Importantly, the number of objects sharing a particular feature grows linearly with the number of objects. In this article, we describe a class of non-exchangeable feature allocation models in which the number of objects sharing a given feature grows sublinearly, at a rate that can be controlled by a tuning parameter. We derive the asymptotic properties of the model, and show that it provides a better fit and better predictive performance on various datasets.
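
For context, below is a minimal sketch of the standard one-parameter Indian buffet process mentioned in the abstract (the exchangeable baseline), assuming the usual sequential simulation scheme; it is not the non-exchangeable model proposed in the paper, and the function name is illustrative.

```python
import numpy as np

def indian_buffet_process(n_objects, alpha, rng=None):
    """Sample a binary feature-allocation matrix from the one-parameter IBP.

    Rows are objects, columns are features; entry (i, k) = 1 means object i
    possesses feature k. Object i takes an existing feature k with probability
    m_k / i (m_k = number of previous objects owning that feature) and adds
    Poisson(alpha / i) brand-new features.
    """
    rng = rng or np.random.default_rng()
    counts = []   # m_k: number of objects owning each existing feature
    rows = []
    for i in range(1, n_objects + 1):
        row = [rng.random() < m / i for m in counts]   # revisit existing features
        n_new = rng.poisson(alpha / i)                 # introduce new features
        row += [True] * n_new
        counts = [m + int(z) for m, z in zip(counts, row)] + [1] * n_new
        rows.append(row)
    K = len(counts)
    return np.array([r + [False] * (K - len(r)) for r in rows])

Z = indian_buffet_process(500, alpha=3.0, rng=np.random.default_rng(1))
print(Z.shape)            # (number of objects, number of features)
print(Z.sum(axis=0)[:5])  # how many objects share each of the earliest features
```

Under exchangeability, the sharing counts Z.sum(axis=0) of the earliest features scale linearly with the number of objects; this is the behaviour the proposed non-exchangeable class replaces with sublinear growth.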


Non-exchangeable random partition models for microclustering

arXiv.org Machine Learning

Many popular random partition models, such as the Chinese restaurant process and its two-parameter extension, fall within the class of exchangeable random partitions, and have found wide applicability in model-based clustering, population genetics, ecology, and network analysis. While the exchangeability assumption is sensible in many cases, it has strong implications. In particular, Kingman's representation theorem implies that cluster sizes necessarily grow linearly with the sample size; this feature may be undesirable for some applications, as recently pointed out by Miller et al. (2015). We present here a flexible class of non-exchangeable random partition models that can generate partitions whose cluster sizes grow sublinearly with the sample size, where the growth rate is controlled by one parameter. Along with this result, we provide the asymptotic behaviour of the number of clusters of a given size, and show that the model can exhibit power-law behaviour, controlled by another parameter. The construction is based on completely random measures and a Poisson embedding of the random partition, and inference is performed using a Sequential Monte Carlo algorithm. Additionally, we show how the model can also be used directly to generate sparse multigraphs with power-law degree distributions and degree sequences with sublinear growth. Finally, experiments on real datasets highlight the usefulness of the approach compared to a two-parameter Chinese restaurant process.
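
For context, below is a minimal sketch of the two-parameter Chinese restaurant process (the exchangeable baseline the paper compares against), assuming the usual sequential seating scheme; it does not implement the non-exchangeable construction proposed in the paper, and the function name is illustrative.

```python
import numpy as np

def chinese_restaurant_process(n, alpha, sigma=0.0, rng=None):
    """Sample cluster sizes from the two-parameter CRP with concentration
    alpha and discount sigma. With i customers already seated and K clusters,
    the next customer joins an existing cluster of size s with probability
    (s - sigma) / (i + alpha), and opens a new cluster with probability
    (alpha + sigma * K) / (i + alpha).
    """
    rng = rng or np.random.default_rng()
    sizes = []
    for i in range(n):
        probs = np.array([s - sigma for s in sizes] + [alpha + sigma * len(sizes)])
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(sizes):
            sizes.append(1)       # open a new cluster
        else:
            sizes[k] += 1         # join an existing cluster
    return sizes

sizes = chinese_restaurant_process(10_000, alpha=1.0, sigma=0.5,
                                   rng=np.random.default_rng(2))
print(len(sizes), max(sizes))  # number of clusters, size of the largest cluster
```

In this exchangeable model the largest clusters grow linearly with the sample size n, as implied by Kingman's representation theorem; the paper's non-exchangeable class replaces this with sublinear, controllable growth.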