Collaborating Authors

 Shekhtman, Eliot


Strategic Usage in a Multi-Learner Setting

arXiv.org Artificial Intelligence

Real-world systems often involve some pool of users choosing between a set of services. With the increase in popularity of online learning algorithms, these services can now self-optimize, leveraging data collected on users to maximize some reward such as service quality. On the flip side, users may strategically choose which services to use in order to pursue their own reward functions, in the process wielding power over which services can see and use their data. Extensive prior research has been conducted on the effects of strategic users in single-service settings, with strategic behavior manifesting in the manipulation of observable features to achieve a desired classification; however, this can often be costly or unattainable for users and fails to capture the full behavior of multi-service dynamic systems. As such, we analyze a setting in which strategic users choose among several available services in order to pursue positive classifications, while services seek to minimize loss functions on their observations. We focus our analysis on realizable settings, and show that naive retraining can still lead to oscillation even if all users are observed at different times; however, if this retraining uses memory of past observations, convergent behavior can be guaranteed for certain loss function classes. We provide results obtained from synthetic and real-world data to empirically validate our theoretical findings.
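To make the retraining dynamics concrete, below is a minimal toy simulation (an illustrative sketch, not the paper's model: the one-dimensional features, mean-threshold refits, and lowest-threshold user preference are all assumptions) contrasting naive retraining on only the latest observations with retraining on a memory of all past observations:

```python
import numpy as np

rng = np.random.default_rng(0)
users = rng.normal(size=200)             # fixed 1-D user features (assumed)
init_thresholds = np.array([-0.5, 0.5])  # initial decision thresholds

def step(thresholds, memory=None):
    # Each user joins a service that classifies them positively
    # (feature >= threshold), preferring the lowest feasible threshold;
    # users with no positive option abstain (label -1).
    assignments = np.full(len(users), -1)
    for i, x in enumerate(users):
        feasible = np.where(x >= thresholds)[0]
        if len(feasible):
            assignments[i] = feasible[np.argmin(thresholds[feasible])]
    new = thresholds.copy()
    for s in range(len(thresholds)):
        seen = users[assignments == s]
        if memory is not None:
            memory[s].extend(seen)        # retrain on all past observations
            seen = np.array(memory[s])
        if len(seen):
            new[s] = seen.mean()          # toy "loss-minimizing" refit
    return new

# Naive retraining: each service chases its currently-visible user pool.
t = init_thresholds.copy()
for _ in range(20):
    t = step(t)
print("naive retraining:", t)

# Retraining with memory: accumulated observations anchor the refits.
t, mem = init_thresholds.copy(), [[], []]
for _ in range(20):
    t = step(t, mem)
print("with memory:", t)
```

Under naive retraining the thresholds keep drifting as users re-sort themselves each round, while the memory variant settles as past observations dominate each refit, mirroring the oscillation-versus-convergence contrast the abstract describes.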


Latent Diffusion for Language Generation

arXiv.org Artificial Intelligence

Diffusion models have achieved great success in modeling continuous data modalities such as images, audio, and video, but have seen limited use in discrete domains such as language. Recent attempts to adapt diffusion to language have presented diffusion as an alternative to existing pretrained language models. We view diffusion and existing language models as complementary. We demonstrate that encoder-decoder language models can be utilized to efficiently learn high-quality language autoencoders. We then show that continuous diffusion models can be learned in the latent space of the language autoencoder, enabling us to sample continuous latent representations that can be decoded into natural language with the pretrained decoder. We validate the effectiveness of our approach for unconditional, class-conditional, and sequence-to-sequence language generation. Across multiple diverse datasets, we demonstrate that our latent language diffusion models are significantly more effective than previous diffusion language models.
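As a concrete sketch of the second stage, the PyTorch snippet below trains a DDPM-style denoiser in a latent space and then draws samples by ancestral sampling. Everything here is an illustrative assumption, not the paper's implementation: random vectors stand in for the autoencoder's latents, and the MLP denoiser, dimensions, and noise schedule are placeholders; in the full pipeline the sampled latents would be decoded into text by the pretrained decoder.

```python
import torch
import torch.nn as nn

d, T = 64, 100                               # latent dim, diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Tiny MLP standing in for the latent denoiser; input is [z_t, t/T].
denoiser = nn.Sequential(nn.Linear(d + 1, 256), nn.SiLU(), nn.Linear(256, d))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def train_step(z0):
    # Standard epsilon-prediction objective on noised latents.
    t = torch.randint(0, T, (z0.shape[0],))
    eps = torch.randn_like(z0)
    ab = alphas_bar[t].unsqueeze(-1)
    zt = ab.sqrt() * z0 + (1 - ab).sqrt() * eps      # forward noising
    t_in = t.float().unsqueeze(-1) / T               # crude timestep embedding
    loss = ((denoiser(torch.cat([zt, t_in], dim=-1)) - eps) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

for _ in range(200):
    train_step(torch.randn(32, d))           # stand-in for encoder latents

# DDPM ancestral sampling in latent space; the resulting z would be
# passed to the pretrained decoder to produce natural language.
with torch.no_grad():
    z = torch.randn(8, d)
    for t in reversed(range(T)):
        t_in = torch.full((8, 1), t / T)
        eps_hat = denoiser(torch.cat([z, t_in], dim=-1))
        ab = alphas_bar[t]
        mean = (z - betas[t] / (1 - ab).sqrt() * eps_hat) / (1 - betas[t]).sqrt()
        z = mean + betas[t].sqrt() * torch.randn_like(z) if t > 0 else mean
```

The design point the abstract makes is visible in the structure: all the discreteness of language is handled by the frozen autoencoder, so the diffusion model itself only ever sees a continuous space where standard Gaussian noising applies.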


Online Missing Value Imputation and Correlation Change Detection for Mixed-type Data via Gaussian Copula

arXiv.org Machine Learning

Most data science algorithms require complete observations, yet many datasets contain missing values. Hence missing value imputation is crucial for real-world data science workflows. For practical applications, imputation algorithms should produce imputations that match the true data distribution, handle mixed data containing ordinal, boolean, and continuous variables, and scale to large datasets. In this work we develop a new online imputation algorithm for mixed data using the Gaussian copula. The online Gaussian copula model meets all of these desiderata: its imputations match the data distribution even for mixed data, and it scales well, achieving up to an order of magnitude speedup over its offline counterpart. The online algorithm can handle streaming or sequential data and can adapt to a changing data distribution. By fitting the copula model to online data, we also provide a new method to detect a change in the correlational structure of multivariate mixed data with missing values. Experimental results on synthetic and real-world data validate the performance of the proposed methods.
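For intuition, here is a simplified batch sketch of Gaussian copula imputation for mixed data. It illustrates the general idea rather than the paper's online algorithm: the empirical-CDF marginal transforms, pairwise-complete correlation estimate, and conditional-mean imputation below are textbook plug-ins, and all names and constants are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def to_latent(col):
    # Map one margin to latent-normal scores via its empirical CDF,
    # so ordinal and continuous variables share a common Gaussian scale.
    obs = np.sort(col[~np.isnan(col)])
    ranks = np.searchsorted(obs, col, side='right') / (len(obs) + 1)
    z = stats.norm.ppf(np.clip(ranks, 1e-6, 1 - 1e-6))
    z[np.isnan(col)] = np.nan
    return z

def impute(X):
    Z = np.column_stack([to_latent(X[:, j]) for j in range(X.shape[1])])
    # Latent correlation from pairwise-complete entries (plug-in estimate).
    Sigma = np.ma.corrcoef(np.ma.masked_invalid(Z), rowvar=False).filled(0)
    np.fill_diagonal(Sigma, 1.0)
    Z_hat = Z.copy()
    for i in range(len(Z)):
        m = np.isnan(Z[i])
        if m.any() and (~m).any():
            # Conditional mean of missing latents given observed ones.
            S_oo = Sigma[np.ix_(~m, ~m)] + 1e-6 * np.eye((~m).sum())
            Z_hat[i, m] = Sigma[np.ix_(m, ~m)] @ np.linalg.solve(S_oo, Z[i, ~m])
    X_hat = X.copy()
    for j in range(X.shape[1]):
        obs = np.sort(X[~np.isnan(X[:, j]), j])
        fill = np.isnan(X[:, j]) & ~np.isnan(Z_hat[:, j])
        # Back-transform through the empirical quantile function, so
        # imputations respect each margin (ordinal values stay ordinal).
        X_hat[fill, j] = np.quantile(obs, stats.norm.cdf(Z_hat[fill, j]))
    return X_hat

# Demo: a continuous and an ordinal column, correlated, ~20% missing.
latent = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=300)
X = np.column_stack([latent[:, 0], np.round(np.clip(latent[:, 1], -2, 2))])
X[rng.random(X.shape) < 0.2] = np.nan
print(impute(X)[:5])
```

An online variant would replace the one-shot correlation estimate with an incremental update as rows stream in, which is also what makes change detection possible: a shift in the running correlation estimate signals a change in the data's correlational structure.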