Neural Information Processing Systems

Consider a news recommendation website that, when presented with a new user, sequentially offers a selection of currently trending articles. Such a system may only have a few opportunities to make recommendations before the user decides to navigate away, leaving little time to correct for misspecified or underspecified prior knowledge.
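The scenario above is a short-horizon multi-armed bandit: each trending article is an arm, and a click is a reward. Purely as illustration (this is not the paper's algorithm), here is a minimal Thompson-sampling recommender in Python; the article click rates and horizon are made up for the example:

```python
import random

def thompson_recommend(click_probs, horizon, rng):
    """Thompson sampling over articles: Beta(1,1) priors, sample a
    click rate per article each round, recommend the argmax arm."""
    n = len(click_probs)
    successes = [0] * n
    failures = [0] * n
    pulls = [0] * n
    for _ in range(horizon):
        # Draw one posterior sample per article and pick the best.
        samples = [rng.betavariate(1 + successes[i], 1 + failures[i])
                   for i in range(n)]
        arm = max(range(n), key=lambda i: samples[i])
        pulls[arm] += 1
        if rng.random() < click_probs[arm]:  # simulated click
            successes[arm] += 1
        else:
            failures[arm] += 1
    return pulls

# Three hypothetical articles with true click rates 0.1, 0.5, 0.9.
pulls = thompson_recommend([0.1, 0.5, 0.9], horizon=300,
                           rng=random.Random(0))
```

With a reasonable horizon the sampler concentrates its recommendations on the highest-click-rate article; with very few rounds, as the abstract notes, a bad prior leaves little room for correction.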






Batches

Neural Information Processing Systems

In this paper, we find an appealing way to synthesize [JO19] and [CLM19] to give the best of both worlds: an algorithm which runs in polynomial time and can exploit structure in the underlying distribution to achieve sublinear sample complexity.



Learning from Convenience Samples: A Case Study on Fine-Tuning LLMs for Survey Non-response in the German Longitudinal Election Study

Holtdirk, Tobias, Assenmacher, Dennis, Bleier, Arnim, Wagner, Claudia

arXiv.org Artificial Intelligence

Survey researchers face two key challenges: the rising costs of probability samples and missing data (e.g., non-response or attrition), which can undermine inference and increase the use of convenience samples. Recent work explores using large language models (LLMs) to simulate respondents via persona-based prompts, often without labeled data. We study a more practical setting where partial survey responses exist: we fine-tune LLMs on available data to impute self-reported vote choice under both random and systematic non-response, using the German Longitudinal Election Study. We compare zero-shot prompting and supervised fine-tuning against tabular classifiers (e.g., CatBoost) and test how different convenience samples (e.g., students) used for fine-tuning affect generalization. Our results show that when data are missing completely at random, fine-tuned LLMs match tabular classifiers but outperform zero-shot approaches. When only biased convenience samples are available, fine-tuning small (3B to 8B) open-source LLMs can recover both individual-level predictions and population-level distributions more accurately than zero-shot approaches, and often better than tabular methods. This suggests fine-tuned LLMs offer a promising strategy for researchers working with non-probability samples or systematic missingness, and may enable new survey designs requiring only easily accessible subpopulations.
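The abstract's two missingness regimes can be made concrete with a small sketch. The age-based non-response rule, the function names, and the mode-imputation baseline below are all hypothetical stand-ins (the study itself uses CatBoost-style classifiers and fine-tuned LLMs):

```python
import random

def simulate_nonresponse(rows, mechanism, rng, rate=0.3):
    """Mask the 'vote' field either completely at random (MCAR) or
    systematically (hypothetical rule: younger respondents are
    likelier to skip the question)."""
    masked = []
    for row in rows:
        row = dict(row)
        if mechanism == "mcar":
            drop = rng.random() < rate
        else:  # systematic: non-response depends on an observed covariate
            drop = rng.random() < (0.6 if row["age"] < 40 else 0.1)
        if drop:
            row["vote"] = None
        masked.append(row)
    return masked

def majority_impute(rows):
    """Trivial baseline: fill missing votes with the observed mode,
    standing in for the classifiers compared in the study."""
    observed = [r["vote"] for r in rows if r["vote"] is not None]
    mode = max(set(observed), key=observed.count)
    return [dict(r, vote=r["vote"] if r["vote"] is not None else mode)
            for r in rows]

# Toy sample: five "A" voters, two "B" voters (made-up data).
rows = [{"age": a, "vote": v} for a, v in
        [(22, "A"), (26, "A"), (29, "A"), (33, "A"),
         (35, "A"), (58, "B"), (63, "B")]]
masked = simulate_nonresponse(rows, "mcar", random.Random(1))
filled = majority_impute(masked)
```

Under MCAR the observed subsample stays representative, so even this mode baseline recovers the population distribution; under the systematic mechanism the observed votes are biased toward older respondents, which is the setting where the abstract reports fine-tuned LLMs helping most.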