On the Accuracy of Influence Functions for Measuring Group Effects

Neural Information Processing Systems

Influence functions estimate the effect of removing a training point on a model without the need to retrain. They are based on a first-order Taylor approximation that is guaranteed to be accurate for sufficiently small changes to the model, and so are commonly used to study the effect of individual points in large datasets. However, we often want to study the effects of large groups of training points, e.g., to diagnose batch effects or apportion credit between different data sources.
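The first-order approximation can be made concrete with a small sketch: for ridge regression, the retrained model is available in closed form, so the influence estimate for removing a group can be checked directly. The toy setup and all names below are illustrative, not taken from the paper.

```python
import numpy as np

# Hedged sketch of the first-order influence approximation for group removal,
# using ridge regression so the retrained model is available in closed form.
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1e-2
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def fit(Xs, ys):
    # Minimizer of (1/m) * sum_i 0.5*(x_i^T w - y_i)^2 + 0.5*lam*||w||^2.
    m = len(ys)
    return np.linalg.solve(Xs.T @ Xs / m + lam * np.eye(d), Xs.T @ ys / m)

theta = fit(X, y)
H = X.T @ X / n + lam * np.eye(d)       # Hessian of the regularized mean loss
grads = X * (X @ theta - y)[:, None]    # per-example loss gradients at theta

def removal_effect(idx):
    # First-order (influence-function) estimate of the parameter change when
    # the examples in `idx` are removed: delta ≈ H^{-1} sum_{z in idx} g_z / n.
    return np.linalg.solve(H, grads[idx].sum(axis=0)) / n

group = np.arange(5)                    # a small group of training points
approx = theta + removal_effect(group)  # predicted leave-group-out parameters
exact = fit(np.delete(X, group, axis=0), np.delete(y, group))
```

The group estimate is just the sum of per-example influences; the question the abstract raises is precisely when this first-order sum stays accurate as the group grows.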


Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning Zachary Charles

Neural Information Processing Systems

We introduce Dataset Grouper, a library to create large-scale group-structured (e.g., federated) datasets, enabling federated learning simulation at the scale of foundation models. This library facilitates the creation of group-structured versions of existing datasets based on user-specified partitions, and directly leads to a variety of useful heterogeneous datasets that can be plugged into existing software frameworks. Dataset Grouper offers three key advantages. First, it scales to settings where even a single group's dataset is too large to fit in memory. Second, it provides flexibility, both in choosing the base (non-partitioned) dataset and in defining partitions.
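The core idea of user-specified partitioning can be sketched in a few lines: route each example to a group chosen by a key function. This is an illustration only, not Dataset Grouper's actual API; a scalable pipeline would write each group to sharded files rather than hold all groups in memory.

```python
from collections import defaultdict

def partition(examples, group_fn):
    """Group a stream of examples by group_fn(example). Minimal sketch only;
    not Dataset Grouper's API, and not memory-scalable as written."""
    groups = defaultdict(list)
    for example in examples:            # works on any iterable/stream
        groups[group_fn(example)].append(example)
    return dict(groups)

docs = [
    {"author": "a", "text": "first"},
    {"author": "b", "text": "second"},
    {"author": "a", "text": "third"},
]
# e.g., build a federated dataset with one client per author
by_author = partition(docs, lambda ex: ex["author"])
```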


In search of the next generation of multimodal datasets

Neural Information Processing Systems

While these advances use different algorithmic techniques, e.g., contrastive learning, diffusion, or auto-regressive modeling, they all rest on a common foundation: large datasets containing paired image-text examples.



Image Understanding Makes for A Good Tokenizer for Image Generation Luting Wang Yang Zhao

Neural Information Processing Systems

Modern image generation (IG) models have been shown to capture rich semantics valuable for image understanding (IU) tasks. However, the potential of IU models to improve IG performance remains uncharted. We address this issue using a token-based IG framework, which relies on effective tokenizers to map images into token sequences. Currently, pixel reconstruction (e.g., VQGAN) dominates the training objective for tokenizers. In contrast, our approach adopts the feature reconstruction objective, where tokenizers are trained by distilling knowledge from pretrained IU encoders. Comprehensive comparisons indicate that tokenizers with strong IU capabilities achieve superior IG performance across a variety of metrics, datasets, tasks, and proposal networks.
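The feature-reconstruction objective can be sketched minimally: instead of a pixel-space reconstruction term (as in VQGAN-style training), the tokenizer is trained so that features decoded from its tokens match those of a frozen, pretrained IU encoder. The linear maps below are random stand-ins, not the paper's architecture.

```python
import numpy as np

# Stand-ins for a frozen IU encoder (teacher) and the tokenizer's feature
# decoder (student); real models would be deep networks, not linear maps.
rng = np.random.default_rng(0)
d_img, d_feat, d_tok = 16, 8, 4
W_teacher = rng.normal(size=(d_img, d_feat))   # frozen pretrained IU encoder
W_student = rng.normal(size=(d_tok, d_feat))   # tokenizer feature decoder

images = rng.normal(size=(2, d_img))
tokens = rng.normal(size=(2, d_tok))           # stand-in for quantized codes

# Feature-reconstruction (distillation) loss: match decoded features to the
# teacher's features, replacing the usual pixel-reconstruction term.
teacher_features = np.tanh(images @ W_teacher)
student_features = tokens @ W_student
feat_loss = np.mean((student_features - teacher_features) ** 2)
```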


cdf1035c34ec380218a8cc9a43d438f9-AuthorFeedback.pdf

Neural Information Processing Systems

R2 considered our method to require a "discretized proxy." First of all, a different, more challenging optimization problem is studied in our work. The variables in the barycenter problem we consider include not only the individual transport plan from each source to the barycenter, but importantly also the barycenter itself. We would like to point out that there were three accepted papers at NeurIPS last year inspired by Wasserstein barycenters. These are challenging questions that depend on the specific structure of the parameterization and the particular recovery method.
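The point about joint variables can be made concrete with the standard discrete Wasserstein barycenter problem, in which both the transport plans and the barycenter weights are optimized. This is a textbook formulation given for illustration, not necessarily the exact problem studied in the paper:

```latex
\min_{b \in \Delta_m,\; \pi_1, \dots, \pi_K \ge 0}
  \;\sum_{k=1}^{K} \langle C_k, \pi_k \rangle
\quad \text{s.t.} \quad
  \pi_k \mathbf{1} = a_k, \qquad \pi_k^{\top} \mathbf{1} = b, \qquad k = 1, \dots, K
```

Here $a_k$ are the source distributions, $C_k$ the cost matrices, $\pi_k$ the transport plans, and $b$ the barycenter weights; because $b$ appears in every marginal constraint, it couples all $K$ transport problems.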


697200c9d1710c2799720b660abd11bb-Paper-Conference.pdf

Neural Information Processing Systems

Bayesian model evidence gives a clear criterion for such model selection. However, computing the model evidence requires integrating over the likelihood, which is challenging, particularly when the likelihood is non-closed-form and/or expensive to evaluate.
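The integral in question is the marginal likelihood p(D | M) = ∫ p(D | θ) p(θ) dθ. A hedged sketch of why it is tractable only in special cases: for a conjugate Gaussian toy model (illustrative only, not the paper's setting), a naive Monte Carlo average over prior draws can be compared against the closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

y = np.array([0.5, 1.2, 0.8])   # observed data D (toy example)
n = len(y)

# Model M: theta ~ N(0, 1), and each y_i ~ N(theta, 1) independently.
def log_likelihood(thetas):
    sq = (y[None, :] - thetas[:, None]) ** 2
    return -0.5 * sq.sum(axis=1) - 0.5 * n * np.log(2 * np.pi)

# Naive Monte Carlo: average the likelihood over draws from the prior.
prior_draws = rng.normal(size=200_000)
mc_evidence = np.mean(np.exp(log_likelihood(prior_draws)))

# Closed form: marginally y ~ N(0, I + 11^T), a multivariate Gaussian,
# available here only because the model is conjugate.
S = np.eye(n) + np.ones((n, n))
closed_form = np.exp(-0.5 * y @ np.linalg.solve(S, y)) / np.sqrt(
    (2 * np.pi) ** n * np.linalg.det(S)
)
```

When the likelihood has no closed form or is expensive, even this naive estimator becomes impractical, which is the difficulty the abstract highlights.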