Consistency Models for Scalable and Fast Simulation-Based Inference

Neural Information Processing Systems

Simulation-based inference (SBI) is constantly in search of more expressive and efficient algorithms to accurately infer the parameters of complex simulation models. In line with this goal, we present consistency models for posterior estimation (CMPE), a new conditional sampler for SBI that inherits the advantages of recent unconstrained architectures and overcomes their sampling inefficiency at inference time. CMPE essentially distills a continuous probability flow and enables rapid few-shot inference with an unconstrained architecture that can be flexibly tailored to the structure of the estimation problem. We provide hyperparameters and default architectures that support consistency training over a wide range of different dimensions, including low-dimensional ones which are important in SBI workflows but were previously difficult to tackle even with unconditional consistency models. Our empirical evaluation demonstrates that CMPE not only outperforms current state-of-the-art algorithms on hard low-dimensional benchmarks, but also achieves competitive performance with much faster sampling speed on two realistic estimation problems with high data and/or parameter dimensions.
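As a rough illustration of the few-shot sampling the abstract describes (not the paper's actual implementation), a consistency model maps a noisy draw at any noise level directly back to a clean sample, so posterior draws need only one or a few network evaluations. The sketch below uses a dummy stand-in for the trained consistency network; the function name, noise schedule, and shrinkage form are illustrative assumptions.

```python
import numpy as np

def consistency_fn(x_t, t, cond):
    # Dummy stand-in for a trained consistency network f(x_t, t; cond):
    # maps a noisy parameter draw at noise level t directly to an estimate
    # of the clean posterior draw. Here: simple shrinkage toward cond.
    return cond + (x_t - cond) / (1.0 + t)

def multistep_sample(cond, sigma_max=80.0, mid_sigmas=(2.0,), dim=2, seed=None):
    """Few-step consistency sampling: one jump from pure noise,
    then optional re-noise + denoise refinement steps."""
    rng = np.random.default_rng(seed)
    # Single jump: denoise a draw from the maximal-noise distribution.
    x = consistency_fn(sigma_max * rng.standard_normal(dim), sigma_max, cond)
    for s in mid_sigmas:
        x_s = x + s * rng.standard_normal(dim)  # re-noise to level s
        x = consistency_fn(x_s, s, cond)        # denoise in one jump
    return x

sample = multistep_sample(cond=np.zeros(2), seed=0)
```

Each extra element of `mid_sigmas` trades a little speed for refinement, which is the few-shot knob the abstract contrasts with many-step diffusion samplers.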



AUCSeg: AUC-oriented Pixel-level Long-tail Semantic Segmentation
Boyu Han, Zhiyong Yang

Neural Information Processing Systems

The Area Under the ROC Curve (AUC) is a well-known metric for evaluating instance-level long-tail learning problems. In the past two decades, many AUC optimization methods have been proposed to improve model performance under long-tail distributions. In this paper, we explore AUC optimization methods in the context of pixel-level long-tail semantic segmentation, a much more complicated scenario. This task introduces two major challenges for AUC optimization techniques. On one hand, AUC optimization in a pixel-level task involves complex coupling across loss terms, with structured inner-image and pairwise inter-image dependencies, complicating theoretical analysis. On the other hand, we find that mini-batch estimation of AUC loss in this case requires a larger batch size, resulting in an unaffordable space complexity.
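The pairwise coupling and batch-size issue mentioned above can be made concrete with a minimal sketch (not the paper's method): a standard squared-hinge surrogate for 1 − AUC penalises every (positive, negative) score pair, so the loss touches all P × N pairs in a batch. The function name and margin value are illustrative assumptions.

```python
import numpy as np

def pairwise_auc_loss(scores, labels, margin=1.0):
    """Squared-hinge surrogate for 1 - AUC: penalise every
    (positive, negative) pair not separated by `margin`."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # All P x N pairwise score differences. This quadratic coupling is
    # why mini-batch estimates need large batches (and large memory)
    # once every pixel contributes a score.
    diffs = pos[:, None] - neg[None, :]
    return np.mean(np.maximum(0.0, margin - diffs) ** 2)

labels = np.array([1, 1, 0, 0])
good = pairwise_auc_loss(np.array([3.0, 2.5, 0.0, -1.0]), labels)  # separated
bad = pairwise_auc_loss(np.array([0.0, 0.1, 0.2, 0.3]), labels)    # mixed up
```

At the pixel level, P and N are per-image pixel counts, so the `diffs` matrix alone can dominate memory, which motivates the space-complexity concern in the abstract.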



Supplementary Materials for MEQA: A Benchmark for Multi-hop Event-centric Question Answering with Explanations

Neural Information Processing Systems

We utilize an open and widely used data format, i.e., the JSON format, for the MEQA dataset. A sample from the dataset, accompanied by an explanation of the data format, is shown in Listing 1 (excerpt):

    "context": "Roadside IED kills Russian major general [...]",  # the context of the question
    "question": "Who died before Al-Monitor reported it online?",
    # decomposed sub-questions:
    #   What event contains Al-Monitor is the communicator?
    #   What event is after #1 has a victim?
    #   Who died in the #2?
    # answers: major general, local commander, lieutenant general

The dataset and source code for the MEQA dataset have been released on GitHub: https://github.com/du-nlp-lab/MEQA.


MEQA: A Benchmark for Multi-hop Event-centric Question Answering with Explanations

Neural Information Processing Systems

Existing benchmarks for multi-hop question answering (QA) primarily evaluate models on their ability to reason about entities and the relationships between them. However, there is little insight into how these models perform when the reasoning involves both events and entities.


A Diverse and Multilingual Benchmark for Cross-File Code Completion

Neural Information Processing Systems

Code completion models have made significant progress in recent years, yet current popular evaluation datasets, such as HumanEval and MBPP, predominantly focus on code completion tasks within a single file. This over-simplified setting falls short of representing the real-world software development scenario where repositories span multiple files with numerous cross-file dependencies, and accessing and understanding cross-file context is often required to complete the code correctly.


Entrywise error bounds for low-rank approximations of kernel matrices

Neural Information Processing Systems

In this paper, we derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition (or singular value decomposition). While this approximation is well-known to be optimal with respect to the spectral and Frobenius norm error, little is known about the statistical behaviour of individual entries. Our error bounds fill this gap. A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues, which takes inspiration from the field of Random Matrix Theory.
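The setting above can be sketched in a few lines (an illustrative example, not the paper's analysis): build a kernel matrix, form its rank-r truncated eigendecomposition, and compare the Frobenius-norm error (where the truncation is optimal) with the maximum entrywise error that the paper's bounds address. The Gaussian kernel, bandwidth, and sizes are assumptions for illustration.

```python
import numpy as np

# Gaussian kernel matrix on random 1-D points (illustrative setup).
rng = np.random.default_rng(0)
x = rng.uniform(size=200)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)

# Rank-r approximation from the truncated eigendecomposition.
r = 10
vals, vecs = np.linalg.eigh(K)        # eigenvalues in ascending order
U, lam = vecs[:, -r:], vals[-r:]      # keep the r largest eigenpairs
K_r = U @ np.diag(lam) @ U.T

fro_err = np.linalg.norm(K - K_r)     # truncation is optimal in this norm
entry_err = np.abs(K - K_r).max()     # max entrywise error: the paper's focus
```

The point of the entrywise view is that `fro_err` aggregates over all n² entries, so a small Frobenius error alone says little about any individual entry `(K - K_r)[i, j]`.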



Towards Principled Graph Transformers
Luis Müller, Daniel Kusuma

Neural Information Processing Systems

However, such architectures often fail to deliver solid predictive performance on real-world tasks, limiting their practical impact. In contrast, global attention-based models such as graph transformers demonstrate strong performance in practice.