Expected Frequency Matrices of Elections: Computation, Geometry, and Preference Learning

Neural Information Processing Systems

Computational social choice is a research area at the intersection of social choice (the science of collective decision-making) and computer science, which focuses on the algorithmic analysis of problems related to preference aggregation and elicitation (Brandt et al., 2013).
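The frequency matrix named in the title is a standard object in this line of work: for each candidate and each ranking position, it records the fraction of votes placing that candidate at that position. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def frequency_matrix(votes, num_candidates):
    """votes: list of rankings, each a permutation of range(num_candidates).

    Returns an (num_candidates x num_candidates) matrix whose entry (c, p)
    is the fraction of votes ranking candidate c at position p.
    """
    M = np.zeros((num_candidates, num_candidates))
    for vote in votes:
        for pos, cand in enumerate(vote):
            M[cand, pos] += 1
    return M / len(votes)

# Four votes over three candidates.
votes = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (2, 1, 0)]
F = frequency_matrix(votes, 3)
# Each candidate appears exactly once per vote, so every row and every
# column sums to 1: the frequency matrix is doubly stochastic.
```

The doubly stochastic structure is what gives these matrices their geometric interest.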


Stanford Sleep Bench: Evaluating Polysomnography Pre-training Methods for Sleep Foundation Models

Kjaer, Magnus Ruud, Thapa, Rahul, Ganjoo, Gauri, Moore, Hyatt IV, Jennum, Poul Joergen, Westover, Brandon M., Zou, James, Mignot, Emmanuel, He, Bryan, Brink-Kjaer, Andreas

arXiv.org Artificial Intelligence

Polysomnography (PSG), the gold standard test for sleep analysis, generates vast amounts of multimodal clinical data, presenting an opportunity to leverage self-supervised representation learning (SSRL) for pre-training foundation models to enhance sleep analysis. However, progress in sleep foundation models is hindered by two key limitations: (1) the lack of a shared dataset and benchmark with diverse tasks for training and evaluation, and (2) the absence of a systematic evaluation of SSRL approaches across sleep-related tasks. To address these gaps, we introduce Stanford Sleep Bench, a large-scale PSG dataset comprising 17,467 recordings totaling over 163,000 hours from a major sleep clinic, including 13 clinical disease prediction tasks alongside canonical sleep-related tasks such as sleep staging, apnea diagnosis, and age estimation. We systematically evaluate SSRL pre-training methods on Stanford Sleep Bench, assessing downstream performance across four tasks: sleep staging, apnea diagnosis, age estimation, and disease and mortality prediction. Our results show that multiple pretraining methods achieve comparable performance for sleep staging, apnea diagnosis, and age estimation. However, for mortality and disease prediction, contrastive learning significantly outperforms other approaches while also converging faster during pretraining. To facilitate reproducibility and advance sleep research, we will release Stanford Sleep Bench along with pretrained model weights, training pipelines, and evaluation code.
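The contrastive learning the abstract singles out is typically trained with an InfoNCE-style objective over two augmented views of the same recording. A minimal numpy sketch of that loss, as one common SSRL formulation rather than the paper's exact implementation:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss; z1, z2 are (N, D) embeddings of two views of N samples.

    Matching rows (the diagonal of the similarity matrix) are positives;
    all other pairs in the batch act as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on positives

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce(z, z)                    # identical views: low loss
loss_random = info_nce(z, rng.normal(size=(8, 16)))  # unrelated views: high loss
```

Minimizing this loss pulls embeddings of the same recording together and pushes different recordings apart, which is the property being evaluated downstream.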


Curiosity Meets Cooperation: A Game-Theoretic Approach to Long-Tail Multi-Label Learning

Xiao, Canran, Zhao, Chuangxin, Ke, Zong, Shen, Fei

arXiv.org Artificial Intelligence

In practice the per-label sample counts follow a heavy-tailed distribution: a handful of head labels dominate the data, whereas the vast majority of tail labels appear only sporadically, as shown in Figure 1. This long-tail imbalance (Tarekegn et al., 2021; De Alvis and Seneviratne, 2024) is particularly severe in the multi-label regime because (i) multiple labels co-occur within a single instance, so naïve resampling can destroy cross-label correlations, and (ii) evaluation metrics such as mAP or micro-F1 are disproportionately influenced by head labels, starving tail classes of gradient signal. Consequently, conventional optimizers (Ridnik et al., 2021) that target average loss or accuracy often learn a head-biased decision boundary, yielding high headline scores while silently failing on the tail, an outcome that is unacceptable in safety-critical or comprehensive retrieval scenarios (Barandas et al., 2024).
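One standard (if blunt) way to quantify and counter the head bias described above is to weight each label's loss inversely to its frequency, so tail labels are not drowned out. This sketch illustrates the imbalance problem only; it is not the paper's game-theoretic method, and the names are illustrative:

```python
import numpy as np

def inverse_frequency_weights(Y):
    """Y: (N, L) binary label matrix; returns one loss weight per label.

    Rare (tail) labels receive large weights, frequent (head) labels small
    ones; weights are normalized to average 1 so the overall loss scale
    is unchanged.
    """
    counts = Y.sum(axis=0)
    weights = Y.shape[0] / np.maximum(counts, 1)  # inverse frequency
    return weights / weights.mean()

# Synthetic long-tailed label matrix: 100 samples, 3 labels.
Y = np.zeros((100, 3), dtype=int)
Y[:90, 0] = 1   # head label: 90 positives
Y[:10, 1] = 1   # mid label:  10 positives
Y[:2, 2] = 1    # tail label:  2 positives
w = inverse_frequency_weights(Y)
# w is monotone in rarity: the tail label gets the largest weight.
```

Note that such per-label reweighting ignores exactly the cross-label co-occurrence issue (i) above, which is the motivation for more structured approaches.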


Test-Time Efficient Pretrained Model Portfolios for Time Series Forecasting

Kayaalp, Mert, Turkmen, Caner, Shchur, Oleksandr, Mercado, Pedro, Ansari, Abdul Fatir, Bohlke-Schneider, Michael, Wang, Bernie

arXiv.org Artificial Intelligence

Is bigger always better for time series foundation models? With this question in mind, we explore an alternative to training a single, large monolithic model: building a portfolio of smaller, pretrained forecasting models. By applying ensembling or model selection over these portfolios, we achieve competitive performance on large-scale benchmarks using far fewer parameters. We explore strategies for designing such portfolios and find that collections of specialist models consistently outperform portfolios of independently trained generalists. Remarkably, we demonstrate that post-training a base model is a compute-effective approach for creating sufficiently diverse specialists, and we provide evidence that ensembling and model selection are more compute-efficient than test-time fine-tuning.
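The two test-time strategies named in the abstract are simple to state: model selection picks the single portfolio member with the best validation loss, while ensembling averages the members' forecasts. A minimal sketch with illustrative names (not the paper's API):

```python
import numpy as np

def select_model(val_losses):
    """val_losses: (num_models,) validation losses; return best model index."""
    return int(np.argmin(val_losses))

def ensemble_forecast(forecasts):
    """forecasts: (num_models, horizon) point forecasts; return their mean."""
    return forecasts.mean(axis=0)

# Three portfolio members forecasting a 3-step horizon.
forecasts = np.array([[1.0, 2.0, 3.0],
                      [1.2, 2.2, 3.2],
                      [0.8, 1.8, 2.8]])
val_losses = np.array([0.30, 0.45, 0.25])

best = select_model(val_losses)      # index 2: lowest validation loss
avg = ensemble_forecast(forecasts)   # elementwise mean: [1.0, 2.0, 3.0]
```

Both strategies need only forward passes over the portfolio, which is why they can be cheaper at test time than fine-tuning any single model.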


We would like to thank the reviewers for their valuable feedback, which we will duly consider and integrate in our

Neural Information Processing Systems

In this paper, we demonstrate that "the decision boundaries of a DNN can only exist as long We clarify the main points raised by the reviewers here below. We further shed more light on the relationship between adv. Nevertheless, we never claim that, within the discr. In fact, we agree that the margin associated to different discr. Overall, however, we firmly believe that the invariant dirs.




Spatial-Frequency Aware for Object Detection in RAW Image

Ye, Zhuohua, Zhang, Liming, Han, Hongru

arXiv.org Artificial Intelligence

Direct RAW-based object detection offers great promise by utilizing RAW data (unprocessed sensor data), but faces inherent challenges due to its wide dynamic range and linear response, which tend to suppress crucial object details. In particular, existing enhancement methods are almost all performed in the spatial domain, making it difficult to effectively recover these suppressed details from the skewed pixel distribution of RAW images. To address this limitation, we turn to the frequency domain, where features such as object contours and textures can be naturally separated based on frequency. In this paper, we propose the Space-Frequency Aware RAW Image Object Detection Enhancer (SFAE), a novel framework that synergizes spatial and frequency representations. Our contribution is threefold. The first lies in the "spatialization" of frequency bands. Different from the traditional paradigm of directly manipulating abstract spectra in deep networks, our method inversely transforms individual frequency bands back into tangible spatial maps, thus preserving direct physical intuition. Then the cross-domain fusion attention module is developed to enable deep multimodal interactions between these maps and the original spatial features. Finally, the framework performs adaptive nonlinear adjustments by predicting and applying different gamma parameters for the two domains.
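The "spatialization" idea, splitting the spectrum into bands and inverse-transforming each band back into a spatial map, can be sketched with a plain FFT. The circular low-pass mask and its radius here are illustrative choices, not the paper's design:

```python
import numpy as np

def split_frequency_bands(img, radius=8):
    """img: (H, W) array; returns (low_band, high_band) spatial maps.

    The centered spectrum is partitioned by a circular mask of the given
    radius; each band is inverse-transformed back into the spatial domain.
    """
    H, W = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[:H, :W]
    dist = np.hypot(yy - H // 2, xx - W // 2)
    low_mask = dist <= radius
    low = np.fft.ifft2(np.fft.ifftshift(spec * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(spec * ~low_mask)).real
    return low, high

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
low, high = split_frequency_bands(img)
# The two masks partition the spectrum and the FFT is linear, so the two
# spatial maps sum back exactly to the original image.
```

Because each band lives in the spatial domain again, it can be fused with ordinary spatial features by standard attention or convolution layers, which is the role of the cross-domain fusion module described above.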