

Uncovering the Hidden Dynamics of Video Self-supervised Learning under Distribution Shifts

Neural Information Processing Systems

Specifically, we pose and answer the following questions: Q1. How do the learned spatial and temporal representations vary based on different VSSL pretraining methodologies? How robust are these representations to different distribution shifts?




From Atomic to Composite: Reinforcement Learning Enables Generalization in Complementary Reasoning

Cheng, Sitao, Yin, Xunjian, Zhou, Ruiwen, Li, Yuxuan, Wang, Xinyi, Pan, Liangming, Wang, William Yang, Zhong, Victor

arXiv.org Artificial Intelligence

Reinforcement Learning (RL) following Supervised Fine-Tuning (SFT) has become the standard paradigm for post-training Large Language Models (LLMs). However, the mechanism by which RL contributes to reasoning capabilities--whether it incentivizes the synthesis of new skills or merely amplifies existing behaviors--remains a subject of intense debate. In this work, we investigate this question through the lens of Complementary Reasoning, a complex task that requires integrating internal parametric knowledge with external contextual information. Using a controlled synthetic dataset of human biographies, we strictly decouple this ability into two atomic skills: Parametric Reasoning (relying on internal knowledge encoded in model parameters) and Contextual Reasoning (depending on novel information provided in the context window). To rigorously assess capability boundaries, we evaluate generalization across three distinct levels of difficulty: I.I.D., Composition, and Zero-shot settings. We find that while SFT is sufficient for in-distribution performance, it struggles with out-of-distribution generalization, particularly in Zero-shot settings where relational combinations are novel. Crucially, we identify the SFT Generalization Paradox: Models supervised solely on the composite task achieve near-perfect in-distribution accuracy (90%) but collapse on out-of-distribution generalization (18%), indicating their reliance on rote memorization of path shortcuts. In contrast, we find that RL acts as a reasoning synthesizer rather than a probability amplifier. However, we uncover a strict atomic prerequisite: RL can only synthesize these complex strategies if the base model has first mastered the independent atomic skills (Parametric and Contextual) via SFT.
These findings challenge the view of RL as a mere amplifier, suggesting that given sufficient atomic foundations, RL can actively synthesize complex reasoning strategies from learned primitives without explicit supervision on such complex strategies. This indicates that decoupled atomic training followed by RL offers a scalable path to generalization for complex reasoning tasks. Code and data will be at https://github.com/sitaocheng/from

The rapid evolution of Large Language Models (LLMs) has been fundamentally driven by advanced post-training strategies, specifically an initial Supervised Fine-Tuning (SFT) stage followed by a Reinforcement Learning (RL) stage (Achiam et al., 2023; Team et al., 2024; Guo et al., 2025). While SFT is effective at establishing behavioral norms and imparting foundational knowledge, it fundamentally relies on maximum likelihood estimation, which tends to favor the memorization of the training distribution.


Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs

Hong, Yunqi, An, Sohyun, Bai, Andrew, Lin, Neil Y. C., Hsieh, Cho-Jui

arXiv.org Artificial Intelligence

Despite Multimodal Large Language Models (MLLMs) showing promising results on general zero-shot image classification tasks, fine-grained image classification remains challenging. It demands precise attention to subtle visual details to distinguish between visually similar subcategories--details that MLLMs may easily overlook without explicit guidance. To address this, we introduce AutoSEP, an iterative self-supervised prompt learning framework designed to enhance MLLM fine-grained classification capabilities in a fully unsupervised manner. Our core idea is to leverage unlabeled data to learn a description prompt that guides MLLMs in identifying crucial discriminative features within an image, and boosts classification accuracy. We developed an automatic self-enhancing prompt learning framework called AutoSEP to iteratively improve the description prompt using unlabeled data, based on an instance-level classification scoring function. AutoSEP only requires black-box access to MLLMs, eliminating the need for any training or fine-tuning. We evaluate our approach on multiple fine-grained classification datasets. It consistently outperforms other unsupervised baselines, demonstrating the effectiveness of our self-supervised optimization framework. Notably, AutoSEP on average improves by 13 percent over standard zero-shot classification and by 5 percent over the best-performing baselines. Code is available at: https://github.com/yq-hong/AutoSEP
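The abstract's core loop (iteratively refining a description prompt with unlabeled data and an instance-level score, using only black-box model access) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: `instance_score` here uses a simple top-1 vs. top-2 confidence margin, and `mllm` and `propose_edits` are hypothetical stand-ins for the black-box model call and the prompt-rewriting step.

```python
def instance_score(mllm, prompt, image, class_names):
    """Score a prompt on one unlabeled image: how confidently the model
    separates its top class from the runner-up (no labels required)."""
    probs = sorted(mllm(prompt, image, class_names), reverse=True)
    return probs[0] - probs[1]  # margin between top-1 and top-2

def autosep_style_loop(mllm, propose_edits, images, class_names,
                       init_prompt, rounds=5):
    """Iteratively keep whichever candidate prompt scores best on the
    unlabeled pool; only black-box access to the MLLM is needed."""
    best_prompt = init_prompt
    best = sum(instance_score(mllm, best_prompt, im, class_names)
               for im in images)
    for _ in range(rounds):
        for cand in propose_edits(best_prompt):  # e.g. LLM-rewritten variants
            score = sum(instance_score(mllm, cand, im, class_names)
                        for im in images)
            if score > best:
                best, best_prompt = score, cand
    return best_prompt
```

Because the score needs no labels, the loop can run on any pool of unlabeled images for the target fine-grained domain.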


MusRec: Zero-Shot Text-to-Music Editing via Rectified Flow and Diffusion Transformers

Boudaghi, Ali, Zare, Hadi

arXiv.org Artificial Intelligence

Music editing has emerged as an important and practical area of artificial intelligence, with applications ranging from video game and film music production to personalizing existing tracks according to user preferences. However, existing models face significant limitations, such as being restricted to editing synthesized music generated by their own models, requiring highly precise prompts, or necessitating task-specific retraining--thus lacking true zero-shot capability. Experimental results demonstrate that our approach outperforms existing methods in preserving musical content, structural consistency, and editing fidelity, establishing a strong foundation for controllable music editing in real-world scenarios. The landscape of audio generation has shifted dramatically in recent years. Text-to-music systems now allow users to compose entire musical pieces from simple textual descriptions, powered by advances in diffusion models and transformer architectures [1]-[11]. While impressive, these systems are still primarily designed for creation from scratch. In contrast, real-world music practice often revolves around editing: refining a performance, altering instrumentation, or adapting an existing recording into a new style. For musicians, producers, and casual creators alike, the ability to reshape existing audio is often more valuable than generating entirely new material. Music editing, however, is fundamentally more difficult than generation. It requires the model to balance two competing goals: applying the requested modification faithfully, and preserving the rich details of the input recording that should remain unchanged. This trade-off is especially challenging when dealing with expressive, polyphonic, or multi-instrumental recordings. Existing research has attempted to address editing through supervised datasets of paired "before" and "after" examples [12]-[14], or through zero-shot latent manipulations in diffusion models [15]-[17]. Yet most methods remain restricted to specific editing tasks, operate mainly on model-generated music rather than arbitrary recordings, and often require very precise prompts to succeed [15], [17].
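The zero-shot editing pattern the title refers to (rectified flow) can be illustrated numerically: integrate the flow ODE backwards from the recording to noise under the source description, then forwards under the target description. This is a toy sketch only; the scalar `velocity` field below is a made-up linear stand-in for the learned diffusion-transformer velocity, and MusRec's actual inversion and conditioning differ.

```python
def velocity(x, t, cond):
    """Fake conditional velocity field. A real rectified flow learns
    v(x_t, t, cond) along straight noise-to-data paths; this toy drift
    simply pulls x toward the condition."""
    return cond - x

def integrate(x, cond, t0, t1, steps=100):
    """Euler integration of dx/dt = v(x, t, cond) from t0 to t1.
    A negative dt (t0 > t1) runs the flow in reverse (inversion)."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + dt * velocity(x, t, cond)
        t += dt
    return x

def edit(x_src, src_cond, tgt_cond):
    """Zero-shot edit: invert data -> noise with the source condition,
    then regenerate noise -> data with the target condition."""
    noise = integrate(x_src, src_cond, t0=1.0, t1=0.0)  # inversion
    return integrate(noise, tgt_cond, t0=0.0, t1=1.0)   # regeneration
```

Because both passes traverse the same (nearly straight) flow trajectories, content shared by the two conditions tends to survive the round trip, which is the intuition behind preserving unedited musical details.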


Zero-Shot Referring Expression Comprehension via Vision-Language True/False Verification

Liu, Jeffrey, Hu, Rongbin

arXiv.org Artificial Intelligence

Referring Expression Comprehension (REC) is usually addressed with task-trained grounding models. We show that a zero-shot workflow, without any REC-specific training, can achieve competitive or superior performance. Our approach reformulates REC as box-wise visual-language verification: given proposals from a COCO-clean generic detector (YOLO-World), a general-purpose VLM independently answers True/False queries for each region. This simple procedure reduces cross-box interference, supports abstention and multiple matches, and requires no fine-tuning. On RefCOCO, RefCOCO+, and RefCOCOg, our method not only surpasses a zero-shot GroundingDINO baseline but also exceeds reported results for GroundingDINO trained on REC and GroundingDINO+CRG. Controlled studies with identical proposals confirm that verification significantly outperforms selection-based prompting, and results hold with open VLMs. Overall, we show that workflow design, rather than task-specific pretraining, drives strong zero-shot REC performance.
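The box-wise verification workflow described above is simple enough to sketch end to end: a generic detector proposes regions, and the VLM answers an independent True/False question per region, which naturally supports abstention (no match) and multiple matches. The helper names below (`vlm_verify`, the question template) are illustrative, not the paper's exact prompts.

```python
def zero_shot_rec(proposals, expression, vlm_verify):
    """Box-wise verification for referring expression comprehension.

    proposals  : list of (box, crop) pairs from a generic detector
                 (e.g. YOLO-World); box is any hashable region descriptor.
    vlm_verify : fn(crop, question) -> bool, called independently per
                 region so boxes cannot interfere with each other.
    Returns every verified box: an empty list means abstention, and
    several boxes may legitimately match the same expression.
    """
    question = f'Does this region show "{expression}"? Answer True or False.'
    return [box for box, crop in proposals if vlm_verify(crop, question)]
```

Querying each region separately is what distinguishes this from selection-based prompting, where the model must pick one box from a single crowded prompt.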


AlignSurvey: A Comprehensive Benchmark for Human Preferences Alignment in Social Surveys

Lin, Chenxi, Yuan, Weikang, Jiang, Zhuoren, Huang, Biao, Zhang, Ruitao, Ge, Jianan, Xu, Yueqian, Yu, Jianxing

arXiv.org Artificial Intelligence

Understanding human attitudes, preferences, and behaviors through social surveys is essential for academic research and policymaking. Yet traditional surveys face persistent challenges, including fixed-question formats, high costs, limited adaptability, and difficulties ensuring cross-cultural equivalence. While recent studies explore large language models (LLMs) to simulate survey responses, most are limited to structured questions, overlook the entire survey process, and risk under-representing marginalized groups due to training data biases. We introduce AlignSurvey, the first benchmark that systematically replicates and evaluates the full social survey pipeline using LLMs. It defines four tasks aligned with key survey stages: social role modeling, semi-structured interview modeling, attitude stance modeling, and survey response modeling. It also provides task-specific evaluation metrics to assess alignment fidelity, consistency, and fairness at both individual and group levels, with a focus on demographic diversity. To support AlignSurvey, we construct a multi-tiered dataset architecture: (i) the Social Foundation Corpus, a cross-national resource with 44K+ interview dialogues and 400K+ structured survey records; and (ii) a suite of Entire-Pipeline Survey Datasets, including the expert-annotated AlignSurvey-Expert (ASE) and two nationally representative surveys for cross-cultural evaluation. We release the SurveyLM family, obtained through two-stage fine-tuning of open-source LLMs, and offer reference models for evaluating domain-specific alignment. All datasets, models, and tools are available at github and huggingface to support transparent and socially responsible research.


Generalisation Bounds of Zero-Shot Economic Forecasting using Time Series Foundation Models

Jetwiriyanon, Jittarin, Susnjak, Teo, Ranathunga, Surangika

arXiv.org Artificial Intelligence

This study investigates zero-shot forecasting capabilities of Time Series Foundation Models (TSFMs) for macroeconomic indicators. We apply TSFMs to forecasting economic indicators under univariate conditions, bypassing the need to train bespoke econometric models on extensive training datasets. Our experiments were conducted on a case study dataset, without additional customisation. We rigorously back-tested three state-of-the-art TSFMs (Chronos, TimeGPT and Moirai) under data-scarce conditions and structural breaks. Our results demonstrate that appropriately engineered TSFMs can internalise rich economic dynamics, accommodate regime shifts, and deliver well-behaved uncertainty estimates out of the box, while matching state-of-the-art multivariate models in this domain. Our findings suggest that, without any fine-tuning, TSFMs can match or exceed classical models during stable economic conditions. However, they are vulnerable to performance degradation during periods of rapid shocks. The findings offer guidance to practitioners on when zero-shot deployments are viable for macroeconomic monitoring and strategic planning.
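The back-testing protocol described above (repeated zero-shot forecasts from expanding histories, scored per origin) can be sketched in a few lines. The `forecast` argument is a hypothetical stand-in for a call to Chronos, TimeGPT, or Moirai; here a naive last-value forecaster serves as the placeholder.

```python
def naive_forecast(history, horizon):
    """Placeholder zero-shot forecaster: repeat the last observed value."""
    return [history[-1]] * horizon

def rolling_backtest(series, horizon, min_history, forecast=naive_forecast):
    """Expanding-window (rolling-origin) backtest.

    For each forecast origin, the model sees only the history up to that
    point - the zero-shot setting, with no fitting step - and is scored by
    mean absolute error over the horizon. Returns the average MAE.
    """
    errors = []
    for origin in range(min_history, len(series) - horizon + 1):
        history = series[:origin]
        actual = series[origin:origin + horizon]
        pred = forecast(history, horizon)
        mae = sum(abs(p - a) for p, a in zip(pred, actual)) / horizon
        errors.append(mae)
    return sum(errors) / len(errors)
```

Running the same loop over windows that straddle a structural break versus stable periods is what exposes the degradation-under-shocks behaviour the abstract reports.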


AutoPDL: Automatic Prompt Optimization for LLM Agents

Spiess, Claudio, Vaziri, Mandana, Mandel, Louis, Hirzel, Martin

arXiv.org Artificial Intelligence

The performance of large language models (LLMs) depends on how they are prompted, with choices spanning both the high-level prompting pattern (e.g., Zero-Shot, CoT, ReAct, ReWOO) and the specific prompt content (instructions and few-shot demonstrations). Manually tuning this combination is tedious, error-prone, and specific to a given LLM and task. Therefore, this paper proposes AutoPDL, an automated approach to discovering good LLM agent configurations. Our approach frames this as a structured AutoML problem over a combinatorial space of agentic and non-agentic prompting patterns and demonstrations, using successive halving to efficiently navigate this space. We introduce a library implementing common prompting patterns using the PDL prompt programming language. AutoPDL solutions are human-readable, editable, and executable PDL programs that use this library. This approach also enables source-to-source optimization, allowing human-in-the-loop refinement and reuse. Evaluations across three tasks and seven LLMs (ranging from 3B to 70B parameters) show consistent accuracy gains ($9.21\pm15.46$ percentage points), up to 67.5pp, and reveal that selected prompting strategies vary across models and tasks.
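The successive-halving search over prompt configurations mentioned above can be sketched compactly: evaluate every candidate on a small budget of task instances, discard the worst fraction, and repeat with a larger budget for the survivors. This is a generic illustration of the technique, not the actual AutoPDL API; `evaluate` and the budget schedule are assumptions.

```python
def successive_halving(configs, evaluate, budget=8, eta=2):
    """Generic successive halving.

    configs  : iterable of candidate configurations (e.g. a prompting
               pattern plus a choice of few-shot demonstrations).
    evaluate : fn(config, budget) -> score on `budget` task instances.
    Each round keeps the top 1/eta of the pool and multiplies the
    per-config budget by eta, so total evaluation cost stays bounded.
    """
    pool = list(configs)
    while len(pool) > 1:
        scored = sorted(pool, key=lambda c: evaluate(c, budget), reverse=True)
        pool = scored[:max(1, len(pool) // eta)]  # keep the best 1/eta
        budget *= eta                             # spend more on survivors
    return pool[0]
```

Cheap early rounds prune obviously weak pattern/demonstration combinations, so most of the evaluation budget is spent comparing the strongest candidates.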