
A Task Level Case Study

Neural Information Processing Systems

This section illustrates how a model's performance may vary across different tasks associated with a new term. We analyzed the performance of Llama-3-Instruct-70B on the new term "wokely," using the example sentence "The book's cover was described as wokely by several reviewers." The candidate continuations were:

A. it struggled to attract attention on the bookstore displays despite a ...
B. many readers were enticed to buy it, strengthening its presence on ...
C. readers were intrigued and the book's sales experienced an unexpected surge worldwide.
D. the publisher decided to release a limited edition with a special ...

The model was also given a coreference question ("In the previous sentence, does _ refer to ...") and an acceptability question ("Is this example in line with commonsense and grammatically correct?"). As observed, the model answered correctly only in the COMA task and failed the other two. In the COMA task, the model successfully inferred that "wokely" carries a negative connotation, although the phrase "hard to find a satisfying ..." These results provide a comprehensive evaluation of the model's understanding of the term "wokely."


Attention Guided Alignment in Efficient Vision-Language Models

Mahajan, Shweta, Le, Hoang, Park, Hyojin, Farhadzadeh, Farzad, Hayat, Munawar, Porikli, Fatih

arXiv.org Artificial Intelligence

Large Vision-Language Models (VLMs) rely on effective multimodal alignment between pre-trained vision encoders and Large Language Models (LLMs) to integrate visual and textual information. This paper presents a comprehensive analysis of attention patterns in efficient VLMs, revealing that concatenation-based architectures frequently fail to distinguish between semantically matching and non-matching image-text pairs. This failure is a key factor behind object hallucination in these models. To address this, we introduce Attention-Guided Efficient Vision-Language Models (AGE-VLM), a novel framework that enhances visual grounding through interleaved cross-attention layers to instill vision capabilities in pretrained small language models. This enforces in the VLM the ability to "look" at the correct image regions by leveraging spatial knowledge distilled from the Segment Anything Model (SAM), significantly reducing hallucination. We validate our approach across different vision-centric benchmarks where our method is better than or comparable to prior work on efficient VLMs. Our findings provide valuable insights for future research aimed at achieving enhanced visual and linguistic understanding in VLMs.
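The core mechanism the abstract describes, text tokens attending over image patches through a cross-attention layer, can be sketched in NumPy. This is a single-head, unbatched simplification under assumed shapes, not the AGE-VLM implementation; the projection matrices `Wq`, `Wk`, `Wv` are illustrative placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_states, image_feats, Wq, Wk, Wv):
    """Single-head cross-attention: text tokens (queries) attend
    to image patches (keys/values)."""
    Q = text_states @ Wq                       # (T, d) text queries
    K = image_feats @ Wk                       # (P, d) image keys
    V = image_feats @ Wv                       # (P, d) image values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # (T, P) token-to-region scores
    attn = softmax(scores, axis=-1)            # each text token's map over regions
    return attn @ V, attn                      # grounded text states, attention map
```

In this reading, the `attn` map over image regions is the quantity that spatial supervision (e.g. masks distilled from SAM, as the paper describes) would constrain toward the correct regions.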



Can Vision Language Models Infer Human Gaze Direction? A Controlled Study

Zhang, Zory, Feng, Pinyuan, Wang, Bingyang, Zhao, Tianwei, Yu, Suyang, Gao, Qingying, Deng, Hokin, Ma, Ziqiao, Li, Yijiang, Luo, Dezhi

arXiv.org Artificial Intelligence

The ability to infer what others are looking at is a critical component of a theory of mind that underpins natural human-AI interaction. We characterized this skill in 111 Vision Language Models (VLMs) and human participants (N = 65) using photos whose difficulty and variability were systematically manipulated. We found that 94 of the 111 VLMs were not better than random guessing, while humans achieved near-ceiling accuracy. VLMs respond with each choice almost equally frequently. Are they randomly guessing? At least for five top-tier VLMs, their performance was above chance, declined with increasing task difficulty, but barely varied across different prompts and scene objects. These behavioral patterns cannot be explained by considering VLMs as random guessers. Instead, they likely utilize head orientation but not eye appearance to infer gaze direction, such that their performance is imperfect and sensitive to task difficulty, but robust to superficial perceptual variations. This suggests that VLMs, lacking effective gaze inference skills, have yet to become technologies that can naturally interact with humans, but the potential remains.
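The "not better than random guessing" claim above implies a statistical test of each model's accuracy against chance. One conventional way to run such a check is an exact one-sided binomial test, sketched below; the trial counts and chance rate in the usage are illustrative, not the paper's numbers.

```python
from math import comb

def binomial_p_above_chance(successes, trials, chance):
    """One-sided exact binomial p-value: probability of observing at least
    `successes` correct answers in `trials` items under the chance rate."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))
```

A model would be called above chance when this p-value falls below a chosen significance level, e.g. `binomial_p_above_chance(n_correct, n_items, 0.25)` for a hypothetical four-choice task.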


Synthetic Dialogue Generation for Interactive Conversational Elicitation & Recommendation (ICER)

Ryu, Moonkyung, Hsu, Chih-Wei, Chow, Yinlam, Ghavamzadeh, Mohammad, Boutilier, Craig

arXiv.org Artificial Intelligence

While language models (LMs) offer great potential for conversational recommender systems (CRSs), the paucity of public CRS data makes fine-tuning LMs for CRSs challenging. In response, LMs as user simulators qua data generators can be used to train LM-based CRSs, but often lack behavioral consistency, generating utterance sequences inconsistent with those of any real user. To address this, we develop a methodology for generating natural dialogues that are consistent with a user's underlying state using behavior simulators together with LM-prompting. We illustrate our approach by generating a large, open-source CRS data set with both preference elicitation and example critiquing. Rater evaluation on some of these dialogues shows them to exhibit considerable consistency, factuality and naturalness.
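The behavioral-consistency idea above, that every simulated utterance must be a function of one fixed underlying user state, can be sketched minimally. The feature-based preference state and the accept/critique response format are illustrative assumptions, not the paper's simulator.

```python
def user_response(hidden_prefs, item_features):
    """Simulated user turn driven entirely by a fixed hidden preference state.

    Returns ("accept", None) when the item satisfies every preference,
    otherwise ("critique", first_unmet_preference). Because the response is
    a deterministic function of hidden_prefs, a dialogue generated this way
    cannot contradict the user's earlier turns."""
    missing = [f for f in hidden_prefs if f not in item_features]
    if not missing:
        return ("accept", None)
    return ("critique", missing[0])
```

In a full pipeline of the kind described, an LM would then be prompted to verbalize each `("critique", feature)` tuple as a natural-sounding utterance, keeping fluency in the LM while consistency lives in the simulator state.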


Dynamic Chunking and Selection for Reading Comprehension of Ultra-Long Context in Large Language Models

Sheng, Boheng, Yao, Jiacheng, Zhang, Meicong, He, Guoxiu

arXiv.org Artificial Intelligence

Large language models (LLMs) often struggle to accurately read and comprehend extremely long texts. Current methods for improvement typically rely on splitting long contexts into fixed-length chunks. However, fixed truncation risks separating semantically relevant content, leading to ambiguity and compromising accurate understanding. To overcome this limitation, we propose a straightforward approach for dynamically separating and selecting chunks of long context, facilitating a more streamlined input for LLMs. In particular, we compute semantic similarities between adjacent sentences, using lower similarities to adaptively divide long contexts into variable-length chunks. We further train a question-aware classifier to select sensitive chunks that are critical for answering specific questions. Experimental results on both single-hop and multi-hop question-answering benchmarks show that the proposed approach consistently outperforms strong baselines. Notably, it maintains robustness across a wide range of input lengths, handling sequences of up to 256k tokens. Our datasets and code are available at the following link: https://github.com/ECNU-Text-Computing/DCS
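The chunking step described above, splitting wherever adjacent-sentence similarity drops, can be sketched as follows. The bag-of-words cosine is a stand-in for whatever sentence representation the paper uses, and the threshold value is an illustrative assumption.

```python
import math
from collections import Counter

def bow_vector(sentence):
    """Bag-of-words counts (a cheap stand-in for a sentence embedding)."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def dynamic_chunks(sentences, threshold=0.2):
    """Split a long context into variable-length chunks at points where
    adjacent sentences are semantically dissimilar."""
    chunks, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if cosine(bow_vector(prev), bow_vector(cur)) < threshold:
            chunks.append(current)   # similarity drop: close the current chunk
            current = []
        current.append(cur)
    chunks.append(current)
    return chunks
```

A question-aware classifier, as the abstract describes, would then score each resulting chunk for relevance before it is passed to the LLM.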


Prompt Engineering Large Language Models' Forecasting Capabilities

Schoenegger, Philipp, Jones, Cameron R., Tetlock, Philip E., Mellers, Barbara

arXiv.org Artificial Intelligence

Forecasting future events has significant decision-relevance, as having a well-calibrated probabilistic estimation of the risk of a future pandemic, a conflict, or an emerging technology is crucial in making decisions under uncertainty. Current best practices for forecasting rely on aggregating the judgemental forecasts of experienced forecasters (Tetlock & Gardner 2016), a process that is both lengthy and expensive, though it promises to produce valuable input into decision-making processes (Mellers et al. 2019; Tetlock et al. 2014). Recent work has applied frontier large language models (LLMs) to forecasting, testing a variety of research questions, such as whether LLMs are able to match human forecasting performance, what determines their prediction capabilities, and how these capabilities may be increased. For example, previous work looked at retrieval-augmented systems (Halawi et al. 2024), aggregation of multiple models (Schoenegger et al. 2024), ranking-based context retrieval systems (Yan et al. 2024), or applications of reinforcement learning (Turtel et al. 2025b). While many of these approaches have resulted in increased forecasting performance, the current performance of frontier models still trails experienced forecaster aggregates on ForecastBench (Karger et al. 2024). Many such approaches have focused on specific aspects of designing forecasting pipelines, such as effective news aggregation (Wang et al. 2025) or fine-tuning on model self-play output (Turtel et al. 2025).
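Aggregating probabilistic forecasts from multiple models, one of the approaches cited above, can be sketched with a simple linear opinion pool scored by the Brier score. The mean pooling rule and the toy probabilities below are illustrative; the cited work's aggregation schemes may differ.

```python
def mean_pool(forecasts):
    """Linear opinion pool: average the probability each forecaster
    assigns to the event."""
    return sum(forecasts) / len(forecasts)

def brier(prob, outcome):
    """Brier score for a binary event (outcome is 0 or 1);
    0 is perfect, lower is better."""
    return (prob - outcome) ** 2
```

A familiar property of such pooling: when individual forecasts disagree, the pooled forecast cannot be as badly calibrated as the worst individual, e.g. pooling 0.9 and 0.1 on an event that occurs scores 0.25 versus 0.81 for the worse forecaster alone.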


SciCUEval: A Comprehensive Dataset for Evaluating Scientific Context Understanding in Large Language Models

Yu, Jing, Tang, Yuqi, Feng, Kehua, Rao, Mingyang, Liang, Lei, Zhang, Zhiqiang, Sun, Mengshu, Zhang, Wen, Zhang, Qiang, Ding, Keyan, Chen, Huajun

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have shown impressive capabilities in contextual understanding and reasoning. However, evaluating their performance across diverse scientific domains remains underexplored, as existing benchmarks primarily focus on general domains and fail to capture the intricate complexity of scientific data. To bridge this gap, we construct SciCUEval, a comprehensive benchmark dataset tailored to assess the scientific context understanding capability of LLMs. It comprises ten domain-specific sub-datasets spanning biology, chemistry, physics, biomedicine, and materials science, integrating diverse data modalities including structured tables, knowledge graphs, and unstructured texts. SciCUEval systematically evaluates four core competencies: Relevant information identification, Information-absence detection, Multi-source information integration, and Context-aware inference, through a variety of question formats. We conduct extensive evaluations of state-of-the-art LLMs on SciCUEval, providing a fine-grained analysis of their strengths and limitations in scientific context understanding, and offering valuable insights for the future development of scientific-domain LLMs.


Dementia Through Different Eyes: Explainable Modeling of Human and LLM Perceptions for Early Awareness

Peled-Cohen, Lotem, Zadok, Maya, Calderon, Nitay, Gonen, Hila, Reichart, Roi

arXiv.org Artificial Intelligence

Cognitive decline often surfaces in language years before diagnosis. It is frequently non-experts, such as those closest to the patient, who first sense a change and raise concern. As LLMs become integrated into daily communication and used over prolonged periods, it may even be an LLM that notices something is off. But what exactly do they notice--and should be noticing--when making that judgment? This paper investigates how dementia is perceived through language by non-experts. We presented transcribed picture descriptions to non-expert humans and LLMs, asking them to intuitively judge whether each text was produced by someone healthy or with dementia. We introduce an explainable method that uses LLMs to extract high-level, expert-guided features representing these picture descriptions, and use logistic regression to model human and LLM perceptions and compare with clinical diagnoses. Our analysis reveals that human perception of dementia is inconsistent and relies on a narrow, and sometimes misleading, set of cues. LLMs, by contrast, draw on a richer, more nuanced feature set that aligns more closely with clinical patterns. Still, both groups show a tendency toward false negatives, frequently overlooking dementia cases. Through our interpretable framework and the insights it provides, we hope to help non-experts better recognize the linguistic signs that matter.
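The modeling step above, fitting a logistic regression over expert-guided features to make judgments interpretable, can be sketched in plain Python. The gradient-descent trainer and binary feature encoding below are illustrative assumptions; the paper's feature extraction is done by LLMs and its fitting procedure is not specified here.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain stochastic-gradient logistic regression.

    X: list of feature vectors (e.g. presence of expert-guided cues),
    y: 0/1 labels (e.g. perceived healthy vs. dementia).
    Returns (weights, bias); the weights are the interpretable part,
    showing which cues drive the judgment."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # predicted probability
            err = p - yi                          # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b
```

Fitting one such model to human judgments and another to LLM judgments, then comparing the learned weights, is the kind of analysis that would reveal which cues each group relies on.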


Memory-augmented Query Reconstruction for LLM-based Knowledge Graph Reasoning

Xu, Mufan, Liang, Gewen, Chen, Kehai, Wang, Wei, Zhou, Xun, Yang, Muyun, Zhao, Tiejun, Zhang, Min

arXiv.org Artificial Intelligence

Large language models (LLMs) have achieved remarkable performance on knowledge graph question answering (KGQA) tasks by planning and interacting with knowledge graphs. However, existing methods often confuse tool utilization with knowledge reasoning, harming the readability of model outputs and giving rise to hallucinatory tool invocations, which hinder the advancement of KGQA. To address this issue, we propose Memory-augmented Query Reconstruction for LLM-based Knowledge Graph Reasoning (MemQ), which decouples the LLM from tool invocation tasks using an LLM-built query memory. By establishing a memory module with explicit descriptions of query statements, the proposed MemQ facilitates the KGQA process with natural language reasoning and memory-augmented query reconstruction. Meanwhile, we design an effective and readable reasoning strategy to enhance the LLM's reasoning capability in KGQA. Experimental results show that MemQ achieves state-of-the-art performance on the widely used benchmarks WebQSP and CWQ.
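The query-memory idea described above, recalling stored query statements by their natural-language descriptions instead of having the LLM emit tool calls directly, might be sketched as a description-keyed lookup. The word-overlap retrieval and the graph query strings below are illustrative stand-ins, not the MemQ implementation.

```python
class QueryMemory:
    """Memory module mapping natural-language descriptions to
    reusable query statements."""

    def __init__(self):
        self.entries = []  # list of (description, query_statement)

    def add(self, description, query):
        self.entries.append((description, query))

    def recall(self, reasoning_step):
        """Return the stored query whose description best overlaps the
        current natural-language reasoning step (toy retrieval)."""
        words = set(reasoning_step.lower().split())
        best = max(self.entries,
                   key=lambda e: len(words & set(e[0].lower().split())))
        return best[1]
```

Under this decoupling, the LLM only produces reasoning steps in natural language; query reconstruction happens by recall from memory, so the model never has to hallucinate a tool invocation.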