Apprentice



Chain-of-Thought Hijacking

Zhao, Jianli, Fu, Tingchen, Schaeffer, Rylan, Sharma, Mrinank, Barez, Fazl

arXiv.org Artificial Intelligence

Large reasoning models (LRMs) achieve higher task performance with more inference-time computation, and prior work suggests that this scaled reasoning may also strengthen safety by improving refusal. Yet we find the opposite: the same reasoning can be used to bypass safeguards. We introduce Chain-of-Thought Hijacking, a jailbreak attack on reasoning models. The attack pads harmful requests with long sequences of harmless puzzle reasoning. On HarmBench, CoT Hijacking reaches attack success rates (ASR) of 99%, 94%, 100%, and 94% on Gemini 2.5 Pro, GPT o4 mini, Grok 3 mini, and Claude 4 Sonnet, respectively, far exceeding prior jailbreak methods for LRMs. To understand the effectiveness of our attack, we turn to a mechanistic analysis, which shows that mid layers encode the strength of safety checking, while late layers encode the verification outcome. Long benign CoT dilutes both signals by shifting attention away from harmful tokens. Targeted ablations of attention heads identified by this analysis causally decrease refusal, confirming their role in a safety subnetwork. These results show that the most interpretable form of reasoning, explicit CoT, can itself become a jailbreak vector when combined with final-answer cues. We release prompts, outputs, and judge decisions to facilitate replication.


Leveraging Vision-Centric Multi-Modal Expertise for 3D Object Detection

Neural Information Processing Systems

Current research is primarily dedicated to advancing the accuracy of camera-only 3D object detectors (apprentice) through knowledge transferred from LiDAR- or multi-modal-based counterparts (expert). However, the domain gap between LiDAR and camera features, coupled with the inherent incompatibility in temporal fusion, significantly hinders the effectiveness of distillation-based enhancements for apprentices. Motivated by the success of uni-modal distillation, we argue that an apprentice-friendly expert model should rely predominantly on camera features while still achieving performance comparable to multi-modal models. To this end, we introduce VCD, a framework to improve the camera-only apprentice model, including an apprentice-friendly multi-modal expert and temporal-fusion-friendly distillation supervision. The multi-modal expert VCD-E adopts a structure identical to that of the camera-only apprentice in order to alleviate the feature disparity, and leverages LiDAR input as a depth prior to reconstruct the 3D scene, achieving performance on par with other heterogeneous multi-modal experts. Additionally, a fine-grained, trajectory-based distillation module is introduced to individually rectify the motion misalignment for each object in the scene.
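
To make the distillation idea concrete, the sketch below shows a generic expert-to-apprentice feature distillation loss computed over object regions in bird's-eye-view feature maps. The tensor shapes, the masking scheme, and the function name `feature_distill_loss` are illustrative assumptions, not VCD's actual trajectory-based module.

```python
# Hypothetical sketch of expert-to-apprentice feature distillation: the
# apprentice's BEV features are pulled toward the expert's, with the loss
# focused on object regions. Shapes and masking are illustrative assumptions.
import torch

def feature_distill_loss(apprentice_bev, expert_bev, object_masks):
    """apprentice_bev, expert_bev: (B, C, H, W) BEV feature maps;
    object_masks: (B, N, H, W) binary masks, one per object."""
    diff = (apprentice_bev - expert_bev.detach()) ** 2           # expert is frozen
    per_pixel = diff.mean(dim=1)                                 # (B, H, W)
    obj_weight = object_masks.float().sum(dim=1).clamp(max=1.0)  # union of object regions
    denom = obj_weight.sum().clamp(min=1.0)                      # number of masked pixels
    return (per_pixel * obj_weight).sum() / denom

# Example shapes: batch of 2, 64 channels, 128x128 BEV grid, 5 objects.
loss = feature_distill_loss(torch.rand(2, 64, 128, 128),
                            torch.rand(2, 64, 128, 128),
                            torch.randint(0, 2, (2, 5, 128, 128)))
```

In a full pipeline, a term of this kind would typically be added to the apprentice's detection loss while the expert's weights stay frozen.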


BoxingGym: Benchmarking Progress in Automated Experimental Design and Model Discovery

Gandhi, Kanishk, Li, Michael Y., Goodyear, Lyle, Li, Louise, Bhaskar, Aditi, Zaman, Mohammed, Goodman, Noah D.

arXiv.org Artificial Intelligence

Understanding the world and explaining it with scientific theories is a central aspiration of artificial intelligence research. Proposing theories, designing experiments to test them, and then revising them based on data are fundamental to scientific discovery. Despite the significant promise of LLM-based scientific agents, no benchmark systematically tests LLMs' ability to propose scientific models, collect experimental data, and revise those models in light of new data. We introduce BoxingGym, a benchmark with 10 environments for systematically evaluating both experimental design (e.g. collecting data to test a scientific theory) and model discovery (e.g. proposing and revising scientific theories). To enable tractable and quantitative evaluation, we implement each environment as a generative probabilistic model with which a scientific agent can run interactive experiments. These probabilistic models are drawn from real-world scientific domains ranging from psychology to ecology. To quantitatively evaluate a scientific agent's ability to collect informative experimental data, we compute the expected information gain (EIG), an information-theoretic quantity that measures how much an experiment reduces uncertainty about the parameters of a generative model. A good scientific theory is a concise and predictive explanation. Therefore, to quantitatively evaluate model discovery, we ask a scientific agent to explain its model and then assess whether this explanation enables another scientific agent to make reliable predictions about the environment. In addition to this explanation-based evaluation, we compute standard model evaluation metrics such as prediction error. We find that current LLMs, such as GPT-4o, struggle with both experimental design and model discovery, and that augmenting the LLM-based agent with an explicit statistical model does not reliably improve these results.
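
To illustrate the EIG criterion, the sketch below estimates expected information gain with a nested Monte Carlo estimator for a toy Bernoulli environment with a logistic response curve. The model, prior, and sample sizes are assumptions made for illustration; this is not BoxingGym's implementation.

```python
# Hypothetical sketch: nested Monte Carlo estimate of expected information
# gain (EIG) for a toy one-parameter Bernoulli environment.
import numpy as np

rng = np.random.default_rng(0)

def likelihood(y, theta, design):
    """p(y | theta, design) under a logistic response curve."""
    p = 1.0 / (1.0 + np.exp(-(theta - design)))
    return p ** y * (1.0 - p) ** (1 - y)

def eig(design, n_outer=2000, n_inner=2000):
    """EIG(design) = I(theta; y | design), estimated by nested Monte Carlo."""
    theta_outer = rng.normal(0.0, 1.0, size=n_outer)          # theta ~ prior
    p = 1.0 / (1.0 + np.exp(-(theta_outer - design)))
    y = rng.binomial(1, p)                                     # y ~ p(y | theta, design)
    log_cond = np.log(likelihood(y, theta_outer, design) + 1e-12)
    theta_inner = rng.normal(0.0, 1.0, size=n_inner)           # fresh prior samples
    marg = likelihood(y[:, None], theta_inner[None, :], design).mean(axis=1)
    log_marg = np.log(marg + 1e-12)
    return float(np.mean(log_cond - log_marg))

# Pick the design (e.g. stimulus level) with the highest estimated EIG.
candidate_designs = np.linspace(-3, 3, 13)
best = max(candidate_designs, key=eig)
print("most informative design:", best)
```

The design with the largest estimate is the one expected to reduce posterior uncertainty about the parameter the most; BoxingGym's environments are richer, but the quantity being computed is the same.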


Reviews: Thinking Fast and Slow with Deep Learning and Tree Search

Neural Information Processing Systems

SUMMARY: The paper proposes an algorithm that combines imitation learning with tree search, which results in an apprentice learning from an ever-improving expert. A DQN is trained to learn a policy derived from an MCTS agent, with the DQN providing generalisation to unseen states. The DQN is then also used as feedback to improve the expert, which can in turn be used to retrain the DQN, and so on. The paper makes two contributions: (1) a new target for imitation learning, which is empirically shown to outperform a previously suggested target and which results in the apprentice learning a policy of equal strength to the expert. COMMENTS: I found the paper generally well-written, clear and easy to follow, barring Section 6.
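
As a rough illustration of the imitation component summarised above, the sketch below performs one apprentice update toward a distribution derived from MCTS visit counts. This is a generic ExIt-style step under assumed tensor shapes, not the paper's specific target or training loop.

```python
# Hypothetical sketch: one apprentice update that imitates an expert's
# search policy, using normalised MCTS visit counts as the target.
import torch.nn.functional as F

def imitation_step(policy_net, optimizer, states, visit_counts):
    """states: (B, obs_dim); visit_counts: (B, n_actions) from the MCTS expert."""
    target = visit_counts / visit_counts.sum(dim=-1, keepdim=True)  # tree-policy-style target
    logits = policy_net(states)
    loss = F.kl_div(F.log_softmax(logits, dim=-1), target, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```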



Yuval Noah Harari's Apocalyptic Vision

The Atlantic - Technology

"About 14 billion years ago, matter, energy, time and space came into being." So begins Sapiens: A Brief History of Humankind (2011), by the Israeli historian Yuval Noah Harari, and so began one of the 21st century's most astonishing academic careers. Sapiens has sold more than 25 million copies in various languages. Since then, Harari has published several other books, which have also sold millions. He now employs some 15 people to organize his affairs and promote his ideas. Harari might be, after the Dalai Lama, the figure of global renown who is least online.


Apprentices to Research Assistants: Advancing Research with Large Language Models

Namvarpour, M., Razi, A.

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have emerged as powerful tools in various research domains. This article examines their potential through a literature review and firsthand experimentation. While LLMs offer benefits like cost-effectiveness and efficiency, challenges such as prompt tuning, biases, and subjectivity must be addressed. The study presents insights from experiments utilizing LLMs for qualitative analysis, highlighting successes and limitations. Additionally, it discusses strategies for mitigating challenges, such as prompt optimization techniques and leveraging human expertise. This study aligns with the 'LLMs as Research Tools' workshop's focus on integrating LLMs into HCI data work critically and ethically. By addressing both opportunities and challenges, our work contributes to the ongoing dialogue on their responsible application in research.


BRExIt: On Opponent Modelling in Expert Iteration

Hernandez, Daniel, Baier, Hendrik, Kaisers, Michael

arXiv.org Artificial Intelligence

Finding a best-response policy is a central objective in game theory and multi-agent learning, with modern population-based training approaches employing reinforcement learning algorithms as best-response oracles to improve play against candidate opponents (typically previously learnt policies). We propose Best Response Expert Iteration (BRExIt), which accelerates learning in games by incorporating opponent models into the state-of-the-art learning algorithm Expert Iteration (ExIt). BRExIt aims to (1) improve feature shaping in the apprentice, with a policy head predicting opponent policies as an auxiliary task, and (2) bias opponent moves in planning towards the given or learnt opponent model, to generate apprentice targets that better approximate a best response. In an empirical ablation of BRExIt's algorithmic variants against a set of fixed test agents, we provide statistical evidence that BRExIt learns better-performing policies than ExIt.
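
A minimal sketch of point (1), an apprentice network with an auxiliary head that predicts the opponent's policy, is given below. The layer sizes, loss weighting, and target construction are illustrative assumptions rather than the authors' architecture.

```python
# Hypothetical sketch: apprentice network with policy, value, and auxiliary
# opponent-policy heads, trained with a combined loss. Sizes and the loss
# weight are illustrative assumptions.
import torch.nn as nn
import torch.nn.functional as F

class ApprenticeNet(nn.Module):
    def __init__(self, obs_dim=64, n_actions=10, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)      # apprentice policy
        self.value_head = nn.Linear(hidden, 1)                # state value
        self.opponent_head = nn.Linear(hidden, n_actions)     # auxiliary: opponent policy

    def forward(self, obs):
        h = self.trunk(obs)
        return self.policy_head(h), self.value_head(h).squeeze(-1), self.opponent_head(h)

def brexit_style_loss(net, obs, search_policy, value_target, opponent_policy, aux_weight=0.5):
    """Cross-entropy to the search-derived target, value regression, plus an
    auxiliary cross-entropy to the (given or learnt) opponent policy."""
    logits, value, opp_logits = net(obs)
    policy_loss = -(search_policy * F.log_softmax(logits, dim=-1)).sum(-1).mean()
    value_loss = F.mse_loss(value, value_target)
    opponent_loss = -(opponent_policy * F.log_softmax(opp_logits, dim=-1)).sum(-1).mean()
    return policy_loss + value_loss + aux_weight * opponent_loss
```

As the abstract describes, the opponent target could come from a given opponent model or be learnt online; the auxiliary term is only there to shape the shared features.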


Less Data, More Knowledge: Building Next Generation Semantic Communication Networks

Chaccour, Christina, Saad, Walid, Debbah, Merouane, Han, Zhu, Poor, H. Vincent

arXiv.org Artificial Intelligence

Semantic communication is viewed as a revolutionary paradigm that can potentially transform how we design and operate wireless communication systems. However, despite a recent surge of research activity in this area, the research landscape remains limited. In this tutorial, we present the first rigorous vision of a scalable end-to-end semantic communication network that is founded on novel concepts from artificial intelligence (AI), causal reasoning, and communication theory. We first discuss how the design of semantic communication networks requires a move from data-driven networks towards knowledge-driven ones. Subsequently, we highlight the necessity of creating semantic representations of data that satisfy the key properties of minimalism, generalizability, and efficiency so as to do more with less. We then explain how those representations can form the basis of a so-called semantic language. By using semantic representations and languages, we show that the traditional transmitter and receiver become a teacher and an apprentice. We then define the concept of reasoning by investigating the fundamentals of causal representation learning and their role in designing semantic communication networks. We demonstrate that reasoning faculties are chiefly characterized by the ability to capture causal and associational relationships in data streams. For such reasoning-driven networks, we propose novel and essential semantic communication metrics, including new "reasoning capacity" measures that could go beyond Shannon's bound to capture the convergence of computing and communication. Finally, we explain how semantic communications can be scaled to large-scale networks (6G and beyond). In a nutshell, we expect this tutorial to provide a comprehensive reference on how to properly build, analyze, and deploy future semantic communication networks.