The tiny tuxedo cat who became a naval hero

Popular Science

A 17-year-old British sailor saved Simon from the Hong Kong docks when the cat was likely a year old. One day in March of 1948, George Hickinbottom, a British sailor, was walking around the docks of Stonecutters Island in Hong Kong. When the 17-year-old spotted a small black-and-white tuxedo cat, barely out of kittenhood, he decided to smuggle the hungry, scrawny animal aboard his ship. Hickinbottom didn't get in trouble.


A Training Examples

Neural Information Processing Systems

Market research indicates that there is a significant opportunity for a new coffee bar located in the heart of the downtown business district.


VICoT-Agent: A Vision-Interleaved Chain-of-Thought Framework for Interpretable Multimodal Reasoning and Scalable Remote Sensing Analysis

Wang, Chujie, Luo, Zhiyuan, Liu, Ruiqi, Ran, Can, Fan, Shenghua, Chen, Xi, He, Chu

arXiv.org Artificial Intelligence

Remote sensing image analysis is increasingly evolving from traditional object recognition to complex intelligence reasoning, which places higher demands on a model's reasoning ability and the flexibility of its tool invocation. To this end, we propose a new multimodal agent framework, the Vision-Interleaved Chain-of-Thought Framework (VICoT), which implements explicit multi-round reasoning by dynamically incorporating visual tools into the chain of thought. Through a stack-based reasoning structure and a modular MCP-compatible tool suite, VICoT enables LLMs to efficiently perform multi-round, interleaved vision-language reasoning tasks with strong generalization and flexibility. We also propose the Reasoning Stack distillation method to migrate complex agent behaviors to small, lightweight models, which preserves reasoning capability while significantly reducing complexity. Experiments on multiple remote sensing benchmarks demonstrate that VICoT significantly outperforms existing SOTA frameworks in reasoning transparency, execution efficiency, and generation quality.


Question Asking as Program Generation

Anselm Rothe, Brenden M. Lake, Todd Gureckis

Neural Information Processing Systems

A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing humanlike questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions.


Requirements for Aligned, Dynamic Resolution of Conflicts in Operational Constraints

Jones, Steven J., Wray, Robert E., Laird, John E.

arXiv.org Artificial Intelligence

Deployed, autonomous AI systems must often evaluate multiple plausible courses of action (extended sequences of behavior) in novel or under-specified contexts. Despite extensive training, these systems will inevitably encounter scenarios where no available course of action fully satisfies all operational constraints (e.g., operating procedures, rules, laws, norms, and goals). To achieve goals in accordance with human expectations and values, agents must go beyond their trained policies and instead construct, evaluate, and justify candidate courses of action. These processes require contextual "knowledge" that may lie outside prior (policy) training. This paper characterizes requirements for agent decision making in these contexts. It also identifies the types of knowledge agents require to make decisions robust to agent goals and aligned with human expectations. Drawing on both analysis and empirical case studies, we examine how agents need to integrate normative, pragmatic, and situational understanding to select and then to pursue more aligned courses of action in complex, real-world environments.



A Omitted Proofs

Neural Information Processing Systems

In this section we include all the proofs deferred from the main body. Before we proceed with the proof of Theorem 2.3, let us point out that the theorem follows directly from (6). Corollary 3.1. Suppose that both players employ (OMD) with learning rate η > 0. Then, recalling Definition 2.5, the claim follows from (12) and (13). The sheriff's decision is binding only in the last round. After the placement, players take turns "firing" at their opponent's ships; the game proceeds until one player has sunk all of the opponent's ships. At the end of the game, each player receives a payoff; the latter modification makes the game general-sum and incentivizes players to be more risk-averse.



How the Witch of November doomed the 'Edmund Fitzgerald'

Popular Science

Fifty years after the Great Lakes freighter sank, scientists can explain the weather that still haunts Lake Superior. When the SS Edmund Fitzgerald left port on November 9, 1975, there was no way for the crew to know what they were sailing into. On the afternoon of that day, when the freighter set out on its 746-mile run from Superior, Wisconsin, to Detroit, Michigan, Lake Superior was mostly calm. Even so, the crew likely saw the red sky from the intensifying storm gathering over the Great Plains.


If you could upload your mind to a virtual utopia, would you?

New Scientist

In the story, the characters face an impossible choice: upload your mind into a virtual utopia, or crumble away in the abandoned physical world. Mind-uploading is familiar to us as a science fiction trope, often anchoring relationship dramas and philosophical inquiry. But what does it really mean to upload your consciousness into intangible space? Can the mechanics be extrapolated from our present-day science?