river


This new RSS reader is the smartest way to keep up online

Popular Science

Current takes a minimal approach to RSS. The RSS (Really Simple Syndication) protocol has been giving users a way to keep up with their favorite websites for decades. It essentially presents all the new articles on a specific site in chronological order as they're published, so you can read through or skip over them as you like. It's also, by the way, the main way that podcast feeds are published, but it was originally designed to manage web feeds.
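At the protocol level an RSS feed is just XML. A minimal sketch of pulling items out of a feed with Python's standard library (the sample feed and field handling are simplified; a real reader like Current would also deal with Atom, encodings, and network fetching):

```python
import xml.etree.ElementTree as ET

# A tiny hand-written RSS 2.0 document standing in for a fetched feed.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item>
      <title>Newest post</title>
      <link>https://example.com/2</link>
      <pubDate>Tue, 02 Jan 2024 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Older post</title>
      <link>https://example.com/1</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def parse_items(rss_xml: str):
    """Return (title, link, pubDate) tuples in document order,
    which RSS publishers conventionally keep newest-first."""
    root = ET.fromstring(rss_xml)
    return [
        (item.findtext("title"), item.findtext("link"), item.findtext("pubDate"))
        for item in root.iter("item")
    ]

for title, link, _ in parse_items(SAMPLE_RSS):
    print(f"{title}: {link}")
```

A reader built on this would merge several such feeds and sort by `pubDate` to get the single chronological stream described above.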


L.A. City Council candidate stays in race after report that he stabbed a boy at age 12

Los Angeles Times

When he was 12, Jordan Rivers stabbed an 8-year-old neighbor while the two were playing video games, a lawsuit alleged. Rivers, 22, is the sole challenger to incumbent Tim McOsker in the June 2 primary election.


Relaxing Local Robustness

Neural Information Processing Systems

Certifiable local robustness, which rigorously precludes small-norm adversarial examples, has received significant attention as a means of addressing security concerns in deep learning.
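The property being relaxed can be stated precisely. A standard formulation of local robustness at a point (conventional notation, not necessarily the paper's exact definition): a classifier $f$ is $\epsilon$-locally robust at input $x$ when

```latex
% f is \epsilon-locally robust at x if no perturbation of norm at most
% \epsilon changes the prediction:
\forall x' :\ \|x' - x\| \le \epsilon \ \Longrightarrow\ f(x') = f(x)
```

Certification means proving this implication holds for a given $x$ and $\epsilon$; a small-norm adversarial example is exactly a witness $x'$ that violates it.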


Watch: Fishing on a frozen river for respite from the war in Ukraine

BBC News

Kyiv is many miles from the front line, but Ukraine's war with Russia is never far away - with Moscow's missile and drone attacks directed at the city almost every day. On the frozen surface of the mighty River Dnipro, the BBC speaks to men who spend hours fishing to take their minds off the almost four-year-old conflict, which has left homes with no heating after Russian strikes on power stations. Drilling holes in the ice of the river in the heart of the city, these ice-fishermen - many of them veterans with friends and family at the front - hope to catch small fish, and a little respite.


800 ancient Roman blade sharpeners found in Britain

Popular Science

Archaeologists also located English Civil War cannonballs and a Tudor-era shoe near a Newcastle river. At the height of its power, the Roman Empire extended as far as Britain. A new trove of archaeological artifacts discovered in northeast England suggests Britain hosted critical sites that supplied the empire's vast military complex. Over six months in 2025, researchers from the United Kingdom's Durham University excavated the new evidence on the banks of the River Wear not far from Newcastle, England.


Mosasaurs may have terrorized rivers as well as oceans

Popular Science

The Late Cretaceous apex predator easily grew to the size of a great white shark. Nearly 70 million years ago, mosasaurs were the stuff of nightmares. Multiple species of the apex marine reptiles lived during the Late Cretaceous, often growing to between 30 and 40 feet long. But as dangerous as the ancient, great-white-shark-sized reptiles were for their prehistoric ocean prey, paleontologists have long assumed mosasaurs stuck to saltwater.


Multigranular Evaluation for Brain Visual Decoding

Xia, Weihao, Oztireli, Cengiz

arXiv.org Artificial Intelligence

Existing evaluation protocols for brain visual decoding predominantly rely on coarse metrics that obscure inter-model differences, lack neuroscientific foundation, and fail to capture fine-grained visual distinctions. To address these limitations, we introduce BASIC, a unified, multigranular evaluation framework that jointly quantifies structural fidelity, inferential alignment, and contextual coherence between decoded and ground-truth images. For the structural level, we introduce a hierarchical suite of segmentation-based metrics, including foreground, semantic, instance, and component masks, anchored in granularity-aware correspondence across mask structures. For the semantic level, we extract structured scene representations encompassing objects, attributes, and relationships using multimodal large language models, enabling detailed, scalable, and context-rich comparisons with ground-truth stimuli. We benchmark a diverse set of visual decoding methods across multiple stimulus-neuroimaging datasets within this unified evaluation framework. Together, these criteria provide a more discriminative, interpretable, and comprehensive foundation for evaluating brain visual decoding methods.
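The mask-based structural metrics ultimately reduce to overlap scores between decoded and ground-truth masks at each granularity. A toy sketch using plain intersection-over-union on binary masks (illustrative only; the framework's granularity-aware correspondence across mask structures is more involved):

```python
def mask_iou(pred, gt):
    """Intersection-over-union between two binary masks (nested lists of 0/1)."""
    inter = union = 0
    for prow, grow in zip(pred, gt):
        for p, g in zip(prow, grow):
            inter += 1 if (p and g) else 0
            union += 1 if (p or g) else 0
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0

# Ground-truth foreground mask (a 2x2 square) vs. a shifted, wider prediction.
gt   = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
pred = [[0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
score = mask_iou(pred, gt)  # 4 overlapping cells / 6 cells in the union
```

Running the same comparison over foreground, semantic, instance, and component masks is what gives the structural level its hierarchy: a decoder can score well on coarse foreground overlap while failing at the instance or component granularity.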


Self-Attention as Distributional Projection: A Unified Interpretation of Transformer Architecture

Mehta, Nihal

arXiv.org Artificial Intelligence

This paper presents a mathematical interpretation of self-attention by connecting it to distributional semantics principles. We show that self-attention emerges from projecting corpus-level co-occurrence statistics into sequence context. Starting from the co-occurrence matrix underlying GloVe embeddings, we demonstrate how the projection naturally captures contextual influence, with the query-key-value mechanism arising as the natural asymmetric extension for modeling directional relationships. Positional encodings and multi-head attention then follow as structured refinements of this same projection principle. Our analysis demonstrates that the Transformer architecture's particular algebraic form follows from these projection principles rather than being an arbitrary design choice.
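The projection view can be caricatured numerically: starting from token vectors one might obtain by factoring a co-occurrence matrix (as GloVe does), attention is a similarity-weighted mixing of context vectors. With no learned projections (Q = K = V = X) the weighting is symmetric; distinct query and key maps would supply the directional asymmetry the paper discusses. A toy sketch, not the paper's construction:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Single-head scaled dot-product attention on plain lists of vectors."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)  # each output row is a convex combination of V
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

# Toy token vectors, as might come from a factored co-occurrence matrix:
# tokens 0 and 1 are distributionally similar, token 2 is not.
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
ctx = attention(X, X, X)  # symmetric, similarity-weighted context mixing
```

Each output vector is pulled toward the tokens it co-occurs with most: here `ctx[0]` stays dominated by the first dimension while `ctx[2]` does not.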


What Makes Looped Transformers Perform Better Than Non-Recursive Ones (Provably)

Gong, Zixuan, Teng, Jiaye, Liu, Yong

arXiv.org Machine Learning

While looped transformers (termed Looped-Attn) often outperform standard transformers (termed Single-Attn) on complex reasoning tasks, the theoretical basis for this advantage remains underexplored. In this paper, we explain this phenomenon through the lens of loss landscape geometry, inspired by empirical observations of their distinct dynamics at both sample and Hessian levels. To formalize this, we extend the River-Valley landscape model by distinguishing between U-shaped valleys (flat) and V-shaped valleys (steep). Based on empirical observations, we conjecture that the recursive architecture of Looped-Attn induces a landscape-level inductive bias towards River-V-Valley. Theoretical derivations based on this inductive bias guarantee a better loss convergence along the river due to valley hopping, and further encourage learning about complex patterns compared to the River-U-Valley induced by Single-Attn. Building on this insight, we propose SHIFT (Staged HIerarchical Framework for Progressive Training), a staged training framework that accelerates the training process of Looped-Attn while achieving comparable performance.
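Architecturally, the recursion amounts to weight tying across depth: Looped-Attn reapplies one parameterized block several times, while Single-Attn stacks independently parameterized blocks. A scalar toy schematic of that difference (the `block` here is a stand-in residual update, not a real transformer layer):

```python
def block(x, w, b):
    """Toy 1-D 'transformer block': affine map + ReLU + residual connection."""
    return x + max(0.0, w * x + b)

def single_pass(x, params):
    # Non-recursive network: each depth has its own (w, b) parameters.
    for w, b in params:
        x = block(x, w, b)
    return x

def looped(x, w, b, loops):
    # Looped network: one parameter set, reapplied `loops` times.
    for _ in range(loops):
        x = block(x, w, b)
    return x
```

With tied parameters the two coincide (`looped(x, w, b, L) == single_pass(x, [(w, b)] * L)`); the paper's claim is that this tying changes the loss landscape the recursion is trained on, not the function class at a single depth.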


Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)

Jiang, Liwei, Chai, Yuanjun, Li, Margaret, Liu, Mickel, Fok, Raymond, Dziri, Nouha, Tsvetkov, Yulia, Sap, Maarten, Albalak, Alon, Choi, Yejin

arXiv.org Artificial Intelligence

Language models (LMs) often struggle to generate diverse, human-like creative content, raising concerns about the long-term homogenization of human thought through repeated exposure to similar outputs. Yet scalable methods for evaluating LM output diversity remain limited, especially beyond narrow tasks such as random number or name generation, or beyond repeated sampling from a single model. We introduce Infinity-Chat, a large-scale dataset of 26K diverse, real-world, open-ended user queries that admit a wide range of plausible answers with no single ground truth. We introduce the first comprehensive taxonomy for characterizing the full spectrum of open-ended prompts posed to LMs, comprising 6 top-level categories (e.g., brainstorm & ideation) that further break down into 17 subcategories. Using Infinity-Chat, we present a large-scale study of mode collapse in LMs, revealing a pronounced Artificial Hivemind effect in open-ended generation of LMs, characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and more so (2) inter-model homogeneity, where different models produce strikingly similar outputs. Infinity-Chat also includes 31,250 human annotations, across absolute ratings and pairwise preferences, with 25 independent human annotations per example. This enables studying collective and individual-specific human preferences in response to open-ended queries. Our findings show that LMs, reward models, and LM judges are less well calibrated to human ratings on model generations that elicit differing idiosyncratic annotator preferences, despite maintaining comparable overall quality. Overall, Infinity-Chat presents the first large-scale resource for systematically studying real-world open-ended queries to LMs, revealing critical insights to guide future research for mitigating long-term AI safety risks posed by the Artificial Hivemind.
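The intra- and inter-model homogeneity described can be probed with even a crude lexical overlap measure. A sketch using Jaccard similarity over word sets (an illustrative proxy; diversity studies typically use embedding- or n-gram-based metrics):

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two responses (0 = disjoint, 1 = same word set)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def mean_pairwise_similarity(responses):
    """Average Jaccard over all unordered pairs; higher = more homogeneous."""
    n = len(responses)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(jaccard(responses[i], responses[j]) for i, j in pairs) / len(pairs)

# Hypothetical outputs from different models for one open-ended prompt.
outputs = [
    "a quiet walk along the river at dawn",
    "a quiet walk by the river at dawn",
    "baking bread with unusual flours",
]
score = mean_pairwise_similarity(outputs)
```

Applied across models for the same prompt, a high score is the inter-model homogeneity signal; applied across repeated samples from one model, it captures intra-model repetition.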