AI use in American newspapers is widespread, uneven, and rarely disclosed

Russell, Jenna, Karpinska, Marzena, Akinode, Destiny, Thai, Katherine, Emi, Bradley, Spero, Max, Iyyer, Mohit

arXiv.org Artificial Intelligence

AI is rapidly transforming journalism, but the extent of its use in published newspaper articles remains unclear. We address this gap by auditing a large-scale dataset of 186K articles from the online editions of 1.5K American newspapers published in the summer of 2025. Using Pangram, a state-of-the-art AI detector, we discover that approximately 9% of newly published articles are either partially or fully AI-generated. This AI use is unevenly distributed, appearing more frequently in smaller, local outlets, in specific topics such as weather and technology, and within certain ownership groups. We also analyze 45K opinion pieces from the Washington Post, the New York Times, and the Wall Street Journal, finding that they are 6.4 times more likely to contain AI-generated content than news articles from the same publications, with many AI-flagged op-eds authored by prominent public figures. Despite this prevalence, we find that AI use is rarely disclosed: a manual audit of 100 AI-flagged articles found only five disclosures of AI use. Overall, our audit highlights the immediate need for greater transparency and updated editorial standards regarding the use of AI in journalism to maintain public trust.


Bridging Idealized and Operational Models: An Explainable AI Framework for Earth System Emulators

Behnoudfar, Pouria, Moser, Charlotte, Bocquet, Marc, Cheng, Sibo, Chen, Nan

arXiv.org Artificial Intelligence

Computer models are indispensable tools for understanding the Earth system. While high-resolution operational models have achieved many successes, they exhibit persistent biases, particularly in simulating extreme events and statistical distributions. In contrast, coarse-grained idealized models isolate fundamental processes and can be precisely calibrated to excel in characterizing specific dynamical and statistical features. However, different models remain siloed by disciplinary boundaries. By leveraging the complementary strengths of models of varying complexity, we develop an explainable AI framework for Earth system emulators. It bridges the model hierarchy through a reconfigured latent data assimilation technique, uniquely suited to exploit the sparse output from the idealized models. The resulting bridging model inherits the high resolution and comprehensive variables of operational models while achieving global accuracy enhancements through targeted improvements from idealized models. Crucially, the AI mechanism provides a clear rationale for these advancements, moving beyond black-box correction to physically insightful understanding in a computationally efficient framework that enables effective physics-assisted digital twins and uncertainty quantification. We demonstrate its power by significantly correcting biases in CMIP6 simulations of El Niño spatiotemporal patterns, leveraging statistically accurate idealized models. This work also highlights the importance of pushing idealized model development and advancing communication between modeling communities.


Data-Driven Discovery and Formulation Refines the Quasi-Steady Model of Flapping-Wing Aerodynamics

Kamimizu, Yu, Liu, Hao, Nakata, Toshiyuki

arXiv.org Artificial Intelligence

Insects control unsteady aerodynamic forces on flapping wings to navigate complex environments. While understanding these forces is vital for biology, physics, and engineering, existing evaluation methods face trade-offs: high-fidelity simulations are computationally or experimentally expensive and lack explanatory power, whereas theoretical models based on quasi-steady assumptions offer insights but exhibit low accuracy. To overcome these limitations and thus enhance the accuracy of quasi-steady aerodynamic models, we applied a data-driven approach involving discovery and formulation of previously overlooked critical mechanisms. Through selection from 5,000 candidate kinematic functions, we identified mathematical expressions for three key additional mechanisms -- the effect of advance ratio, effect of spanwise kinematic velocity, and rotational Wagner effect -- which had been qualitatively recognized but were not formulated. Incorporating these mechanisms considerably reduced the prediction errors of the quasi-steady model using the computational fluid dynamics results as the ground truth, both in hawkmoth forward flight (at high Reynolds numbers) and fruit fly maneuvers (at low Reynolds numbers). The data-driven quasi-steady model enables rapid aerodynamic analysis, serving as a practical tool for understanding evolutionary adaptations in insect flight and developing bio-inspired flying robots.


Major Japan newspaper sues 'free-riding' AI firm Perplexity

The Japan Times

Japan's Yomiuri Shimbun newspaper, one of the world's biggest by circulation, is suing U.S.-based AI firm Perplexity for allegedly "free-riding" on its content in the firm's search engine. The lawsuit filed Thursday is one of a slew by media companies worldwide against AI firms using their material, and is the first by a major Japanese news organization, Yomiuri said. It accuses Perplexity of "free-riding on the results of the activities of news organizations, which have invested a great deal of effort and expense." A spokesman for the paper added that this "could have a negative impact on accurate journalism ... and shake the foundations of democracy." The lawsuit, filed in Tokyo, seeks damages of ¥2.2 billion ($14.7 million), equivalent to 120,000 Yomiuri articles used "without permission" between February and June.


Images of AI – between fiction and function

AIHub

In this blog post, Dominik Vrabič Dežman provides a summary of his recent research article, 'Promising the future, encoding the past: AI hype and public media imagery'. Dominik also draws attention to the algorithms which perpetuate the dominance of familiar and sensationalist visuals, and calls for movements which reshape media systems to make better images of AI more visible in public discourse. The full paper is published in the AI and Ethics journal's special edition on 'The Ethical Implications of AI Hype', a collection edited by We and AI. AI promises innovation, yet its imagery remains trapped in the past. Deep-blue, sci-fi-inflected visuals have flooded public media, saturating our collective imagination with glowing, retro-futuristic interfaces and humanoid robots.


From Models To Experiments: Shallow Recurrent Decoder Networks on the DYNASTY Experimental Facility

Introini, Carolina, Riva, Stefano, Kutz, J. Nathan, Cammi, Antonio

arXiv.org Artificial Intelligence

Shallow Recurrent Decoder networks are a novel paradigm recently introduced for state estimation, combining sparse observations with high-dimensional model data. The architecture offers important advantages over standard data-driven methods: it can reconstruct the entire dynamics of a physical system from as few as three sensors (even randomly placed); it can be trained on compressed data spanned by a reduced basis; it can measure a single, easy-to-measure field variable and reconstruct coupled spatio-temporal fields that are otherwise unobservable; and it requires minimal hyper-parameter tuning. The approach has been verified on test cases across different fields, including nuclear reactors, but an application to a real experimental facility using in-situ observed quantities has been missing. This work fills that gap by applying the Shallow Recurrent Decoder architecture to the DYNASTY facility, built at Politecnico di Milano, which studies the natural circulation established by internally heated fluids for Generation IV applications, especially Circulating Fuel reactors. The RELAP5 code is used to generate the high-fidelity data, and temperature measurements extracted from the facility are used as input for the state estimation. The results validate the Shallow Recurrent Decoder architecture on engineering systems, showing the capability of this approach to provide accurate state estimation.
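For readers unfamiliar with the architecture, a minimal, untrained forward-pass sketch of a Shallow Recurrent Decoder — a recurrent network reads a short window from a few sensors, then a shallow decoder maps its final hidden state to the full field. Dimensions, weights, and data below are illustrative placeholders, not the paper's configuration:

```python
import numpy as np

# Toy SHRED forward pass: LSTM over a sensor time window, then a shallow
# (one-hidden-layer) decoder to the high-dimensional state. Weights are
# random and untrained; sizes are illustrative assumptions.

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_sensors, hidden, n_state, T = 3, 16, 500, 20  # 3 sensors -> 500-dim field

# Single-layer LSTM parameters (gates stacked: input, forget, cell, output).
Wx = rng.standard_normal((4 * hidden, n_sensors)) * 0.1
Wh = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

# Shallow decoder: one ReLU hidden layer, then a linear map to the field.
W1 = rng.standard_normal((64, hidden)) * 0.1
W2 = rng.standard_normal((n_state, 64)) * 0.1

def shred_forward(sensor_window):
    """sensor_window: (T, n_sensors) -> reconstructed field (n_state,)."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x_t in sensor_window:            # recurrent pass over the window
        z = Wx @ x_t + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return W2 @ np.maximum(W1 @ h, 0.0)  # shallow decode of final state

field = shred_forward(rng.standard_normal((T, n_sensors)))
print(field.shape)  # (500,)
```

The point of the sketch is the shape of the mapping: a handful of time-resolved point measurements in, an entire spatial field out, with the recurrence supplying the temporal context the sparse sensors alone lack.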


AI Agent for Education: von Neumann Multi-Agent System Framework

Jiang, Yuan-Hao, Li, Ruijia, Zhou, Yizhou, Qi, Changyong, Hu, Hanglei, Wei, Yuang, Jiang, Bo, Wu, Yonghe

arXiv.org Artificial Intelligence

The development of large language models has ushered in new paradigms for education. This paper centers on multi-Agent systems in education and proposes the von Neumann multi-Agent system framework. It breaks down each AI Agent into four modules: control unit, logic unit, storage unit, and input-output devices, and defines four types of operations: task deconstruction, self-reflection, memory processing, and tool invocation. It further introduces related technologies associated with these four operations, such as Chain-of-Thought, Reason+Act, and Multi-Agent Debate. The paper also discusses the ability-enhancement cycle of a multi-Agent system for education, including an outer circulation in which human learners construct knowledge and an inner circulation in which LLM-based Agents enhance swarm intelligence. Through collaboration and reflection, the multi-Agent system can better facilitate human learners' learning and enhance its teaching abilities in the process.
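The four-module decomposition can be made concrete with a toy sketch. Every class name, the string-based "tool", and the naive task splitting below are hypothetical stand-ins for illustration, not the paper's implementation:

```python
from dataclasses import dataclass, field

# Toy von Neumann-style agent: control unit, logic unit, storage unit, and
# input-output devices, exercising the four operations (task deconstruction,
# self-reflection, memory processing, tool invocation). All names and the
# trivial "tool" are hypothetical.

@dataclass
class StorageUnit:                       # memory processing
    memory: list = field(default_factory=list)
    def remember(self, item): self.memory.append(item)
    def recall(self): return list(self.memory)

class IODevices:                         # tool invocation
    def invoke_tool(self, name, arg):
        tools = {"upper": str.upper}     # stand-in for external tools/APIs
        return tools[name](arg)

class LogicUnit:                         # self-reflection (LLM stand-in)
    def reflect(self, draft):
        return draft if draft.endswith(".") else draft + "."

class ControlUnit:                       # task deconstruction + orchestration
    def __init__(self):
        self.storage, self.io, self.logic = StorageUnit(), IODevices(), LogicUnit()
    def run(self, task):
        subtasks = task.split(" and ")   # naive task deconstruction
        results = []
        for sub in subtasks:
            answer = self.io.invoke_tool("upper", sub)   # act via a tool
            answer = self.logic.reflect(answer)          # self-reflect
            self.storage.remember(answer)                # store the result
            results.append(answer)
        return results

agent = ControlUnit()
print(agent.run("explain fractions and give an example"))
# → ['EXPLAIN FRACTIONS.', 'GIVE AN EXAMPLE.']
```

In this framing, swapping the `LogicUnit` stand-in for an actual LLM call and the string tool for real external services is what turns the skeleton into a working educational agent.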


A Fast AI Surrogate for Coastal Ocean Circulation Models

Xu, Zelin, Ren, Jie, Zhang, Yupu, Ondina, Jose Maria Gonzalez, Olabarrieta, Maitane, Xiao, Tingsong, He, Wenchong, Liu, Zibo, Chen, Shigang, Smith, Kaleb, Jiang, Zhe

arXiv.org Artificial Intelligence

Nearly 900 million people live in low-lying coastal zones around the world and bear the brunt of impacts from more frequent and severe hurricanes and storm surges. Oceanographers simulate ocean current circulation along the coasts to develop early warning systems that save lives and prevent loss and damage to property from coastal hazards. Traditionally, such simulations are conducted using coastal ocean circulation models such as the Regional Ocean Modeling System (ROMS), which usually runs on an HPC cluster with multiple CPU cores. However, the process is time-consuming and energy-expensive. While coarse-grained ROMS simulations offer faster alternatives, they sacrifice detail and accuracy, particularly in complex coastal environments. Recent advances in deep learning and GPU architecture have enabled the development of faster AI (neural network) surrogates. This paper introduces an AI surrogate based on a 4D Swin Transformer to simulate coastal tidal wave propagation in an estuary for both hindcast and forecast (up to 12 days). Our approach not only accelerates simulations but also incorporates a physics-based constraint to detect and correct inaccurate results, ensuring reliability while minimizing manual intervention. We develop a fully GPU-accelerated workflow, optimizing the model training and inference pipeline on NVIDIA DGX-2 A100 GPUs. Our experiments demonstrate that our AI surrogate reduces the time cost of 12-day forecasting of traditional ROMS simulations from 9,908 seconds (on 512 CPU cores) to 22 seconds (on one A100 GPU), achieving over 450$\times$ speedup while maintaining high-quality simulation results. This work contributes to oceanographic modeling by offering a fast, accurate, and physically consistent alternative to traditional simulation models, particularly for real-time forecasting in rapid disaster response.


Chemical Power Variability among Microscopic Robots in Blood Vessels

Hogg, Tad

arXiv.org Artificial Intelligence

Fuel cells using oxygen and glucose could power microscopic robots operating in blood vessels. Swarms of such robots can significantly reduce oxygen concentration, depending on the time between successive transits of the lung, hematocrit variation in vessels and tissue oxygen consumption. These factors differ among circulation paths through the body. This paper evaluates how these variations affect the minimum oxygen concentration due to robot consumption and where it occurs: mainly in moderate-sized veins toward the end of long paths prior to their merging with veins from shorter paths. This shows that tens of billions of robots can obtain hundreds of picowatts throughout the body with minor reduction in total oxygen. However, a trillion robots significantly deplete oxygen in some parts of the body. By storing oxygen or limiting their consumption in long circulation paths, robots can actively mitigate this depletion. The variation in behavior is illustrated in three cases: the portal system which involves passage through two capillary networks, the spleen whose slits significantly slow some of the flow, and large tissue consumption in coronary circulation.


Treasury denies 1p and 2p coins are to be scrapped

BBC News

The Treasury has denied that copper coins are to be phased out after it ordered no new 1p and 2p pieces from the Royal Mint this year. "We are not scrapping 1p or 2p coins," a Treasury spokesperson told the BBC. They added that the lack of orders was due to there being enough coins already in circulation. The comments came after multiple reports suggested that the coins might be scrapped as the number of purchases involving cash continued to fall. "We are confident there are enough coins in the system without the need to order more this year," the Treasury said.