jellyfish


Giant phantom jellyfish spotted deep in Pacific

Popular Science

These rare sea creatures live where the sun don't shine. Like a scene out of a Jules Verne novel, scientists from the Schmidt Ocean Institute recently encountered a giant phantom jelly (Stygiomedusa gigantea). Pilots of a remotely operated vehicle (ROV) filmed the enormous deep-sea jellyfish about 830 feet (253 meters) below the surface of the Pacific Ocean during a descent to explore the Colorado-Rawson submarine canyon wall off the coast of Argentina.


Rare, deep-sea encounter: California scientists observe 'extraordinary' seven-arm octopus

Los Angeles Times

On November 6, 2025, MBARI Senior Scientist Steven Haddock and researchers on MBARI's Biodiversity and Biooptics Team observed a seven-arm octopus (Haliphron atlanticus) during an expedition in Monterey Bay with MBARI's remotely operated vehicle, at a depth of approximately 700 meters. The California scientists captured rare footage of the seven-arm octopus eating a jellyfish.


Portuguese Man O'War species honors 'One-Eyed Dragon' samurai

Popular Science

The newly discovered P. mikazuki is a tribute to the famous warrior Date Masamune. A team of university students in Japan identified an entirely new species of the mighty Portuguese Man O'War. Described in a recently published study, the creature has distinct features and fearsome venom that earned it a name honoring a famous 16th-century samurai warrior. It's easy to mistake the Portuguese Man O'War for a jellyfish.




OceanChat: The Effect of Virtual Conversational AI Agents on Sustainable Attitude and Behavior Change

Pataranutaporn, Pat, Doudkin, Alexander, Maes, Pattie

arXiv.org Artificial Intelligence

Marine ecosystems face unprecedented threats from climate change and plastic pollution, yet traditional environmental education often struggles to translate awareness into sustained behavioral change. This paper presents OceanChat, an interactive system leveraging large language models to create conversational AI agents represented as animated marine creatures -- specifically a beluga whale, a jellyfish, and a seahorse -- designed to promote pro-environmental behavior (PEB) and foster awareness through personalized dialogue. Through a between-subjects experiment (N=900), we compared three conditions: (1) Static Scientific Information, providing conventional environmental education through text and images; (2) Static Character Narrative, featuring first-person storytelling from 3D-rendered marine creatures; and (3) Conversational Character Narrative, enabling real-time dialogue with AI-powered marine characters. Our analysis revealed that the Conversational Character Narrative condition significantly increased behavioral intentions and sustainable choice preferences compared to static approaches. The beluga whale character demonstrated consistently stronger emotional engagement across multiple measures, including perceived anthropomorphism and empathy. However, impacts on deeper measures like climate policy support and psychological distance were limited, highlighting the complexity of shifting entrenched beliefs. Our work extends research on sustainability interfaces facilitating PEB and offers design principles for creating emotionally resonant, context-aware AI characters. By balancing anthropomorphism with species authenticity, OceanChat demonstrates how interactive narratives can bridge the gap between environmental knowledge and real-world behavior change.


Re-Attentional Controllable Video Diffusion Editing

Wang, Yuanzhi, Li, Yong, Liu, Mengyi, Zhang, Xiaoya, Liu, Xin, Cui, Zhen, Chan, Antoni B.

arXiv.org Artificial Intelligence

Editing videos with textual guidance has garnered popularity due to its streamlined process, which requires users only to edit the text prompt corresponding to the source video. Recent studies have explored and exploited large-scale text-to-image diffusion models for text-guided video editing, resulting in remarkable video editing capabilities. However, they may still suffer from limitations such as mislocated objects or an incorrect number of objects. Therefore, the controllability of video editing remains a formidable challenge. In this paper, we aim to address the above limitations by proposing a Re-Attentional Controllable Video Diffusion Editing (ReAtCo) method. Specifically, to align the spatial placement of the target objects with the edited text prompt in a training-free manner, we propose Re-Attentional Diffusion (RAD) to refocus the cross-attention activation responses between the edited text prompt and the target video during the denoising stage, resulting in a spatially location-aligned and semantically high-fidelity manipulated video. In particular, to faithfully preserve the invariant region content with fewer border artifacts, we propose an Invariant Region-guided Joint Sampling (IRJS) strategy to mitigate the intrinsic sampling errors w.r.t. the invariant regions at each denoising timestep and constrain the generated content to be harmonized with the invariant region content. Experimental results verify that ReAtCo consistently improves the controllability of video diffusion editing and achieves superior video editing performance.
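The core re-attentional idea -- steering cross-attention so an edited text token activates inside the intended spatial region -- can be illustrated with a toy sketch. This is not ReAtCo's actual RAD mechanism: the additive logit `boost`, the boolean region mask, and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def refocus_cross_attention(scores, region_mask, boost=2.0):
    """Reweight attention logits so the edited text token's activation
    concentrates inside a target spatial region.

    scores:      (num_pixels, num_text_tokens) raw attention logits
    region_mask: (num_pixels,) boolean, True inside the desired region
    boost:       additive logit bonus/penalty (hypothetical knob)
    """
    adjusted = scores.copy()
    adjusted[region_mask, :] += boost    # strengthen in-region response
    adjusted[~region_mask, :] -= boost   # suppress out-of-region response
    return softmax(adjusted, axis=0)     # renormalize over spatial positions

# Toy example: 4 pixels, 1 edited text token, target region = first two pixels.
scores = np.zeros((4, 1))
mask = np.array([True, True, False, False])
attn = refocus_cross_attention(scores, mask)
in_region = attn[mask].sum()             # most attention mass lands in-region
```

Starting from uniform logits, the boosted map places the bulk of the attention mass inside the masked region while still summing to one over spatial positions.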


xGen-MM-Vid (BLIP-3-Video): You Only Need 32 Tokens to Represent a Video Even in VLMs

Ryoo, Michael S., Zhou, Honglu, Kendre, Shrikant, Qin, Can, Xue, Le, Shu, Manli, Savarese, Silvio, Xu, Ran, Xiong, Caiming, Niebles, Juan Carlos

arXiv.org Artificial Intelligence

We present xGen-MM-Vid (BLIP-3-Video): a multimodal language model for videos, particularly designed to efficiently capture temporal information over multiple frames. BLIP-3-Video takes advantage of a 'temporal encoder' in addition to the conventional visual tokenizer, which maps a sequence of tokens over multiple frames into a compact set of visual tokens. This enables BLIP-3-Video to use far fewer visual tokens than its competing models (e.g., 32 vs. 4608 tokens). We explore different types of temporal encoders, including learnable spatio-temporal pooling as well as sequential models like Token Turing Machines. We experimentally confirm that BLIP-3-Video obtains video question-answering accuracies comparable to much larger state-of-the-art models (e.g., 34B), while being much smaller (i.e., 4B) and more efficient by using fewer visual tokens. The project website is at https://www.salesforceairesearch.com/opensource/xGen-MM-Vid/index.html
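A temporal encoder that collapses many frame tokens into a small fixed set can be sketched as attention pooling with a bank of query vectors. This only illustrates the token-count arithmetic (e.g., 8 frames x 576 patch tokens = 4608 in, 32 out); it is not BLIP-3-Video's actual encoder, and the random queries stand in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_pool(frame_tokens, queries):
    """Map all frame tokens to a fixed set of visual tokens via attention
    pooling (one illustrative choice of 'temporal encoder').

    frame_tokens: (T * N, d) tokens from T frames, N patch tokens each
    queries:      (K, d) query vectors (random here; learned in practice)
    returns:      (K, d) pooled tokens, independent of T and N
    """
    logits = queries @ frame_tokens.T / np.sqrt(queries.shape[1])
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over inputs
    return weights @ frame_tokens

T, N, d, K = 8, 576, 64, 32        # 8 frames x 576 patches = 4608 tokens in
tokens = rng.normal(size=(T * N, d))
queries = rng.normal(size=(K, d))
pooled = temporal_pool(tokens, queries)   # always K tokens out
```

The downstream language model then attends over `K` tokens regardless of how many frames were sampled, which is where the 32-vs-4608 efficiency gap comes from.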


A Jellyfish Cyborg: Exploiting Natural Embodied Intelligence as Soft Robots

Owaki, Dai, Austin, Max, Ikeda, Shuhei, Okuizumi, Kazuya, Nakajima, Kohei

arXiv.org Artificial Intelligence

In the advanced field of bio-inspired robotics, the emergence of cyborgs represents the successful integration of engineering and biological systems. Building on previous research that showed how electrical stimuli could initiate and speed up a jellyfish's movement, this study presents a groundbreaking approach that explores how the natural embodied intelligence of the animal can be harnessed to address pivotal challenges such as spontaneous exploration, navigation in various environments, control of whole-body motion, and real-time predictions of behavior. We have developed a comprehensive data acquisition system and a unique setup for stimulating jellyfish, allowing for a detailed study of their movements. Through careful analysis of both spontaneous behaviors and behaviors induced by targeted stimulation, we have identified subtle differences between natural and induced motion patterns. By using a machine learning method called physical reservoir computing, we have successfully shown that future behaviors can be accurately predicted by directly measuring the jellyfish's body shape when the stimuli align with the animal's natural dynamics. Our findings also reveal significant advancements in motion control and real-time prediction capabilities of jellyfish cyborgs. In summary, this research provides a comprehensive roadmap for optimizing the capabilities of jellyfish cyborgs, with potential implications in marine reconnaissance and sustainable ecological interventions.
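Physical reservoir computing, as invoked here, treats the animal's body as the computational reservoir and trains only a linear readout on measured body-shape features. A minimal numpy sketch, with a synthetic driven nonlinear system standing in for tracked bell-shape features (the dynamics, dimensions, and ridge parameter are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for body-shape measurements: a driven tanh network
# plays the role of the jellyfish's bell dynamics (in the real setting the
# state would come from video tracking of the stimulated animal).
steps, dim = 500, 20
W = rng.normal(scale=0.2, size=(dim, dim))     # fixed recurrent "body" coupling
drive = np.sin(0.1 * np.arange(steps))         # periodic stimulation signal
states = np.zeros((steps, dim))
for t in range(1, steps):
    states[t] = np.tanh(states[t - 1] @ W + drive[t])

# Reservoir-computing readout: ridge regression from the current body state
# to behavior one step ahead (here: the next value of the drive signal).
X, y = states[:-1], drive[1:]
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ y)
pred = X @ w
mse = np.mean((pred - y) ** 2)                 # far below the variance of y
```

The key property mirrored here is that all "training" happens in the linear readout; the reservoir itself (the body) is never modified, which is what makes the approach attractive for a living substrate.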


A Generative Approach to Control Complex Physical Systems

Wei, Long, Hu, Peiyan, Feng, Ruiqi, Feng, Haodong, Du, Yixuan, Zhang, Tao, Wang, Rui, Wang, Yue, Ma, Zhi-Ming, Wu, Tailin

arXiv.org Artificial Intelligence

Controlling the evolution of complex physical systems is a fundamental task across science and engineering. Classical techniques suffer from limited applicability or huge computational costs. On the other hand, recent deep learning and reinforcement learning-based approaches often struggle to optimize long-term control sequences under the constraints of system dynamics. In this work, we introduce Diffusion Physical systems Control (DiffPhyCon), a new class of methods to address the physical systems control problem. DiffPhyCon excels by simultaneously minimizing both the learned generative energy function and the predefined control objectives across the entire trajectory and control sequence. Thus, it can explore globally and identify near-optimal control sequences. Moreover, we enhance DiffPhyCon with prior reweighting, enabling the discovery of control sequences that significantly deviate from the training distribution. We test our method on the 1D Burgers' equation and on 2D jellyfish movement control in a fluid environment. Our method outperforms widely applied classical approaches and state-of-the-art deep learning and reinforcement learning methods. Notably, DiffPhyCon unveils an intriguing fast-close-slow-open pattern in the jellyfish, aligning with established findings in the field of fluid dynamics.
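The idea of optimizing a whole control sequence by jointly descending a generative energy and a control objective can be sketched with annealed, Langevin-style updates on quadratic stand-ins. Neither energy below is DiffPhyCon's learned model: the smoothness prior (keeping sequences "on-distribution") and the terminal-target objective are toy assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad_energy(u, target):
    """Gradient of the combined energy over the whole sequence u:
    a smoothness prior sum((u[i+1]-u[i])**2) standing in for the learned
    generative energy, plus a control objective (u[-1]-target)**2."""
    g = np.zeros_like(u)
    g[1:-1] = 2 * (2 * u[1:-1] - u[:-2] - u[2:])       # interior smoothness
    g[0] += 2 * (u[0] - u[1])
    g[-1] += 2 * (u[-1] - u[-2]) + 2 * (u[-1] - target)  # control objective
    return g

def sample_control(T=8, steps=8000, lr=0.1, target=1.0):
    """Annealed sampling over the entire control sequence: noisy gradient
    steps early (global exploration), deterministic descent late."""
    u = rng.normal(size=T)
    for k in range(steps):
        scale = max(0.0, 1.0 - 4.0 * k / steps)        # anneal noise to zero
        u -= lr * grad_energy(u, target)
        u += scale * 0.1 * rng.normal(size=T)
    return u

u = sample_control()   # converges to a smooth sequence ending at the target
```

Because both terms are minimized over the entire sequence at once, rather than step by step, the sampler cannot be trapped by a greedy short-horizon choice -- the property the abstract attributes to exploring globally.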