Plume


Atmospheric pollution caused by space junk could be a huge problem

New Scientist

After a Falcon 9 rocket stage burned up in the atmosphere, vaporised lithium and other metals drifted over Europe, a type of pollution that is expected to increase as spacecraft and satellites multiply. The upper stage of the Falcon 9, which is designed to splash down in the Pacific Ocean for possible re-use, lost control due to an engine failure and fell from orbit over the north Atlantic in February 2025. People across Europe saw fiery debris streaking through the sky, some of which crashed behind a warehouse in Poland. Seeing the news, Robin Wing at the Leibniz Institute of Atmospheric Physics in Germany and his colleagues turned on their lidar, an instrument for atmospheric sensing.


Using machine learning to track greenhouse gas emissions

AIHub

"We really can't do this research without collaboration." Wąsala collaborates with atmospheric scientists from SRON (Space Research Organisation Netherlands) on machine learning models that detect large greenhouse gas emissions from space. There is too much data to review manually, and such models offer a solution. How much greenhouse gas do humans emit? The machine learning method Wąsala refers to detects emissions in the form of point sources: plumes.
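The actual SRON models are machine-learning pipelines trained on satellite spectra, but the point-source idea they exploit can be illustrated with a toy sketch (the grid, background, and threshold below are all invented for illustration): a plume shows up as a cluster of cells whose concentration is enhanced well above the local background.

```python
# Illustrative only, not SRON's model: flag a point-source "plume" in a
# 2D concentration grid by thresholding the enhancement over background.

def detect_plume(grid, background, threshold=2.0):
    """Return (row, col) cells whose enhancement over `background`
    exceeds `threshold` (units are arbitrary here)."""
    hits = []
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            if value - background > threshold:
                hits.append((r, c))
    return hits

# A synthetic scene with a small plume near the centre:
scene = [
    [1.0, 1.1, 0.9, 1.0],
    [1.0, 4.5, 5.2, 1.1],
    [0.9, 4.8, 1.0, 1.0],
]
print(detect_plume(scene, background=1.0))
```

Real detectors replace the fixed threshold with a learned classifier precisely because backgrounds vary and the data volume is too large to tune by hand.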


PLUME: Procedural Layer Underground Modeling Engine

Garcia, Gabriel Manuel, Richard, Antoine, Olivares-Mendez, Miguel

arXiv.org Artificial Intelligence

As space exploration advances, underground environments are becoming increasingly attractive due to their potential to provide shelter, easier access to resources, and enhanced scientific opportunities. Although such environments exist on Earth, they are often not easily accessible and do not accurately represent the diversity of underground environments found throughout the solar system. This paper presents PLUME, a procedural generation framework aimed at easily creating 3D underground environments. Its flexible structure allows for the continuous enhancement of various underground features, aligning with our expanding understanding of the solar system. The environments generated using PLUME can be used for AI training, evaluating robotics algorithms, 3D rendering, and facilitating rapid iteration on developed exploration algorithms. The paper demonstrates that PLUME can be used alongside a robotic simulator. PLUME is open source and has been released on GitHub. For planetary exploration, shelter will be essential to keep robots and humans protected from extreme temperatures, solar radiation, and micrometeorites. Existing subsurface structures such as caves or lava tubes are considered of high interest for creating shelter.
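PLUME's own generation layers are richer than anything shown here, but the flavour of procedural underground modelling can be sketched with a classic cellular-automata pass (a standard technique, not taken from the paper) that smooths random noise into cave-like pockets:

```python
import random

# Hedged sketch: cellular-automata cave generation, a common procedural
# technique; PLUME's actual algorithms are layered and more sophisticated.

def generate_cave(width, height, fill=0.45, steps=4, seed=42):
    rng = random.Random(seed)
    # 1 = rock, 0 = open space; start from uniform random noise.
    grid = [[1 if rng.random() < fill else 0 for _ in range(width)]
            for _ in range(height)]
    for _ in range(steps):
        nxt = [[0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # Count rock in the 3x3 neighbourhood; out-of-bounds
                # counts as rock so caves stay sealed at the border.
                rock = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < height and 0 <= nx < width:
                            rock += grid[ny][nx]
                        else:
                            rock += 1
                nxt[y][x] = 1 if rock >= 5 else 0
        grid = nxt
    return grid

cave = generate_cave(40, 20)
print("\n".join("".join("#" if c else "." for c in row) for row in cave))
```

Each smoothing step makes cells conform to their neighbourhood majority, which is what turns speckle noise into connected caverns; a 3D engine applies the same idea with voxels and extra feature passes.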


Real-Time Instrument Planning and Perception for Novel Measurements of Dynamic Phenomena

Zilberstein, Itai, Candela, Alberto, Chien, Steve

arXiv.org Artificial Intelligence

Advancements in onboard computing mean remote sensing agents can employ state-of-the-art computer vision and machine learning at the edge. These capabilities can be leveraged to unlock new rare, transient, and pinpoint measurements of dynamic science phenomena. In this paper, we present an automated workflow that synthesizes the detection of these dynamic events in look-ahead satellite imagery with autonomous trajectory planning for a follow-up high-resolution sensor to obtain pinpoint measurements. We apply this workflow to the use case of observing volcanic plumes. We analyze classification approaches including traditional machine learning algorithms and convolutional neural networks. We present several trajectory planning algorithms that track the morphological features of a plume and integrate these algorithms with the classifiers. We show through simulation an order of magnitude increase in the utility return of the high-resolution instrument compared to baselines while maintaining efficient runtimes.
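The two stages of the workflow, detection in look-ahead imagery followed by trajectory planning for the follow-up sensor, can be caricatured in a few lines (the mean-intensity "classifier" and greedy planner below are illustrative stand-ins, not the paper's CNN or planning algorithms):

```python
# Hedged sketch of a detect-then-plan workflow; names and thresholds
# are invented for illustration.

def classify_tiles(image, threshold=0.5):
    """Flag tiles whose score exceeds `threshold` (a stand-in for a
    trained plume classifier run on look-ahead imagery)."""
    flagged = []
    for r, row in enumerate(image):
        for c, score in enumerate(row):
            if score > threshold:
                flagged.append((r, c))
    return flagged

def plan_pointings(start, targets):
    """Greedy nearest-neighbour ordering of high-resolution pointings,
    minimising slew distance between successive targets."""
    path, here, remaining = [], start, list(targets)
    while remaining:
        nxt = min(remaining,
                  key=lambda t: abs(t[0] - here[0]) + abs(t[1] - here[1]))
        remaining.remove(nxt)
        path.append(nxt)
        here = nxt
    return path

lookahead = [[0.1, 0.2, 0.8],
             [0.1, 0.9, 0.7],
             [0.0, 0.1, 0.2]]
targets = classify_tiles(lookahead)
print(plan_pointings((0, 0), targets))
```

The utility gain the paper reports comes from the same coupling: the planner only spends the expensive instrument's time on tiles the classifier believes contain the plume.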


Olfactory Inertial Odometry: Sensor Calibration and Drift Compensation

France, Kordel K., Daescu, Ovidiu, Paul, Anirban, Prasad, Shalini

arXiv.org Artificial Intelligence

Visual inertial odometry (VIO) is a process for fusing visual and kinematic data to understand a machine's state in a navigation task. Olfactory inertial odometry (OIO) is an analog to VIO that fuses signals from gas sensors with inertial data to help a robot navigate by scent. Gas dynamics and environmental factors introduce disturbances into olfactory navigation tasks that can make OIO difficult to perform. In this work, we define a process for calibrating a robot for OIO that generalizes to several olfaction sensor types. Our focus is specifically on calibrating OIO for centimeter-level accuracy in localizing an odor source on a slow-moving robot platform, to demonstrate use cases in robotic surgery and touchless security screening. We demonstrate our process for OIO calibration on a real robotic arm and show how this calibration improves performance over a cold-start olfactory navigation task.
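The paper's calibration procedure is more involved, but one common building block for gas-sensor drift compensation can be sketched as follows (the scheme and all numbers are illustrative, not the authors' method): track a slowly adapting baseline with an exponential moving average and report only the enhancement above it, so slow drift cancels while fast odour transients survive.

```python
# Hedged sketch of exponential-baseline drift compensation for a gas
# sensor; a generic technique, not the paper's calibration procedure.

class DriftCompensator:
    def __init__(self, alpha=0.01):
        self.alpha = alpha      # small alpha -> baseline adapts slowly
        self.baseline = None

    def update(self, raw):
        if self.baseline is None:
            self.baseline = raw
        else:
            self.baseline += self.alpha * (raw - self.baseline)
        return raw - self.baseline  # drift-corrected reading

comp = DriftCompensator(alpha=0.05)
# Synthetic signal: slow upward drift plus a brief odour pulse at t = 50..54.
readings = [100 + 0.1 * t + (40 if 50 <= t < 55 else 0) for t in range(100)]
corrected = [comp.update(r) for r in readings]
print(max(corrected[:50]), max(corrected[50:60]))
```

The corrected trace stays near zero through the drift but jumps sharply during the pulse, which is exactly the separation an OIO pipeline needs before fusing olfactory readings with inertial data.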


Olfactory Inertial Odometry: Methodology for Effective Robot Navigation by Scent

France, Kordel K., Daescu, Ovidiu

arXiv.org Artificial Intelligence

Olfactory navigation is one of the most primitive mechanisms of exploration used by organisms. Navigation by machine olfaction (artificial smell) is a very difficult task to both simulate and solve. With this work, we define olfactory inertial odometry (OIO), a framework that fuses inertial kinematics with fast-sampling olfaction sensors to enable navigation by scent, analogous to visual inertial odometry (VIO). We establish how principles from SLAM and VIO can be extrapolated to olfaction to enable real-world robotic tasks. We demonstrate OIO with three different odour-localization algorithms on a real 5-DoF robot arm in an odour-tracking scenario that resembles real applications in agriculture and food quality control. Our results indicate success in establishing a baseline framework for OIO from which other research in olfactory navigation can build, and we note performance enhancements that can be made to address more complex tasks in the future. From the first life forms to complex mammals, the ability to navigate using scent has been a cornerstone of survival. Animals like ants, hounds, and rodents demonstrate remarkable proficiency in following odour plumes and pheromone trails to locate food, mates, or shelter. These feats are achieved through a sophisticated interplay between acute scent receptors and motion. However, the physical behavior of odour plumes, constantly shifting with wind, influenced by temperature and humidity, and weakening over time, presents a formidable challenge. When the odour source is out of sight, organisms rely entirely on olfactory cues, transforming the task into a complex control problem that demands robust uncertainty management.
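One of the simplest odour-localization strategies, and a plausible baseline for the three algorithms the paper compares (which it does not name here), is gradient ascent on concentration. A toy version on a synthetic, stationary Gaussian plume (source position, step size, and field are all invented for illustration):

```python
import math

# Illustrative gradient-following localization on an idealized plume;
# real plumes are turbulent and intermittent, which is why OIO fuses
# inertial data rather than trusting concentration alone.

def concentration(x, y, src=(3.0, 2.0)):
    """Synthetic steady plume: Gaussian fall-off from the source."""
    return math.exp(-((x - src[0]) ** 2 + (y - src[1]) ** 2))

def localize(start, step=0.2, eps=0.05, iters=200):
    x, y = start
    for _ in range(iters):
        # Finite-difference estimate of the local concentration gradient.
        gx = (concentration(x + eps, y) - concentration(x - eps, y)) / (2 * eps)
        gy = (concentration(x, y + eps) - concentration(x, y - eps)) / (2 * eps)
        norm = math.hypot(gx, gy)
        if norm < 1e-6:          # flat gradient: stop searching
            break
        x, y = x + step * gx / norm, y + step * gy / norm
    return x, y

print(localize((0.0, 0.0)))  # approaches the source at (3.0, 2.0)
```

In a real OIO setting the gradient estimate comes from spatially separated samples taken as the arm moves, with the inertial stream providing the positions at which each sample was drawn.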


Aligning LLMs by Predicting Preferences from User Writing Samples

Aroca-Ouellette, Stéphane, Mackraz, Natalie, Theobald, Barry-John, Metcalf, Katherine

arXiv.org Artificial Intelligence

Accommodating human preferences is essential for creating aligned LLM agents that deliver personalized and effective interactions. Recent work has shown the potential for LLMs acting as writing agents to infer a description of user preferences. Agent alignment then comes from conditioning on the inferred preference description. However, existing methods often produce generic preference descriptions that fail to capture the unique and individualized nature of human preferences. This paper introduces PROSE, a method designed to enhance the precision of preference descriptions inferred from user writing samples. PROSE incorporates two key elements: (1) iterative refinement of inferred preferences, and (2) verification of inferred preferences across multiple user writing samples. We evaluate PROSE with several LLMs (i.e., Qwen2.5 7B and 72B Instruct, GPT-mini, and GPT-4o) on a summarization and an email writing task. We find that PROSE more accurately infers nuanced human preferences, improving the quality of the writing agent's generations over CIPHER (a state-of-the-art method for inferring preferences) by 33%. Lastly, we demonstrate that in-context learning (ICL) and PROSE are complementary methods, and combining them provides up to a 9% improvement over ICL alone.
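The shape of PROSE's two elements, iterative refinement plus cross-sample verification, can be sketched with the LLM replaced by a trivial rule-based stub (`infer`, `verify`, and `refine` are hypothetical stand-ins invented here, not the paper's prompts or API):

```python
# Toy sketch of a refine-then-verify preference loop; the real method
# prompts an LLM at each step rather than using these hand-written rules.

def infer(sample):
    """Stub 'LLM': propose preference facets visible in one sample."""
    facets = set()
    if len(sample.split()) < 8:
        facets.add("prefers concise replies")
    if sample.islower():
        facets.add("prefers informal lowercase style")
    return facets

def verify(facet, samples):
    """Keep a facet only if every writing sample is consistent with it."""
    return all(facet in infer(s) for s in samples)

def refine(samples, rounds=3):
    prefs = set()
    for _ in range(rounds):                               # iterative refinement
        for s in samples:
            prefs |= infer(s)
        prefs = {f for f in prefs if verify(f, samples)}  # verification step
    return prefs

samples = ["thanks, looks good", "ship it monday", "sounds fine to me"]
print(refine(samples))
```

The verification pass is what pushes the description away from generic statements: a facet suggested by one sample survives only if the user's other writing supports it too.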


3D Characterization of Smoke Plume Dispersion Using Multi-View Drone Swarm

Krishnakumar, Nikil, Sharma, Shashank, Pal, Srijan Kumar, Hong, Jiarong

arXiv.org Artificial Intelligence

This study presents an advanced multi-view drone swarm imaging system for the three-dimensional characterization of smoke plume dispersion dynamics. The system comprises a manager drone and four worker drones, each equipped with high-resolution cameras and precise GPS modules. The manager drone uses image feedback to autonomously detect and position itself above the plume, then commands the worker drones to orbit the area in a synchronized circular flight pattern, capturing multi-angle images. The camera poses of these images are first estimated, then the images are grouped in batches and processed using Neural Radiance Fields (NeRF) to generate high-resolution 3D reconstructions of plume dynamics over time. Field tests demonstrated the ability of the system to capture critical plume characteristics including volume dynamics, wind-driven directional shifts, and lofting behavior at a temporal resolution of about 1 s. The 3D reconstructions generated by this system provide unique field data for enhancing the predictive models of smoke plume dispersion and fire spread. Broadly, the drone swarm system offers a versatile platform for high-resolution measurements of pollutant emissions and transport in wildfires, volcanic eruptions, prescribed burns, and industrial processes, ultimately supporting more effective fire control decisions and mitigating wildfire risks.
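Pose estimation and NeRF training are heavyweight steps best left to dedicated libraries, but the temporal batching the pipeline relies on is simple to illustrate (window length and timestamps below are invented): images from all drones are grouped into roughly 1 s windows so each batch captures one quasi-static snapshot of the plume for a separate reconstruction.

```python
# Illustrative batching step only; the paper's full pipeline estimates
# camera poses and trains a NeRF per batch.

def batch_by_time(timestamps, window=1.0):
    """Group capture times (seconds) into consecutive windows of
    `window` seconds, starting a new batch when the window is exceeded."""
    batches, current, start = [], [], None
    for t in sorted(timestamps):
        if start is None or t - start >= window:
            if current:
                batches.append(current)
            current, start = [], t
        current.append(t)
    if current:
        batches.append(current)
    return batches

times = [0.02, 0.10, 0.31, 1.05, 1.40, 2.20, 2.31]
print(batch_by_time(times))
```

Keeping each batch short relative to the plume's evolution is what lets a static-scene method like NeRF approximate a moving plume as a sequence of frozen 3D snapshots.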


Autonomous Drone for Dynamic Smoke Plume Tracking

Pal, Srijan Kumar, Sharma, Shashank, Krishnakumar, Nikil, Hong, Jiarong

arXiv.org Artificial Intelligence

This paper presents a novel autonomous drone-based smoke plume tracking system capable of navigating and tracking plumes in highly unsteady atmospheric conditions. The system integrates advanced hardware and software and a comprehensive simulation environment to ensure robust performance in controlled and real-world settings. The quadrotor, equipped with a high-resolution imaging system and an advanced onboard computing unit, performs precise maneuvers while accurately detecting and tracking dynamic smoke plumes under fluctuating conditions. Our software implements a two-phase flight operation, i.e., descending into the smoke plume upon detection and continuously monitoring the smoke movement during in-plume tracking. Leveraging Proportional-Integral-Derivative (PID) control and a Proximal Policy Optimization-based Deep Reinforcement Learning (DRL) controller enables adaptation to plume dynamics. Unreal Engine simulation evaluates performance under various smoke-wind scenarios, from steady flow to complex, unsteady fluctuations, showing that while the PID controller performs adequately in simpler scenarios, the DRL-based controller excels in more challenging environments. Field tests corroborate these findings. This system opens new possibilities for drone-based monitoring in areas like wildfire management and air quality assessment. The successful integration of DRL for real-time decision-making advances autonomous drone control for dynamic environments.
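The PID half of the control stack is standard enough to sketch (the gains, plant model, and error signal below are illustrative, not the paper's tuned values): the controller drives an error, such as the plume centroid's offset in the camera image, toward zero.

```python
# Minimal PID sketch with a toy first-order plant; the paper pairs a
# controller like this with a PPO-trained DRL policy for harder cases.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.8, ki=0.1, kd=0.05)
offset, dt = 5.0, 0.1         # initial plume-centroid offset, timestep (s)
for _ in range(200):
    offset -= pid.step(offset, dt) * dt   # command reduces the offset
print(round(offset, 3))
```

A fixed-gain loop like this is exactly what degrades under unsteady wind, which motivates the paper's switch to a learned PPO controller in the more turbulent scenarios.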


Removing Geometric Bias in One-Class Anomaly Detection with Adaptive Feature Perturbation

Hermary, Romain, Gaudillière, Vincent, Shabayek, Abd El Rahman, Aouada, Djamila

arXiv.org Artificial Intelligence

One-class anomaly detection aims to detect objects that do not belong to a predefined normal class. In practice, training data lack those anomalous samples; hence, state-of-the-art methods are trained to discriminate between normal and synthetically generated pseudo-anomalous data. Most methods use data augmentation techniques on normal images to simulate anomalies. However, the best-performing ones implicitly leverage a geometric bias present in the benchmarking datasets, which limits their usability in more general conditions. Others rely on basic noising schemes that may be suboptimal in capturing the underlying structure of normal data. In addition, most still favour the image domain to generate pseudo-anomalies, training models end-to-end from only the normal class and overlooking richer representations of the information. To overcome these limitations, we consider frozen yet rich feature spaces given by pretrained models and create pseudo-anomalous features with a novel adaptive linear feature perturbation technique. It adapts the noise distribution to each sample, applies decaying linear perturbations to feature vectors, and further guides the classification process using a contrastive learning objective. Experimental evaluation conducted on both standard and geometric-bias-free datasets demonstrates the superiority of our approach with respect to comparable baselines. The codebase is accessible via our public repository.
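The decaying-perturbation idea can be sketched in isolation (this omits the per-sample noise adaptation and the contrastive objective, and all parameters are invented): pseudo-anomalous features start far from the normal manifold and move closer as training progresses, so the classifier's task gets gradually harder.

```python
import random

# Hedged sketch of decaying feature perturbation; the paper's method
# additionally adapts the noise distribution to each sample.

def pseudo_anomaly(feature, step, total_steps, scale=1.0, rng=random):
    """Add zero-mean Gaussian noise whose magnitude decays linearly
    over training, yielding a pseudo-anomalous feature vector."""
    decay = 1.0 - step / total_steps          # 1.0 at start, ~0.0 at the end
    return [f + rng.gauss(0.0, scale * decay) for f in feature]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

rng = random.Random(0)
feat = [0.5] * 8                              # stand-in for a frozen feature
early = pseudo_anomaly(feat, step=0, total_steps=100, rng=rng)
late = pseudo_anomaly(feat, step=99, total_steps=100, rng=rng)
print(distance(feat, early), distance(feat, late))
```

Working in a frozen pretrained feature space rather than pixel space is what sidesteps the geometric bias: the perturbation acts on semantic representations instead of image transforms like rotations.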