A Pseudocode for Unadversarial Example Generation. Algorithm 1: Unadversarial patch generation. Input: pre-trained classifier with parameters w, loss function.
B.1 Overview of AirSim We conduct our simulation experiments using the high-fidelity simulator Microsoft AirSim. AirSim acts as a plugin to Unreal Engine, an AAA video game engine providing access to high-fidelity graphics features such as high-resolution textures, realistic lighting, and soft shadows, making it a good choice for rendering in computer vision applications. AirSim internally provides physics models for a quadrotor vehicle, which we leverage for performing autonomous drone landing. As a plugin, AirSim can be paired with any Unreal Engine environment to simulate autonomous vehicles that can be programmed through an API, both for planning/control and for obtaining camera images. AirSim also allows control of environmental features such as time of day, dynamically adding/removing objects, changing object textures, and so on.
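The control, imaging, and environment features described above are exposed through AirSim's Python client. The sketch below uses names from the published AirSim Python API; the guarded function requires a running AirSim/Unreal instance, and the waypoint and time-of-day values are illustrative assumptions:

```python
def scene_requests(camera_names):
    """Describe which cameras to pull RGB ("Scene") images from.

    Kept as plain dicts so the mapping is testable without a simulator.
    """
    return [{"camera": name, "image_type": "Scene"} for name in camera_names]


def fly_and_capture():
    """Exercise the planning/control, imaging, and environment APIs.

    Only call this with the `airsim` package installed and the simulator
    running; nothing here executes at import time.
    """
    import airsim  # third-party client shipped with AirSim

    client = airsim.MultirotorClient()
    client.confirmConnection()
    client.enableApiControl(True)
    client.armDisarm(True)

    # Environmental control: set the simulated time of day.
    client.simSetTimeOfDay(True, start_datetime="2024-06-01 17:30:00")

    # Planning/control: take off, then fly to a waypoint at 2 m/s.
    client.takeoffAsync().join()
    client.moveToPositionAsync(10, 0, -5, 2).join()

    # Perception: request an RGB image from the front camera ("0").
    responses = client.simGetImages(
        [airsim.ImageRequest(r["camera"], airsim.ImageType.Scene)
         for r in scene_requests(["0"])]
    )
    return responses
```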
A Step-by-Step Guide to Creating a Robust Autonomous Drone Testing Pipeline
Jiang, Yupeng, Deng, Yao, Schroder, Sebastian, Liang, Linfeng, Gambhir, Suhaas, James, Alice, Seth, Avishkar, Pirrie, James, Zhang, Yihao, Zheng, Xi
Autonomous drones are rapidly reshaping industries ranging from aerial delivery and infrastructure inspection to environmental monitoring and disaster response. Ensuring the safety, reliability, and efficiency of these systems is paramount as they transition from research prototypes to mission-critical platforms. This paper presents a step-by-step guide to establishing a robust autonomous drone testing pipeline, covering each critical stage: Software-in-the-Loop (SIL) Simulation Testing, Hardware-in-the-Loop (HIL) Testing, Controlled Real-World Testing, and In-Field Testing. Using practical examples, including a marker-based autonomous landing system, we demonstrate how to systematically verify drone system behaviors, identify integration issues, and optimize performance. Furthermore, we highlight emerging trends shaping the future of drone testing, including the integration of neurosymbolic methods and Large Language Models (LLMs), the creation of co-simulation environments, and Digital Twin-enabled simulation-based testing techniques. By following this pipeline, developers and researchers can achieve comprehensive validation, minimize deployment risks, and prepare autonomous drones for safe and reliable real-world operations.
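As a concrete illustration of the kind of behavior such a pipeline exercises, the core of a marker-based landing loop can be sketched as a proportional controller from the marker's pixel error to a velocity command. This is a minimal sketch; the gain, deadband, and function names are illustrative assumptions, not values from the paper:

```python
def landing_velocity_command(marker_px, image_size, kp=0.004,
                             descend_rate=0.5, deadband_px=10):
    """Map the landing marker's pixel position to a velocity command.

    Returns (vx, vy, vz): lateral velocities proportional to the marker's
    pixel offset from the image centre, plus a constant descent rate once
    the marker is roughly centred (within `deadband_px` on both axes).
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex, ey = marker_px[0] - cx, marker_px[1] - cy  # pixel error from centre
    vx, vy = kp * ex, kp * ey
    # Descend only when the marker is close to the image centre.
    centred = abs(ex) < deadband_px and abs(ey) < deadband_px
    vz = descend_rate if centred else 0.0
    return vx, vy, vz
```

In SIL testing, a loop like this would be fed detections from the simulated camera feed and its commands sent to the simulated flight controller, so the control logic can be validated long before HIL or field trials.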
Multiple Distribution Shift -- Aerial (MDS-A): A Dataset for Test-Time Error Detection and Model Adaptation
Ngu, Noel, Taparia, Aditya, Simari, Gerardo I., Leiva, Mario, Corcoran, Jack, Senanayake, Ransalu, Shakarian, Paulo, Bastian, Nathaniel D.
Machine learning models assume that training and test samples are drawn from the same distribution. As such, significant differences between training and test distributions often lead to degradations in performance. We introduce Multiple Distribution Shift -- Aerial (MDS-A) -- a collection of inter-related datasets of the same aerial domain that are perturbed in different ways to better characterize the effects of out-of-distribution performance. Specifically, MDS-A is a set of simulated aerial datasets collected under different weather conditions. We include six datasets under different simulated weather conditions along with six baseline object-detection models, as well as several test datasets that are a mix of weather conditions that we show have significant differences from the training data. In this paper, we present characterizations of MDS-A, provide performance results for the baseline machine learning models (on both their specific training datasets and the test data), as well as results of the baselines after employing recent knowledge-engineering error-detection techniques (EDR) thought to improve out-of-distribution performance. The dataset is available at https://lab-v2.github.io/mdsa-dataset-website.
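One minimal way to surface the out-of-distribution degradation the paper measures is to compare each baseline's score on its own training condition against its score on shifted test data. This is a sketch only; the condition names, metric values, and 0.10 threshold are illustrative assumptions, not numbers from MDS-A:

```python
def per_condition_drop(train_scores, test_scores, threshold=0.10):
    """Flag conditions where test performance falls more than `threshold`
    below the model's score on its own training condition.

    `train_scores` and `test_scores` map condition name -> a scalar metric
    such as mAP; conditions missing from `train_scores` are skipped.
    """
    flagged = {}
    for cond, test_score in test_scores.items():
        train_score = train_scores.get(cond)
        if train_score is not None and train_score - test_score > threshold:
            flagged[cond] = round(train_score - test_score, 3)
    return flagged
```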
DroneWiS: Automated Simulation Testing of small Unmanned Aerial Systems in Realistic Windy Conditions
The continuous evolution of small Unmanned Aerial Systems (sUAS) demands advanced testing methodologies to ensure their safe and reliable operation in the real world. To push the boundaries of sUAS simulation testing in realistic environments, we previously developed the DroneReqValidator (DRV) platform, allowing developers to automatically conduct simulation testing in a digital twin of Earth. In this paper, we present DRV 2.0, which introduces a novel component called DroneWiS (Drone Wind Simulation). DroneWiS allows sUAS developers to automatically simulate realistic windy conditions and test the resilience of sUAS against wind. Unlike current state-of-the-art simulation tools such as Gazebo and AirSim, which only simulate basic wind conditions, DroneWiS leverages Computational Fluid Dynamics (CFD) to compute the unique wind flows caused by the interaction of wind with objects in the environment, such as buildings and uneven terrain. This simulation capability gives developers deeper insight into the navigation capability of sUAS in challenging and realistic windy conditions. DroneWiS equips sUAS developers with a powerful tool to test, debug, and improve the reliability and safety of sUAS in the real world. A working demonstration is available at https://youtu.be/khBHEBST8Wc
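Conceptually, consuming a CFD-derived wind field in a flight simulation loop reduces to looking up a precomputed wind vector at the vehicle's position and applying it as a disturbance. The sketch below assumes a simple grid layout and a nearest-cell lookup; DroneWiS's actual interface and physics are not shown here:

```python
def wind_at(wind_grid, cell_size, position):
    """Look up the precomputed wind vector at a world position.

    `wind_grid` maps integer grid cells (i, j, k) to wind vectors
    (wx, wy, wz), standing in for a field exported from a CFD solve.
    Nearest-cell lookup keeps the sketch short (a real integration
    would interpolate); missing cells fall back to calm air.
    """
    cell = tuple(int(p // cell_size) for p in position)
    return wind_grid.get(cell, (0.0, 0.0, 0.0))


def apply_wind(velocity, wind, drag_coeff=0.3, dt=0.05):
    """One explicit-Euler step of a relative-airspeed drag disturbance."""
    return tuple(v + drag_coeff * (w - v) * dt
                 for v, w in zip(velocity, wind))
```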
Quantifying the Sim2real Gap for GPS and IMU Sensors
Mahajan, Ishaan, Unjhawala, Huzaifa, Zhang, Harry, Zhou, Zhenhao, Young, Aaron, Ruiz, Alexis, Caldararu, Stefan, Batagoda, Nevindu, Ashokkumar, Sriram, Negrut, Dan
Simulation can and should play a critical role in the development and testing of algorithms for autonomous agents. What might reduce its impact is the "sim2real" gap -- the algorithm response differs between operation in simulated versus real-world environments. This paper introduces an approach to evaluate this gap, focusing on the accuracy of sensor simulation -- specifically IMU and GPS -- in velocity estimation tasks for autonomous agents. Using a scaled autonomous vehicle, we conduct 40 real-world experiments across diverse environments, then replicate the experiments in simulation with five distinct sensor noise models. We note that direct comparison of raw simulated and real sensor data fails to quantify the sim2real gap for robotics applications. We demonstrate that by using a state-of-the-art state-estimation package as a "judge", and by evaluating the performance of this state estimator in both real and simulated scenarios, we can isolate the sim2real discrepancies stemming from sensor simulation alone. The dataset generated is open-source and publicly available for unfettered use.
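The "judge" idea can be distilled as follows: score the same state estimator against ground truth on real and on simulated sensor streams, and take the difference in error as the sim2real gap. This is a minimal sketch of the metric only, not the paper's pipeline:

```python
def rmse(estimates, truth):
    """Root-mean-square error between estimated and ground-truth values."""
    assert len(estimates) == len(truth) and estimates
    return (sum((e - t) ** 2
                for e, t in zip(estimates, truth)) / len(estimates)) ** 0.5


def sim2real_gap(real_est, real_truth, sim_est, sim_truth):
    """Judge-based gap: difference in estimator error between domains.

    A gap near zero means the simulated sensors stress the state
    estimator the same way the real sensors do.
    """
    return abs(rmse(sim_est, sim_truth) - rmse(real_est, real_truth))
```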
A Digital Smart City for Emerging Mobility Systems
Zayas, Raymond M., Beaver, Logan E., Chalaki, Behdad, Bang, Heeseung, Malikopoulos, Andreas A.
The increasing demand for emerging mobility systems with connected and automated vehicles has imposed the necessity for quality testing environments to support their development. In this paper, we introduce a Unity-based virtual simulation environment for emerging mobility systems, called the Information and Decision Science Lab's Scaled Smart Digital City (IDS 3D City), intended to operate alongside its physical peer and its established control framework. By utilizing the Robot Operating System, AirSim, and Unity, we constructed a simulation environment capable of iteratively designing experiments significantly faster than is possible in a physical testbed. This environment provides an intermediate step to validate the effectiveness of our control algorithms prior to their implementation in the physical testbed. The IDS 3D City also enables us to demonstrate that our control algorithms work independently of the underlying vehicle dynamics, as the vehicle dynamics introduced by AirSim operate at a different scale than our scaled smart city. Finally, we demonstrate the behavior of our digital environment by performing an experiment in both the virtual and physical environments and comparing their outputs.
Parallel Reinforcement Learning Simulation for Visual Quadrotor Navigation
Saunders, Jack, Saeedi, Sajad, Li, Wenbin
Reinforcement learning (RL) is an agent-based approach for teaching robots to navigate within the physical world. Gathering data for RL is known to be a laborious task, and real-world experiments can be risky. Simulators facilitate the collection of training data in a quicker and more cost-effective manner. However, RL frequently requires a significant number of simulation steps for an agent to become skilful at even simple tasks. This is a prevalent issue within the field of RL-based visual quadrotor navigation, where state dimensions are typically very large and dynamic models are complex. Furthermore, rendering images and obtaining physical properties of the agent can be computationally expensive. To solve this, we present a simulation framework, built on AirSim, which provides efficient parallel training. Building on this framework, Ape-X is modified to incorporate decentralised training across AirSim environments, making use of numerous networked computers. Through experiments we achieve a reduction in training time from 3.9 hours to 11 minutes using this framework, a total of 74 agents, and two networked computers. Further details, including a GitHub repo and videos about our project, PRL4AirSim, can be found at https://sites.google.com/view/prl4airsim/home
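The parallel-training idea reduces to stepping many environments as a batch and immediately resetting finished ones so no agent idles. The sketch below is a minimal single-process stand-in; PRL4AirSim's actual batched, networked implementation across AirSim instances is more involved:

```python
def step_batched(envs, actions):
    """Advance a batch of independent environments by one step each.

    Each env is any object with step(action) -> (obs, reward, done) and
    reset() -> obs. Done environments are reset in place so the batch
    keeps producing experience every iteration.
    """
    observations, rewards, dones = [], [], []
    for env, action in zip(envs, actions):
        obs, reward, done = env.step(action)
        if done:
            obs = env.reset()  # restart immediately; no idle slots
        observations.append(obs)
        rewards.append(reward)
        dones.append(done)
    return observations, rewards, dones
```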
Microsoft helps speed up work on AI for autonomous drones and flying taxis
If autonomous drones and flying taxis are going to thrive, they'll need AI that can handle a wide range of conditions -- and Microsoft thinks it can help build that AI. The company has unveiled a Project AirSim platform that helps manufacturers create, train and test the algorithms guiding autonomous aircraft. The Azure-based technology has virtual vehicles fly millions of flights through detailed simulations in a matter of seconds, gauging their ability to handle different obstacles and weather conditions. A drone maker can quickly find out if their machine will avoid birds, or use too much battery power countering strong winds. Developers can use trained AI "building blocks" to get started, so they won't need vast amounts of technical know-how.
Microsoft researchers train AI in simulation to control a real-world drone
In a preprint paper, Microsoft researchers describe a machine learning system that reasons out the correct actions to take directly from camera images. It's trained via simulation and learns to independently navigate environments and conditions in the real world, including unseen situations, which makes it a fit for robots deployed in search and rescue missions. Someday, it could help those robots more quickly identify people in need of help. "We wanted to push current technology to get closer to a human's ability to interpret environmental cues, adapt to difficult conditions and operate autonomously," wrote the researchers in a blog post published this week. "We were interested in exploring the question of what it would take to build autonomous systems that achieve similar performance levels."