 Serban, Radu


ChronoLLM: A Framework for Customizing Large Language Model for Digital Twins generalization based on PyChrono

arXiv.org Artificial Intelligence

Project Chrono [1] is an open-source, physics-based simulation framework that supports the modeling, simulation, and analysis of complex systems. It is designed for high-performance, high-fidelity simulations and is widely used in research and industry. PyChrono [2] is the Python wrapper for Project Chrono, providing a user-friendly interface to its core functionality. It allows users to leverage the power of Project Chrono from Python, making it accessible to a broader range of users who prefer scripting in Python over C++. Project Chrono encompasses a wide range of features, and PyChrono inherits a subset of these capabilities: 1. Chrono::Engine: provides core functionality for multibody dynamics and nonlinear finite element analysis, with robust treatment of friction and contact using both the penalty method and the Lagrange-multiplier method.
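
To make the scripting workflow concrete, here is a minimal, self-contained sketch of a PyChrono simulation: a sphere in free fall under gravity, stepped by the Chrono::Engine dynamics. It is illustrative only; the API names (ChSystemNSC, ChVectorD, Set_G_acc) follow the pre-9.0 PyChrono releases and differ slightly in newer versions.

```python
# Minimal PyChrono sketch: a sphere in free fall (no contact).
# Names follow pre-9.0 PyChrono; newer releases rename e.g.
# ChVectorD -> ChVector3d and Set_G_acc -> SetGravitationalAcceleration.
import pychrono as chrono

system = chrono.ChSystemNSC()                    # complementarity-based contact model
system.Set_G_acc(chrono.ChVectorD(0, -9.81, 0))  # gravity along -y [m/s^2]

# 0.5 m radius sphere, density 1000 kg/m^3 (mass and inertia computed automatically)
ball = chrono.ChBodyEasySphere(0.5, 1000)
ball.SetPos(chrono.ChVectorD(0, 2, 0))
system.Add(ball)

# advance the dynamics with a fixed 1 ms step for one simulated second
for _ in range(1000):
    system.DoStepDynamics(1e-3)

print("ball height after 1 s:", ball.GetPos().y)
```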


A physics-based sensor simulation environment for lunar ground operations

arXiv.org Artificial Intelligence

This contribution reports on a software framework that uses physically-based rendering to simulate camera operation in lunar conditions. The focus is on generating synthetic images qualitatively similar to those produced by an actual camera operating on a vehicle traversing and/or actively interacting with lunar terrain, e.g., for construction operations. The highlights of this simulator are its ability to capture (i) light transport in lunar conditions and (ii) artifacts related to the vehicle-terrain interaction, which might include dust formation and transport. The simulation infrastructure is built within an in-house developed physics engine called Chrono, which simulates the dynamics of the deformable terrain-vehicle interaction, as well as the fallout of this interaction. The Chrono::Sensor camera model draws on ray tracing and Hapke photometric functions. We analyze the performance of the simulator using two virtual experiments featuring digital twins of NASA's VIPER rover navigating a lunar environment, and of NASA's RASSOR excavator engaged in a digging operation. The sensor simulation solution presented can be used for the design and testing of perception algorithms, or as a component of in-silico experiments that pertain to large-scale lunar operations, e.g., traversability or construction tasks.
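
As a rough illustration of how such a camera is assembled in code, the sketch below attaches a Chrono::Sensor camera to a body through PyChrono's sensor module. It assumes a Chrono build with the sensor module enabled (it relies on ray tracing via OptiX), and the constructor arguments follow the 7.x/8.x demos; the Hapke-based lunar shading itself is configured inside Chrono::Sensor and is not reproduced here.

```python
# Hedged sketch of a Chrono::Sensor camera in PyChrono (requires a build
# with the sensor module; argument order follows the 7.x/8.x demos).
import pychrono as chrono
import pychrono.sensor as sens

system = chrono.ChSystemNSC()
rover = chrono.ChBodyEasyBox(1.0, 1.0, 2.0, 1000)   # stand-in for a rover chassis
system.Add(rover)

manager = sens.ChSensorManager(system)
offset = chrono.ChFrameD(chrono.ChVectorD(0, 1.5, 0), chrono.QUNIT)
cam = sens.ChCameraSensor(rover,   # body the camera is mounted on
                          30,      # update rate [Hz]
                          offset,  # mounting pose relative to the body
                          1280,    # image width [px]
                          720,     # image height [px]
                          1.408)   # horizontal field of view [rad]
cam.PushFilter(sens.ChFilterRGBA8Access())  # expose the rendered RGBA buffer
manager.AddSensor(cam)

t, dt = 0.0, 1e-3
while t < 1.0:
    manager.Update()           # renders whenever a camera update is due
    system.DoStepDynamics(dt)
    t += dt
```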


Using a Bayesian-Inference Approach to Calibrating Models for Simulation in Robotics

arXiv.org Artificial Intelligence

In robotics, simulation has the potential to reduce design time and costs, and to lead to a more robust engineered solution and a safer development process. However, the use of simulators is predicated on the availability of good models. This contribution is concerned with improving the quality of these models via calibration, which is cast herein in a Bayesian framework. First, we discuss the Bayesian machinery involved in model calibration. Then, we demonstrate it in one example: calibration of a vehicle dynamics model that has a low degree-of-freedom count and can be used for state estimation, model predictive control, or path planning. A high-fidelity simulator is used to emulate the "experiments" and generate the data for the calibration. The merit of this work is not tied to a new Bayesian methodology for calibration, but to the demonstration of how the Bayesian machinery can establish connections among models in computational dynamics, even when the data in use is noisy. The software used to generate the results reported herein is available in a public repository for unfettered use and distribution.
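
The gist of the Bayesian machinery can be conveyed with a toy example. The sketch below (illustrative only, not the paper's vehicle model) calibrates a single parameter of a stand-in simulator against noisy "experimental" data using a random-walk Metropolis sampler, yielding a posterior distribution rather than a point estimate.

```python
# Toy Bayesian calibration via random-walk Metropolis: infer a stand-in
# parameter theta of a simple "simulator" f(theta) from noisy observations.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, t):
    # toy low-DOF response; the paper uses a vehicle dynamics model instead
    return theta * np.sin(t)

t = np.linspace(0, 2 * np.pi, 50)
theta_true, sigma = 1.3, 0.1
y = simulator(theta_true, t) + rng.normal(0, sigma, t.size)  # emulated "experiment"

def log_post(theta):
    if theta <= 0:                                # prior support: theta > 0
        return -np.inf
    resid = y - simulator(theta, t)
    return -0.5 * np.sum(resid**2) / sigma**2     # Gaussian likelihood

theta, samples = 1.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05)            # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[1000:])                   # drop burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")
```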


Camera simulation for robot simulation: how important are various camera model components?

arXiv.org Artificial Intelligence

Modeling cameras for the simulation of autonomous robotics is critical for generating synthetic images with appropriate realism to effectively evaluate a perception algorithm in simulation. In many cases, though, simulated images are produced by traditional rendering techniques that exclude or superficially handle processing steps and aspects encountered in the actual camera pipeline. The purpose of this contribution is to quantify the degree to which the exclusion of various image-generation steps or aspects from the camera model affects the sim-to-real gap in robotics. We investigate what happens if one ignores aspects tied to processes from within the physical camera, e.g., lens distortion, noise, and signal processing; scene effects, e.g., lighting and reflection; and rendering quality. The results of the study demonstrate, quantitatively, that large-scale changes to color, scene, and location have far greater impact than model aspects concerned with local, feature-level artifacts. Moreover, we show that these scene-level aspects can stem from lens distortion and signal processing, particularly when considering white-balance and auto-exposure modeling.
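
For intuition, the sketch below shows the flavor of post-render processing steps such a camera model can include: gray-world white balance, a global auto-exposure gain, and additive sensor noise. The function names and noise parameters are illustrative, not the paper's pipeline.

```python
# Illustrative camera-pipeline effects applied to a rendered image `img`,
# an HxWx3 float array in [0, 1]. Uses numpy only.
import numpy as np

def white_balance(img):
    # gray-world assumption: scale each channel so its mean matches the global mean
    gains = img.mean() / img.mean(axis=(0, 1))
    return np.clip(img * gains, 0, 1)

def auto_expose(img, target=0.5):
    # single global gain driving mean intensity toward `target`
    return np.clip(img * (target / max(img.mean(), 1e-6)), 0, 1)

def add_sensor_noise(img, read_std=0.01, shot_scale=0.02, rng=None):
    # signal-dependent "shot" noise plus signal-independent read noise
    rng = rng or np.random.default_rng()
    shot = rng.normal(0, shot_scale * np.sqrt(np.maximum(img, 0)))
    read = rng.normal(0, read_std, img.shape)
    return np.clip(img + shot + read, 0, 1)

img = np.random.default_rng(0).uniform(0, 1, (4, 4, 3))  # stand-in render
out = add_sensor_noise(auto_expose(white_balance(img)))
```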


A performance contextualization approach to validating camera models for robot simulation

arXiv.org Artificial Intelligence

The focus of this contribution is on camera simulation as it comes into play in simulating autonomous robots for their virtual prototyping. We propose a camera model validation methodology based on the performance of a perception algorithm and the context in which the performance is measured. This approach is different from traditional validation of synthetic images, which is often done at a pixel or feature level and tends to require matching pairs of synthetic and real images. Due to the high cost and constraints of acquiring paired images, the proposed approach is based on datasets that are not necessarily paired. Within a real and a simulated dataset, A and B, respectively, we find subsets A_c and B_c of similar content and judge, statistically, the perception algorithm's response to these similar subsets. This validation approach yields a statistical measure of performance similarity, as well as a measure of similarity between the content of A and B. The methodology is demonstrated using images generated with Chrono::Sensor and a scaled autonomous vehicle, with an object detector serving as the perception algorithm. The results demonstrate the ability to quantify (i) differences between simulated and real data; (ii) the propensity of training methods to mitigate the sim-to-real gap; and (iii) the context overlap between two datasets.
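
A toy version of the subset-and-compare idea is sketched below: given per-image detector scores on content-matched subsets A_c and B_c, a two-sample test asks whether the perception algorithm responds differently to real and simulated data. The Kolmogorov-Smirnov test and the beta-distributed stand-in scores are illustrative choices, not the paper's statistics.

```python
# Toy subset-and-compare sketch: do per-image detector scores on
# content-matched real (A_c) and simulated (B_c) subsets differ?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# stand-ins for per-image average precision on content-matched subsets
scores_A_c = rng.beta(8.0, 2.0, 200)   # real images
scores_B_c = rng.beta(7.5, 2.0, 200)   # simulated images

stat, p = ks_2samp(scores_A_c, scores_B_c)
print(f"KS statistic {stat:.3f}, p-value {p:.3f}")
# a small p-value suggests the detector responds measurably
# differently to simulated vs. real content
```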