incidence angle


Noise Analysis and Modeling of the PMD Flexx2 Depth Camera for Robotic Applications

Cai, Yuke, Plozza, Davide, Marty, Steven, Joseph, Paul, Magno, Michele

arXiv.org Artificial Intelligence

Within this area, we fitted a plane to the data points for each set of conditions and then calculated the standard deviation of depth values over 300 frames. The characteristics of the measured axial noise are depicted in Figure 6.
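The plane-fit-and-residual procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: the `axial_noise` helper, the synthetic tilted plane, and the 2 mm noise level are all assumptions for demonstration.

```python
import numpy as np

def axial_noise(points: np.ndarray) -> float:
    """Estimate axial noise as the std of depth residuals to a best-fit plane.

    points: (N, 3) array of x, y, z samples from the depth camera.
    """
    # Fit z = a*x + b*y + c by linear least squares.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return float(residuals.std())

# Synthetic example: a tilted plane 1 m away plus 2 mm Gaussian depth noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(300, 2))
z = 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + 1.0 + rng.normal(0.0, 0.002, 300)
pts = np.c_[xy, z]
print(axial_noise(pts))  # close to the injected 2 mm noise level
```

Because the plane fit absorbs the target's tilt, the residual standard deviation isolates the sensor's axial noise from the scene geometry.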


Toward Physics-Aware Deep Learning Architectures for LiDAR Intensity Simulation

Anand, Vivek, Lohani, Bharat, Pandey, Gaurav, Mishra, Rakesh

arXiv.org Artificial Intelligence

Autonomous vehicles (AVs) heavily rely on LiDAR perception for environment understanding and navigation. LiDAR intensity provides valuable information about the reflected laser signals and plays a crucial role in enhancing the perception capabilities of AVs. However, accurately simulating LiDAR intensity remains a challenge due to the unavailability of material properties of the objects in the environment, and complex interactions between the laser beam and the environment. The proposed method aims to improve the accuracy of intensity simulation by incorporating physics-based modalities within the deep learning framework. One of the key entities that captures the interaction between the laser beam and the objects is the angle of incidence. In this work, we demonstrate that adding the LiDAR incidence angle as a separate input to the deep neural networks significantly enhances the results. We present a comparative study between two prominent deep learning architectures: U-NET, a Convolutional Neural Network (CNN), and Pix2Pix, a Generative Adversarial Network (GAN). We implemented these two architectures for the intensity prediction task and used the SemanticKITTI and VoxelScape datasets for experiments. The comparative analysis reveals that both architectures benefit from the incidence angle as an additional input. Moreover, the Pix2Pix architecture outperforms U-NET, especially when the incidence angle is incorporated.
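The incidence angle the abstract refers to is the angle between the incoming laser beam and the surface normal at the hit point. A minimal sketch of how it could be computed per return (the function name and orientation convention are assumptions, not the paper's implementation):

```python
import numpy as np

def incidence_angle(ray_dir: np.ndarray, normal: np.ndarray) -> float:
    """Angle (radians) between a LiDAR beam direction and the surface normal."""
    d = ray_dir / np.linalg.norm(ray_dir)
    n = normal / np.linalg.norm(normal)
    # abs() makes the result independent of the normal's orientation convention;
    # clip() guards arccos against floating-point overshoot.
    return float(np.arccos(np.clip(abs(d @ n), 0.0, 1.0)))

# A beam pointing straight down onto a horizontal surface: 0 rad incidence.
print(incidence_angle(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0])))
# A beam at 45 degrees to the same surface: pi/4 incidence.
print(incidence_angle(np.array([1.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0])))
```

Fed as an extra input channel alongside range data, this angle gives the network direct access to a quantity that otherwise must be inferred from geometry.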


Sea ice detection using concurrent multispectral and synthetic aperture radar imagery

Rogers, Martin S J, Fox, Maria, Fleming, Andrew, van Zeeland, Louisa, Wilkinson, Jeremy, Hosking, J. Scott

arXiv.org Artificial Intelligence

Synthetic Aperture Radar (SAR) imagery is the primary data type used for sea ice mapping due to its spatio-temporal coverage and the ability to detect sea ice independent of cloud and lighting conditions. Automatic sea ice detection using SAR imagery remains problematic due to the presence of ambiguous signal and noise within the image. Conversely, ice and water are easily distinguishable using multispectral imagery (MSI), but in the polar regions the ocean's surface is often occluded by cloud, or the sun may not appear above the horizon for many months. To address some of these limitations, this paper proposes a new tool trained using concurrent multispectral Visible and SAR imagery for sea Ice Detection (ViSual_IceD). ViSual_IceD is a convolutional neural network (CNN) that builds on the classic U-Net architecture by containing two parallel encoder stages, enabling the fusion and concatenation of MSI and SAR imagery with different spatial resolutions. The performance of ViSual_IceD is compared with U-Net models trained using concatenated MSI and SAR imagery, as well as models trained exclusively on MSI or SAR imagery. ViSual_IceD outperforms the other networks, with an F1 score 1.60 percentage points higher than the next best network, and results indicate that ViSual_IceD is selective in the image type it uses during image segmentation. Outputs from ViSual_IceD are compared to sea ice concentration products derived from the AMSR2 Passive Microwave (PMW) sensor. Results highlight how ViSual_IceD is a useful tool to use in conjunction with PMW data, particularly in coastal regions. As the spatio-temporal coverage of MSI and SAR imagery continues to increase, ViSual_IceD provides a new opportunity for robust, accurate sea ice coverage detection in polar regions.
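The fusion step of a dual-encoder design like the one described can be illustrated with a minimal sketch: the coarser SAR feature map is upsampled to the MSI resolution and the two are concatenated along the channel axis. The function name, shapes, and nearest-neighbour upsampling are assumptions for illustration, not ViSual_IceD's actual layers.

```python
import numpy as np

def fuse_features(msi_feat: np.ndarray, sar_feat: np.ndarray) -> np.ndarray:
    """Concatenate two encoder feature maps along channels.

    msi_feat: (C1, H, W); sar_feat: (C2, h, w) with H a multiple of h.
    The coarser SAR map is nearest-neighbour upsampled to (C2, H, W) first.
    """
    scale = msi_feat.shape[1] // sar_feat.shape[1]
    sar_up = sar_feat.repeat(scale, axis=1).repeat(scale, axis=2)
    return np.concatenate([msi_feat, sar_up], axis=0)

msi = np.zeros((16, 64, 64))   # hypothetical MSI encoder output
sar = np.zeros((16, 32, 32))   # hypothetical SAR encoder output, coarser grid
print(fuse_features(msi, sar).shape)  # (32, 64, 64)
```

Keeping separate encoders until this point lets each branch learn modality-specific features at its native resolution before the decoder sees the combined representation.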


Physical LiDAR Simulation in Real-Time Engine

Jansen, Wouter, Huebel, Nico, Steckel, Jan

arXiv.org Artificial Intelligence

Designing and validating sensor applications and algorithms in simulation is an important step in the modern development process. Furthermore, modern open-source multi-sensor simulation frameworks are moving towards the usage of video-game engines such as the Unreal Engine. Simulation of a sensor such as a LiDAR can prove to be difficult in such real-time software. In this paper we present a GPU-accelerated simulation of LiDAR based on its physical properties and interaction with the environment. We generate the depth and intensity data based on the properties of the sensor, as well as the surface material and the incidence angle at which the light beams hit the surface. It is validated against a real LiDAR sensor and shown to be accurate and precise, although highly dependent on the spectral data used for the material properties.
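A common physically based starting point for the intensity term described here is a Lambertian return model, where the received power falls with the cosine of the incidence angle and the square of the range. This sketch is an assumption about the general form of such a model, not the paper's implementation; the function name and default emitted power are hypothetical.

```python
import math

def lidar_intensity(reflectivity: float, incidence_deg: float,
                    distance_m: float, p0: float = 1.0) -> float:
    """Lambertian-style return intensity.

    Falls with cos(incidence angle) and with 1/r^2 range attenuation;
    reflectivity stands in for the material's spectral response.
    """
    return p0 * reflectivity * math.cos(math.radians(incidence_deg)) / distance_m**2

# Same target at 10 m: a grazing hit returns half the energy of a 60-degree tilt.
print(lidar_intensity(0.8, 0.0, 10.0))   # 0.008 at normal incidence
print(lidar_intensity(0.8, 60.0, 10.0))  # 0.004 at 60 degrees
```

This also makes concrete why the abstract flags the spectral data: the `reflectivity` factor dominates the output, so errors in the material tables propagate directly into the simulated intensity.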


SEN12TS -- Largest land cover classification dataset?!

#artificialintelligence

Land cover classification (or semantic segmentation, in the computer vision context) is one of the most important applications of machine / deep learning models in remote sensing image analysis. There are numerous benchmark datasets with different features, designed and published for the LULC classification task. Although radar-derived and optical imagery are widely available at similar timescales and spatial resolutions, some issues make their combined processing more complicated: coregistration between satellite missions; processing of SAR imagery to correct for ground geometry and incidence angle; and, most importantly, the lack of reliable labeled ground-truth pixels appropriate for research purposes. Here, I'm going to introduce SEN12TS: a very large satellite image dataset (1.69 TB in storage!), designed by University of Colombia and Descartes Lab specifically for land cover classification.


Physics-informed neural network for ultrasound nondestructive quantification of surface breaking cracks

Shukla, Khemraj, Di Leoni, Patricio Clark, Blackshire, James, Sparkman, Daniel, Karniadakis, George Em

arXiv.org Machine Learning

We introduce an optimized physics-informed neural network (PINN) trained to solve the problem of identifying and characterizing a surface breaking crack in a metal plate. PINNs are neural networks that can combine data and physics in the learning process by adding the residuals of a system of Partial Differential Equations to the loss function. Our PINN is supervised with realistic ultrasonic surface acoustic wave data acquired at a frequency of 5 MHz. The ultrasonic surface wave data is represented as a surface deformation on the top surface of a metal plate, measured using the method of laser vibrometry. The PINN is physically informed by the acoustic wave equation, and its convergence is sped up using adaptive activation functions. An adaptive activation function uses a scalable hyperparameter in the activation function, which is optimized to achieve the best performance of the network as it dynamically changes the topology of the loss function involved in the optimization process. The usage of adaptive activation functions significantly improves the convergence, notably observed in the current study. We use PINNs to estimate the speed of sound of the metal plate, which we do with an error of 1%, and then, by allowing the speed of sound to be space dependent, we identify and characterize the crack as the positions where the speed of sound has decreased. Our study also shows the effect of sub-sampling of the data on the sensitivity of sound speed estimates. More broadly, the resulting model shows a promising deep neural network model for ill-posed inverse problems.
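The adaptive activation idea described above is typically realized by inserting a trainable scale `a` (with a fixed scaling factor `n`) inside the nonlinearity, e.g. tanh(n·a·x). The sketch below shows only the forward form of such an activation; the function name, the choice of tanh, and n = 10 are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def adaptive_tanh(x: np.ndarray, a: float, n: float = 10.0) -> np.ndarray:
    """Adaptive activation tanh(n * a * x).

    `a` is trained jointly with the network weights; scaling the pre-activation
    reshapes the loss landscape, which is what accelerates convergence.
    """
    return np.tanh(n * a * x)

x = np.linspace(-1.0, 1.0, 5)
# With n * a == 1 the activation reduces to the standard tanh.
print(adaptive_tanh(x, a=0.1))
```

During training, the optimizer adjusts `a` alongside the weights, effectively learning how steep each activation should be instead of fixing it in advance.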