 landslide


A Proofs

Neural Information Processing Systems

D.2 Countries

Hyperparameters are summarized in Table 6. We ran all experiments on a single CPU (Apple M2).

Table 5: Hyperparameters for the MNIST-addition experiments.
  optimizer: AdamW
  learning rate: 0.0003
  learning rate schedule: cosine
  training epochs: 100
  weight decay: 0.00001
  batch size: 4
  embedding dimensions: 10
  embedding initialization: one-hot, fixed
  neural networks: LeNet5
  max search depth: /






Using AI to speed up landslide detection

AIHub

On 3 April 2024, a magnitude 7.4 quake--Taiwan's strongest in 25 years--shook the country's eastern coast. Stringent building codes spared most structures, but mountainous and remote villages were devastated by landslides. When disasters affect large and inaccessible areas, responders often turn to satellite images to pinpoint affected areas and prioritise relief efforts. But mapping landslides from satellite imagery by eye can be time-intensive, said Lorenzo Nava, who is jointly based at Cambridge's Departments of Earth Sciences and Geography. "In the aftermath of a disaster, time really matters," he said.


Multi-class Seismic Building Damage Assessment from InSAR Imagery using Quadratic Variational Causal Bayesian Inference

Li, Xuechun, Xu, Susu

arXiv.org Artificial Intelligence

Interferometric Synthetic Aperture Radar (InSAR) technology uses satellite radar to detect surface deformation patterns and monitor earthquake impacts on buildings. While vital for emergency response planning, extracting multi-class building damage classifications from InSAR data faces challenges: overlapping damage signatures with environmental noise, computational complexity in multi-class scenarios, and the need for rapid regional-scale processing. Our novel multi-class variational causal Bayesian inference framework with quadratic variational bounds provides rigorous approximations while ensuring efficiency. By integrating InSAR observations with USGS ground failure models and building fragility functions, our approach separates building damage signals while maintaining computational efficiency through strategic pruning. Evaluation across five major earthquakes (Haiti 2021, Puerto Rico 2020, Zagreb 2020, Italy 2016, Ridgecrest 2019) shows improved damage classification accuracy (AUC: 0.94-0.96), achieving up to 35.7% improvement over existing methods. Our approach maintains high accuracy (AUC > 0.93) across all damage categories while reducing computational overhead by over 40% without requiring extensive ground truth data.
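The core fusion idea, stripped of the paper's variational machinery, is Bayesian: a prior over damage classes from ground failure and fragility models is updated with the likelihood of the observed InSAR deformation. The following is a toy discrete sketch with assumed probabilities; it does not implement the paper's quadratic variational bounds.

```python
import numpy as np

# Toy discrete Bayesian fusion over building damage classes.
# All numbers below are hypothetical illustrations, not values
# from the paper or from USGS products.
classes = ["none", "minor", "moderate", "severe"]

# Prior from a ground-failure / fragility model (assumed values).
prior = np.array([0.55, 0.25, 0.15, 0.05])

# Likelihood of the observed InSAR deformation under each class
# (assumed values; in practice derived from a sensing model).
likelihood = np.array([0.05, 0.15, 0.45, 0.35])

# Posterior ∝ prior × likelihood, renormalized over classes.
posterior = prior * likelihood
posterior /= posterior.sum()
best = classes[int(np.argmax(posterior))]  # most probable damage class
```

The paper's contribution is making this style of inference tractable for many classes and whole regions at once; the sketch only shows the single-building, fully enumerated case.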


Deep Self-Supervised Disturbance Mapping with the OPERA Sentinel-1 Radiometric Terrain Corrected SAR Backscatter Product

Hardiman-Mostow, Harris, Marshak, Charles, Handwerger, Alexander L.

arXiv.org Artificial Intelligence

Mapping land surface disturbances supports disaster response, resource and ecosystem management, and climate adaptation efforts. Synthetic aperture radar (SAR) is an invaluable tool for disturbance mapping, providing consistent time-series images of the ground regardless of weather or illumination conditions. Despite SAR's potential for disturbance mapping, processing SAR data to an analysis-ready format requires expertise and significant compute resources, particularly for large-scale global analysis. In October 2023, NASA's Observational Products for End-Users from Remote Sensing Analysis (OPERA) project released the near-global Radiometric Terrain Corrected SAR backscatter from Sentinel-1 (RTC-S1) dataset, providing publicly available, analysis-ready SAR imagery. In this work, we utilize this new dataset to systematically analyze land surface disturbances. As labeling SAR data is often prohibitively time-consuming, we train a self-supervised vision transformer - which requires no labels to train - on OPERA RTC-S1 data to estimate a per-pixel distribution from the set of baseline imagery and assess disturbances when there is significant deviation from the modeled distribution. To test our model's capability and generality, we evaluate three different natural disasters - which represent high-intensity, abrupt disturbances - from three different regions of the world. Across events, our approach yields high quality delineations: F1 scores exceeding 0.6 and Areas Under the Precision-Recall Curve exceeding 0.65, consistently outperforming existing SAR disturbance methods. Our findings suggest that a self-supervised vision transformer is well-suited for global disturbance mapping and can be a valuable tool for operational, near-global disturbance monitoring, particularly when labeled data does not exist.
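The detection principle described above, estimating a per-pixel baseline distribution and flagging significant deviations, can be illustrated with a much simpler stand-in for the transformer: a per-pixel Gaussian fit over the baseline stack and a z-score threshold. This is only a sketch of the idea, not the paper's model.

```python
import numpy as np

def disturbance_map(baseline_stack, new_image, z_thresh=3.0):
    """Flag pixels whose new backscatter deviates strongly from the
    per-pixel baseline distribution. Here the distribution is a
    simple Gaussian fit; the paper learns it with a self-supervised
    vision transformer instead."""
    mu = baseline_stack.mean(axis=0)           # per-pixel temporal mean
    sigma = baseline_stack.std(axis=0) + 1e-6  # per-pixel temporal std
    z = np.abs(new_image - mu) / sigma         # standardized deviation
    return z > z_thresh                        # boolean disturbance mask

# Toy example: 12 baseline scenes of a 4x4 patch, then a scene where
# one pixel's backscatter drops sharply (e.g. abrupt vegetation loss).
rng = np.random.default_rng(0)
baseline = rng.normal(-12.0, 0.5, size=(12, 4, 4))  # dB-like values
scene = baseline.mean(axis=0).copy()
scene[2, 2] -= 6.0                                  # inject a disturbance
mask = disturbance_map(baseline, scene)
```

The transformer replaces the Gaussian with a learned distribution that can capture seasonal and textural structure, but the deviation-thresholding logic is the same.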


First observations of the seiche that shook the world

Monahan, Thomas, Tang, Tianning, Roberts, Stephen, Adcock, Thomas A. A.

arXiv.org Artificial Intelligence

Extreme events are evolving as a direct consequence of climate change, leading to the emergence of new, previously unobserved phenomena [1, 2]. In remote regions like the Arctic, where in-situ measurements are sparse, scientists must increasingly depend on analytical and numerical models to explore these events. However, modeling in such regions presents significant challenges due to the uncertainties in the data required to calibrate and validate these models [3]. Consequently, large simplifications are often necessary, resulting in substantial discrepancies between observed and modeled phenomena. The mysterious 10.88 mHz very-long-period (VLP) seismic signal, which appeared following a tsunamigenic landslide in the Dickson Fjord, Greenland, on September 16th, 2023, and the subsequent interdisciplinary scientific efforts to determine its origin, underscore these challenges. Two independent studies [4, 5] have hypothesized that the signal was driven by a standing wave, or seiche, which formed in the aftermath of the tsunami. While it is well-documented that seiches can form in resonant enclosed and semi-enclosed basins [6], the loading-induced tilt they produce has only been observed locally (< 30 km) and for short durations (< 1 hour)[5, 7]. Moreover, no prior evidence exists of persistent fluid sloshing (lasting several days) without an external driver.
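For readers unfamiliar with seiches: the fundamental period of a closed rectangular basin is classically estimated with Merian's formula, T = 2L / sqrt(g h). The sketch below uses purely illustrative basin dimensions, not measurements of Dickson Fjord, and makes no claim of reproducing the 10.88 mHz signal.

```python
import math

def merian_period(length_m, depth_m, g=9.81):
    """Fundamental seiche period of a closed rectangular basin,
    Merian's formula: T = 2L / sqrt(g * h)."""
    return 2.0 * length_m / math.sqrt(g * depth_m)

# Hypothetical basin: 2.7 km long, 500 m deep (illustrative values).
T = merian_period(length_m=2700.0, depth_m=500.0)  # period in seconds
f_mhz = 1000.0 / T                                 # frequency in millihertz
```

Real fjords depart from this idealization (irregular geometry, open mouth, stratification), which is one reason numerical models and observations are both needed to pin down the signal's origin.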


Towards physics-informed neural networks for landslide prediction

Dahal, Ashok, Lombardo, Luigi

arXiv.org Artificial Intelligence

For decades, solutions to regional-scale landslide prediction have mostly relied on data-driven models, which are, by definition, disconnected from the physics of the failure mechanism. The success and spread of such tools came from their ability to exploit proxy variables rather than explicit geotechnical ones, as the latter are prohibitive to acquire over broad landscapes. Our work implements a Physics-Informed Neural Network (PINN) approach, adding to a standard data-driven architecture an intermediate constraint to solve for the permanent deformation typical of Newmark slope stability methods. This translates into a neural network tasked with explicitly retrieving geotechnical parameters from common proxy variables and then minimizing a loss function with respect to the available coseismic landslide inventory. The results are very promising: our model not only produces excellent predictive performance in the form of standard susceptibility output, but in the process also generates maps of the expected geotechnical properties at a regional scale. Such an architecture is therefore framed to tackle coseismic landslide prediction, something that, if confirmed in other studies, could open the way to PINN-based near-real-time predictions.
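The Newmark quantities that such a physics constraint builds on can be sketched in a few lines. The soil parameters below are hypothetical values for a single hillslope cell (in the paper these are what the network retrieves from proxies), and the factor-of-safety formula is the simplified dry infinite-slope case.

```python
import math

def infinite_slope_fs(c_kpa, phi_deg, gamma_kn_m3, thickness_m, slope_deg):
    """Static factor of safety for a dry infinite slope (simplified,
    no pore pressure): FS = c / (gamma * t * sin(th) * cos(th)) + tan(phi) / tan(th)."""
    th = math.radians(slope_deg)
    cohesion_term = c_kpa / (gamma_kn_m3 * thickness_m * math.sin(th) * math.cos(th))
    friction_term = math.tan(math.radians(phi_deg)) / math.tan(th)
    return cohesion_term + friction_term

def critical_acceleration(fs, slope_deg, g=9.81):
    """Newmark critical acceleration a_c = (FS - 1) * g * sin(slope):
    the ground acceleration above which the slope block starts to slide."""
    return (fs - 1.0) * g * math.sin(math.radians(slope_deg))

# Hypothetical geotechnical values for one cell.
fs = infinite_slope_fs(c_kpa=10.0, phi_deg=30.0, gamma_kn_m3=20.0,
                       thickness_m=2.0, slope_deg=35.0)
a_c = critical_acceleration(fs, slope_deg=35.0)  # m/s^2
```

In a Newmark analysis, permanent displacement accumulates whenever shaking exceeds a_c; a PINN can penalize predictions that are inconsistent with that accumulated displacement, which is the intermediate constraint the abstract describes.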