WildfireGenome: Interpretable Machine Learning Reveals Local Drivers of Wildfire Risk and Their Cross-County Variation

Liu, Chenyue, Mostafavi, Ali

arXiv.org Artificial Intelligence

Current wildfire risk assessments rely on coarse hazard maps and opaque machine learning models that optimize regional accuracy while sacrificing interpretability at the decision scale. WildfireGenome addresses these gaps through three components: (1) fusion of seven federal wildfire indicators into a sign-aligned, PCA-based composite risk label at H3 Level-8 resolution; (2) Random Forest classification of local wildfire risk; and (3) SHAP and ICE/PDP analyses to expose county-specific nonlinear driver relationships. Across seven ecologically diverse U.S. counties, models achieve accuracies of 0.755-0.878 and Quadratic Weighted Kappa up to 0.951, with principal components explaining 87-94% of indicator variance. Transfer tests show reliable performance between ecologically similar regions, but performance collapses across dissimilar contexts. Explanations consistently highlight needleleaf forest cover and elevation as dominant drivers, with risk rising sharply at 30-40% needleleaf coverage. WildfireGenome advances wildfire risk assessment from regional prediction to interpretable, decision-scale analytics that guide vegetation management, zoning, and infrastructure planning.
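The labeling-and-classification recipe above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: synthetic data stands in for the seven federal indicators, the sign-alignment rule and the quartile binning into four risk classes are assumptions for the sketch.

```python
# Sketch: fuse several hazard indicators into a sign-aligned PCA composite,
# bin it into ordinal risk classes, and fit a Random Forest classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))          # 7 synthetic wildfire indicators per cell
X[:, 1] += 0.8 * X[:, 0]               # induce a shared risk signal

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=1).fit(Z)
pc1 = pca.transform(Z)[:, 0]

# Sign alignment: flip PC1 so it increases with the mean indicator loading,
# making "higher score = higher risk" regardless of PCA's arbitrary sign.
if pca.components_[0].mean() < 0:
    pc1 = -pc1

# Bin the composite into 4 ordinal risk classes and fit the classifier.
labels = np.digitize(pc1, np.quantile(pc1, [0.25, 0.5, 0.75]))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Z, labels)
acc = clf.score(Z, labels)
```

In the paper the fitted forest is then interrogated with SHAP and ICE/PDP to expose per-county driver relationships; those tools plug directly into a classifier like the one above.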


Latent space analysis and generalization to out-of-distribution data

Rainey, Katie, Hausmann, Erin, Waagen, Donald, Gray, David, Hulsey, Donald

arXiv.org Machine Learning

Understanding the relationships between data points in the latent decision space derived by a deep learning system is critical to evaluating and interpreting the performance of the system on real-world data. Detecting \textit{out-of-distribution} (OOD) data for deep learning systems continues to be an active research topic. We investigate the connection between latent space OOD detection and classification accuracy of the model. Using open source simulated and measured Synthetic Aperture Radar (SAR) datasets, we empirically demonstrate that OOD detection cannot be used as a proxy measure for model performance. We hope to inspire additional research into the geometric properties of the latent space that may yield future insights into deep learning robustness and generalizability.
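A common latent-space OOD score, sketched below, is the Mahalanobis distance to the in-distribution feature distribution; the paper's point is that such scores need not track classification accuracy. The synthetic Gaussian features here stand in for SAR network embeddings and are an assumption of the sketch, not the authors' setup.

```python
# Sketch: Mahalanobis-distance OOD scoring on latent features.
import numpy as np

rng = np.random.default_rng(1)
in_feats = rng.normal(0.0, 1.0, size=(1000, 16))   # in-distribution latents
ood_feats = rng.normal(3.0, 1.0, size=(200, 16))   # mean-shifted "OOD" latents

mu = in_feats.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(in_feats, rowvar=False))

def mahalanobis(x):
    """Distance of each row of x to the in-distribution Gaussian fit."""
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, inv_cov, d))

in_scores = mahalanobis(in_feats)
ood_scores = mahalanobis(ood_feats)
```

Thresholding such scores flags OOD inputs, but nothing in the construction ties the score to whether the classifier head is actually accurate on those inputs, which is the gap the paper probes empirically.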



Systematic Evaluation of Time-Frequency Features for Binaural Sound Source Localization

Panah, Davoud Shariat, Ragano, Alessandro, Barry, Dan, Skoglund, Jan, Hines, Andrew

arXiv.org Artificial Intelligence

This study presents a systematic evaluation of time-frequency feature design for binaural sound source localization (SSL), focusing on how feature selection influences model performance across diverse conditions. We investigate the performance of a convolutional neural network (CNN) model using various combinations of amplitude-based features (magnitude spectrogram, interaural level difference - ILD) and phase-based features (phase spectrogram, interaural phase difference - IPD). Evaluations on in-domain and out-of-domain data with mismatched head-related transfer functions (HRTFs) reveal that carefully chosen feature combinations often outperform increases in model complexity. While two-feature sets such as ILD + IPD are sufficient for in-domain SSL, generalization to diverse content requires richer inputs combining channel spectrograms with both ILD and IPD. Using the optimal feature sets, our low-complexity CNN model achieves competitive performance. Our findings underscore the importance of feature design in binaural SSL and provide practical guidance for both domain-specific and general-purpose localization.
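The four feature families compared above all derive from a two-channel STFT. The sketch below computes magnitude spectrograms, ILD, and IPD with a plain Hann-window STFT; the exact windowing and normalization used in the paper may differ, and the delayed/attenuated right channel is a toy stand-in for a real binaural recording.

```python
# Sketch: ILD and IPD features from a two-channel signal.
import numpy as np

def stft(x, n_fft=256, hop=128):
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)   # (frames, bins)

rng = np.random.default_rng(2)
left = rng.normal(size=4000)
right = 0.5 * np.roll(left, 4)        # attenuated, delayed copy of left

L, R = stft(left), stft(right)
mag_l, mag_r = np.abs(L), np.abs(R)   # magnitude spectrograms
eps = 1e-8
ild = 20 * np.log10((mag_l + eps) / (mag_r + eps))   # level difference, dB
ipd = np.angle(L * np.conj(R))                       # phase difference, rad
```

Stacking subsets of {mag_l, mag_r, ild, ipd} as CNN input channels is what the feature-combination study varies.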


ForeSWE: Forecasting Snow-Water Equivalent with an Uncertainty-Aware Attention Model

Thapa, Krishu K, Savalkar, Supriya, Singh, Bhupinderjeet, Hoang, Trong Nghia, Rajagopalan, Kirti, Kalyanaraman, Ananth

arXiv.org Artificial Intelligence

Various complex water management decisions are made in snow-dominant watersheds with the knowledge of Snow-Water Equivalent (SWE) -- a key measure widely used to estimate the water content of a snowpack. However, forecasting SWE is challenging because SWE is influenced by various factors including topography and an array of environmental conditions, and has therefore been observed to be spatio-temporally variable. Classical approaches to SWE forecasting have not adequately utilized these spatial/temporal correlations, nor do they provide uncertainty estimates -- which can be of significant value to the decision maker. In this paper, we present ForeSWE, a new probabilistic spatio-temporal forecasting model that integrates deep learning and classical probabilistic techniques. The resulting model features a combination of an attention mechanism to integrate spatiotemporal features and interactions, alongside a Gaussian process module that provides principled quantification of prediction uncertainty. We evaluate the model on data from 512 Snow Telemetry (SNOTEL) stations in the Western US. The results show significant improvements in both forecasting accuracy and prediction-interval quality compared to existing approaches, and highlight how the efficacy of uncertainty estimates differs across approaches. Collectively, these findings provide a platform for deployment and feedback by the water management community.
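The uncertainty-quantification idea, a Gaussian process head that yields a predictive mean and standard deviation from which prediction intervals follow, can be illustrated in isolation. This is a minimal sketch on synthetic 1-D data, not the ForeSWE architecture (which couples the GP with an attention encoder over spatio-temporal features).

```python
# Sketch: GP regression producing 95% prediction intervals.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)   # noisy toy "SWE" signal

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01), normalize_y=True)
gp.fit(X, y)

X_test = np.linspace(0, 10, 50)[:, None]
mean, std = gp.predict(X_test, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% interval
```

Interval quality is then judged by coverage (how often the truth falls inside) against width, which is the kind of comparison the evaluation above reports.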


Towards Proprioceptive Terrain Mapping with Quadruped Robots for Exploration in Planetary Permanently Shadowed Regions

Sanchez-Delgado, Alberto, Soares, João Carlos Virgolino, Barasuol, Victor, Semini, Claudio

arXiv.org Artificial Intelligence

Permanently Shadowed Regions (PSRs) near the lunar poles are of interest for future exploration due to their potential to contain water ice and preserve geological records. Their complex, uneven terrain favors the use of legged robots, which can traverse challenging surfaces while collecting in-situ data, and have proven effective in Earth analogs, including dark caves, when equipped with onboard lighting. While exteroceptive sensors like cameras and lidars can capture terrain geometry and even semantic information, they cannot quantify its physical interaction with the robot -- a capability provided by proprioceptive sensing. We propose a terrain mapping framework for quadruped robots, which estimates elevation, foot slippage, energy cost, and stability margins from internal sensing during locomotion. These metrics are incrementally integrated into a multi-layer 2.5D gridmap that reflects terrain interaction from the robot's perspective. The system is evaluated in a simulator that mimics a lunar environment, using the 21 kg quadruped robot Aliengo, showing consistent mapping performance under lunar gravity and terrain conditions.
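The multi-layer 2.5D gridmap can be pictured as several 2-D arrays over the same cells, updated incrementally as the robot reports per-footstep measurements. The sketch below is a hedged illustration: the layer names mirror the metrics in the abstract, but the running-mean update and the class interface are assumptions, not the framework's actual data structures.

```python
# Sketch: multi-layer 2.5D gridmap with incremental running-mean updates.
import numpy as np

LAYERS = ("elevation", "slippage", "energy_cost", "stability")

class GridMap25D:
    def __init__(self, size=20, resolution=0.1):
        self.resolution = resolution                      # meters per cell
        self.maps = {k: np.full((size, size), np.nan) for k in LAYERS}
        self.counts = np.zeros((size, size), dtype=int)   # updates per cell

    def update(self, x, y, **measurements):
        """Fold one footstep's measurements into the cell containing (x, y)."""
        i, j = int(x / self.resolution), int(y / self.resolution)
        n = self.counts[i, j]
        for k, v in measurements.items():
            old = self.maps[k][i, j]
            self.maps[k][i, j] = v if n == 0 else old + (v - old) / (n + 1)
        self.counts[i, j] = n + 1

gm = GridMap25D()
gm.update(0.05, 0.05, elevation=0.2, slippage=0.1)
gm.update(0.05, 0.05, elevation=0.4, slippage=0.3)   # same cell, averaged
```

Cells never stepped on stay NaN, which cleanly distinguishes "unvisited" from "measured zero" when the map is later consumed by a planner.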


FLEX: A Large-scale Multimodal, Multiview Dataset for Learning Structured Representations for Fitness Action Quality Assessment

Yin, Hao, Gu, Lijun, Parmar, Paritosh, Xu, Lin, Guo, Tianxiao, Fu, Weiwei, Zhang, Yang, Zheng, Tianyou

arXiv.org Artificial Intelligence

Action Quality Assessment (AQA) -- the task of quantifying how well an action is performed -- has great potential for detecting errors in gym weight training, where accurate feedback is critical to prevent injuries and maximize gains. Existing AQA datasets, however, are limited to single-view competitive sports and RGB video, lacking multimodal signals and professional assessment of fitness actions. We introduce FLEX, the first large-scale, multimodal, multiview dataset for fitness AQA that incorporates surface electromyography (sEMG). FLEX contains over 7,500 multiview recordings of 20 weight-loaded exercises performed by 38 subjects of diverse skill levels, with synchronized RGB video, 3D pose, sEMG, and physiological signals. Expert annotations are organized into a Fitness Knowledge Graph (FKG) linking actions, key steps, error types, and feedback, supporting a compositional scoring function for interpretable quality assessment. FLEX enables multimodal fusion, cross-modal prediction -- including the novel Video$\rightarrow$EMG task -- and biomechanically oriented representation learning. Building on the FKG, we further introduce FLEX-VideoQA, a structured question-answering benchmark with hierarchical queries that drive cross-modal reasoning in vision-language models. Baseline experiments demonstrate that multimodal inputs, multiview video, and fine-grained annotations significantly enhance AQA performance. FLEX thus advances AQA toward richer multimodal settings and provides a foundation for AI-powered fitness assessment and coaching. Dataset and code are available at \href{https://github.com/HaoYin116/FLEX}{https://github.com/HaoYin116/FLEX}. Link to Project \href{https://haoyin116.github.io/FLEX_Dataset}{page}.
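The compositional scoring idea, aggregating per-key-step scores and subtracting penalties for detected error types linked in the knowledge graph, can be sketched as below. The step names, error labels, and penalty weights are invented for illustration; the FKG's actual scoring function is not specified in the abstract.

```python
# Sketch: compositional action-quality score from step scores and errors.
def composite_score(step_scores, errors, error_penalty=None):
    """step_scores: {key step: score in [0, 1]}; errors: detected error labels."""
    error_penalty = error_penalty or {"knee_cave": 0.2, "back_round": 0.3}
    base = sum(step_scores.values()) / len(step_scores)   # mean step quality
    penalty = sum(error_penalty.get(e, 0.1) for e in errors)
    return max(0.0, base - penalty)

squat = composite_score(
    {"setup": 0.9, "descent": 0.8, "ascent": 0.7},
    errors=["knee_cave"],
)
```

Because each term maps to a named step or error node, the final number stays interpretable: a coach can see which step or error moved the score.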