OmniField: Conditioned Neural Fields for Robust Multimodal Spatiotemporal Learning
Kevin Valencia, Thilina Balasooriya, Xihaier Luo, Shinjae Yoo, David Keetae Park
–arXiv.org Artificial Intelligence
Multimodal spatiotemporal learning on real-world experimental data is constrained by two challenges: (i) within-modality measurements are sparse, irregular, and noisy (QA/QC artifacts), yet cross-modally correlated; and (ii) the set of available modalities varies across space and time, shrinking the usable record unless models can adapt to arbitrary modality subsets at train and test time. We propose OmniField, a continuity-aware framework that learns a continuous neural field conditioned on the available modalities and iteratively fuses cross-modal context. A multimodal crosstalk block, paired with iterative cross-modal refinement, aligns signals before the decoder, enabling unified reconstruction, interpolation, forecasting, and cross-modal prediction without gridding or surrogate preprocessing. Extensive evaluations show that OmniField consistently outperforms eight strong multimodal spatiotemporal baselines. Under heavy simulated sensor noise, performance remains close to clean-input levels, highlighting robustness to corrupted measurements.
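The conditioning idea in the abstract can be illustrated with a toy sketch (this is not the paper's architecture; all names, shapes, and the masked-mean aggregation are illustrative assumptions): a coordinate MLP whose input is concatenated with a masked aggregate of whatever modality features happen to be observed, so missing modalities are simply ignored rather than imputed or gridded.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    # He-style random initialization for a small MLP
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    # Plain tanh MLP; the last layer is linear
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

def conditioned_field(params, coords, modal_feats, modal_mask):
    """Evaluate a toy neural field f(coords | available modalities).

    coords:      (N, 3) spatiotemporal query points (x, y, t)
    modal_feats: (N, M, D) per-modality context features
    modal_mask:  (N, M) 1.0 where a modality is observed, 0.0 where missing
    """
    # Masked mean over available modalities -> one conditioning vector per query;
    # masked-out modalities contribute nothing to the field's output.
    w = modal_mask[..., None]
    ctx = (modal_feats * w).sum(axis=1) / np.clip(w.sum(axis=1), 1e-8, None)
    return mlp(params, np.concatenate([coords, ctx], axis=-1))

# Tiny demo: 4 queries, 3 modalities, 8-dim features, scalar field output
N, M, D = 4, 3, 8
params = mlp_params([3 + D, 32, 1], rng)
coords = rng.standard_normal((N, 3))
feats = rng.standard_normal((N, M, D))
mask = rng.integers(0, 2, (N, M)).astype(float)
out = conditioned_field(params, coords, feats, mask)
print(out.shape)  # (4, 1)
```

Because the conditioning vector is a masked aggregate, the same field can be queried under any modality subset at train or test time, which mirrors the adaptability requirement the abstract describes.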
Nov-5-2025
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology
- Artificial Intelligence
- Machine Learning (1.00)
- Representation & Reasoning (0.93)
- Vision (0.67)
- Data Science (1.00)